CFP 📔 CSCT 2022 📚 Proceedings in SCOPUS indexed Springer book series ‘Smart Innovation, Systems and Technologies’


Greetings from SAU Center for Research and Innovative Learning (SCRIL), South Asian University, India.

We are pleased to inform you that the SAU Center for Research and Innovative Learning (SCRIL), South Asian University, India is organizing the Congress on Smart Computing Technologies (CSCT 2022). The details of CSCT 2022 are as follows:

Title of the conference:  Congress on Smart Computing Technologies (CSCT 2022)



After Conference Proceedings:  Springer Book Series ‘Smart Innovation, Systems and Technologies’

Indexing of the Proceedings:  Indexed by SCOPUS, EI Compendex, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and Technology Agency (JST), SCImago, DBLP.

Date of the conference:  December 10-11, 2022
Mode of the conference:  In-person and online (hybrid mode)

Venue: SCRIL, South Asian University, New Delhi, India, and online.

Topics include (but are not limited to):  Neural Networks, Machine Learning, Evolutionary Algorithms, Information and Network Security, Bioinformatics and Biomedical Systems, Swarm Intelligence, Intelligent Agents and Robotics, Data Mining and Visualization, Intelligent Systems and Control, Computational Methods for Mathematical Problems, Optimization Algorithms


Important Dates: 
             Last date of full paper submission: October 05, 2022 
             Notification of acceptance: November 10, 2022
             Registration and final paper submission of accepted Paper: November 20, 2022

Contact us at csct.scril@gmail.com if you have any queries.

With Best Regards, 
Jagdish Chand Bansal, South Asian University, Delhi, India
Antorweep Chakravorty, University of Stavanger, Norway
Harish Sharma, Rajasthan Technical University, Kota, India
(General Chairs, CSCT 2022)

All Things Attention: Bridging Different Perspectives on Attention

On behalf of the co-organisers, we would like to invite you to submit your work to our NeurIPS workshop on “All Things Attention: Bridging Different Perspectives on Attention”. The details of the workshop and submission instructions are as follows:

All Things Attention: Bridging Different Perspectives on Attention

The Thirty Sixth Conference on Neural Information Processing Systems (NeurIPS)

Dec 2, 2022

NeurIPS 2022 is a hybrid conference.

https://attention-learning-workshop.github.io/

The All Things Attention workshop aims to foster connections across disparate academic communities that conceptualize “Attention” such as Neuroscience, Psychology, Machine Learning, and Human Computer Interaction. Workshop topics of interest include (but are not limited to):

  1. Relationships between biological and artificial attention

    1. What are the connections between different forms of attention in the human brain and present deep neural network architectures? 

    2. Can the anatomy of human attention models provide usable insights to researchers designing architectures for artificial systems? 

    3. Given the same task and learning objective, do machines learn attention mechanisms that are different from humans? 

  2. Attention for reinforcement learning and decision making

    1. How have reinforcement learning agents leveraged attention in decision making?

    2. Do decision-making agents today have implicit or explicit formalisms of attention?

    3. How can AI agents develop notions of attention without them being explicitly baked in?

    4. Can attention significantly enable AI agents to scale, e.g. through gains in sample efficiency and generalization?

    5. How should learning systems reason about computational attention (which parts of sensed inputs to focus computation on)?

  3. Benefits and formulation of attention mechanisms for continual / lifelong learning

    1. How can continual learning agents optimize for retention of knowledge for tasks they have already learned?

    2. How can the amount of interference between different inputs be controlled via attention?

    3. How does the executive control of attention evolve with learning in humans?

    4. How can we study the development of attentional systems in infancy and childhood to better understand how attention can be learned?

  4. Attention as a tool for interpretation and explanation

    1. How have researchers leveraged attention as a visualization tool?

    2. What are the common approaches when using attention as a tool for interpretability in AI?

    3. What are the major bottlenecks and common pitfalls in leveraging attention as a key tool for explaining the decisions of AI agents?

    4. How can we do better?

  5. The role of attention in human-computer interaction and human-robot interaction

    1. How do we detect aspects of human attention during interactions, from sensing to processing to representations?

    2. What systems benefit from human attention modeling, and how do they use these models?

    3. How can systems influence a user’s attention, and what systems benefit from this capability?

    4. How can a system communicate or simulate its own attention (humanlike or algorithmic) in an interaction, and to what benefit?

    5. How do attention models affect different applications, like collaboration or assistance, in different domains, like autonomous vehicles and driver assistance systems, learning from demonstration, joint attention in collaborative tasks, social interaction, etc.?

    6. How should researchers thinking about attention in different biological and computational fields organize the collection of human gaze data sets, the modeling of gaze behaviors, and the utilization of gaze information in various applications for knowledge transfer and cross-pollination of ideas?

  6. Attention mechanisms in Deep Neural Network (DNN) architectures

    1. How does attention in DNNs such as transformers relate to existing formalisms of attention in cognitive science and psychology?

    2. Do we have a concrete understanding of how, and whether, self-attention in transformers contributes to their vast success in recent models such as GPT-2, GPT-3, and DALL·E?

    3. Can our understanding of attention from other fields inform the progress we have achieved in recent breakthroughs?

SUBMISSION INSTRUCTIONS

We invite you to submit papers (up to 9 pages for long papers and up to 5 pages for short papers, excluding references and appendix) in the NeurIPS 2022 format. All submissions will be managed through OpenReview (submission website). Supplementary material is optional, should be used only for extra videos/code/data/figures, and should be uploaded separately on the submission website.

The review process is double-blind so the submission should be anonymized. Accepted work will be presented as posters during the workshop, and select contributions will be invited to give spotlight talks during the workshop. Each accepted work entering the poster sessions will have an accompanying pre-recorded 5-minute video. Please note that at least one coauthor of each accepted paper will be expected to have a NeurIPS conference registration and participate in one of the poster sessions. 

We will be giving out 6 free registrations to student authors of accepted papers. Preference will be given to underrepresented groups in the field.

Submissions will be evaluated based on novelty, rigor, and relevance to the theme of the workshop. Both empirical and theoretical contributions are welcome. Submissions should not have previously appeared in a journal or conference (including accepted papers to NeurIPS 2022). Submissions must adhere to the NeurIPS Code of Conduct.

The focus of the work should relate to the list of topics specified below. There will be no proceedings for this workshop; however, authors can opt to have their abstracts/papers posted on the workshop website.

We encourage submissions on the following topics from the focus of bridging different perspectives on attention:

  • Relationships between biological and artificial attention

  • Attention for reinforcement learning and decision making

  • Benefits and formulation of attention mechanisms for continual / lifelong learning

  • Attention as a tool for interpretation and explanation

  • The role of attention in human-computer interaction and human-robot interaction

  • Attention mechanisms in Deep Neural Network (DNN) architectures

Please submit your papers via the OpenReview submission website.

IMPORTANT DATES

* Submission deadline: Oct 3, 2022 at 11:59PM (AoE)

* Accept/Reject Notification: Oct 20, 2022

* Camera-ready (final) paper deadline: Nov 25, 2022 at 11:59PM (AoE)

* Workshop: Dec 2, 2022

CONFIRMED SPEAKERS & PANELISTS

Speakers:

Pieter Roelfsema (Netherlands Institute for Neuroscience)

James Whittington (University of Oxford)

Ida Momennejad (Microsoft Research)

Erin Grant (UC Berkeley)

Henny Admoni (Carnegie Mellon University)

Tobias Gerstenberg (Stanford University)

Vidhya Navalpakkam (Google Research)

Shalini De Mello (NVIDIA)

Panelists:

David Ha (Google Brain)

Pieter Roelfsema (Netherlands Institute for Neuroscience)

James Whittington (University of Oxford)

Ida Momennejad (Microsoft Research)

Henny Admoni (Carnegie Mellon University)

Tobias Gerstenberg (Stanford University)

Shalini De Mello (NVIDIA)

Vidhya Navalpakkam (Google Research)

Erin Grant (UC Berkeley)

Ramakrishna Vedantam (Meta AI Research)

Megan deBettencourt (University of Chicago)

Ashish Vaswani (Adept AI)

Cyril Zhang (Microsoft Research)

ORGANIZERS

Akanksha Saran (Microsoft Research, NYC)

Khimya Khetarpal (McGill University, Mila Montreal)

Reuben Aronson (Carnegie Mellon University)

Abhijat Biswas (Carnegie Mellon University)

Ruohan Zhang (Stanford University)

Grace Lindsay (University College London, New York University)

Scott Niekum (University of Texas at Austin, University of Massachusetts)

REGISTRATION

Participants should refer to the NeurIPS 2022 website (https://neurips.cc/Conferences/2022/Dates) for information on how to register.

CONTACT

Please reach out to us at attention-workshop@googlegroups.com  if you have any questions. We look forward to receiving your submissions!

Kind Regards,

Workshop Organizers

All Things Attention: Bridging Different Perspectives on Attention

2nd Edition of Graph Models for Learning and Recognition (GMLR) Track at 38th ACM SAC 2023 in Tallinn, Estonia

 

Call for Papers

 

Graph Models for Learning and Recognition (GMLR) Track

The 38th ACM Symposium on Applied Computing (SAC 2023)

March 27 – April 2, 2023, Tallinn, Estonia

http://phuselab.di.unimi.it/GMLR2023

 

Important Dates

===============

Submission of regular papers:              October 15, 2022 (extended from October 1, 2022)

Notification of acceptance/rejection:      November 19, 2022

Camera-ready copies of accepted papers:    December 6, 2022

SAC Conference:                            March 27 – April 2, 2023

 

 

Motivations and topics

======================

The ACM Symposium on Applied Computing (SAC 2023) has been a primary gathering forum for applied computer scientists, computer engineers, software engineers, and application developers from around the world. SAC 2023 is sponsored by the ACM Special Interest Group on Applied Computing (SIGAPP) and will be held in Tallinn, Estonia. The technical track on Graph Models for Learning and Recognition (GMLR), now in its second edition, is organized within SAC 2023.

Graphs have gained a lot of attention in the pattern recognition community thanks to their ability to encode both topological and semantic information. Despite their invaluable descriptive power, their arbitrarily complex structured nature poses serious challenges when they are involved in learning systems. Challenges include, among others, the non-unique representation of data and heterogeneous attributes (symbolic, numeric, etc.).

In recent years, due to their widespread applications, graph-based learning algorithms have gained much research interest. Encouraged by the success of CNNs, a wide variety of methods have redefined the notion of convolution and related operations on graphs. These new approaches have generally enabled effective training and have in many cases achieved better performance than competitors, though at the cost of higher computation.

Typical applications dealing with graph-based representations include: scene graph generation, point cloud classification, and action recognition in computer vision; text classification and inferring document labels from the inter-relations of documents or words in natural language processing; forecasting traffic speed, volume, or road density in traffic networks; and, in chemistry, studying the graph structure of molecules and compounds.

 

This track intends to focus on all aspects of graph-based representations and models for learning and recognition tasks. GMLR spans, but is not limited to, the following topics:

 

● Graph Neural Networks: theory and applications

● Deep learning on graphs

● Graph or knowledge representational learning

● Graphs in pattern recognition

● Graph databases and linked data in AI

● Benchmarks for GNN

● Dynamic, spatial and temporal graphs

● Graph methods in computer vision

● Human behavior and scene understanding

● Social networks analysis

● Data fusion methods in GNN

● Efficient and parallel computation for graph learning algorithms

● Reasoning over knowledge-graphs

● Interactivity, explainability and trust in graph-based learning

● Probabilistic graphical models

● Biomedical data analytics on graphs

 

Authors of selected top papers of this track will be invited to publish an extended version in a Special Issue of a high-impact journal (to be announced later).

 

 

Track Chairs

============

Donatello Conte (University of Tours)

Alessandro D'Amelio (University of Milan)

Giuliano Grossi (University of Milan)

Raffaella Lanzarotti (University of Milan)

Jianyi Lin (Università Cattolica del Sacro Cuore)

 

 

Scientific Program Committee

============================

Annalisa Barla (University of Genoa)

Davide Boscaini (Bruno Kessler Foundation)

Vittorio Cuculo (University of Milan)

Samuel Feng (Sorbonne University)

Gabriele Gianini (University of Milan)

Alessio Micheli (University of Pisa)

Carlos Oliver (ETH Zürich)

Maurice Pagnucco (University of New South Wales)

Ryan A. Rossi (Adobe Research)

Jean-Yves Ramel (University of Tours)

(others to be confirmed)

 

 

 

Submission Guidelines

=====================

Authors are invited to submit original and unpublished papers of research and applications for this track. The author(s) name(s) and address(es) must not appear in the body of the paper, and self-references should be in the third person; this is to facilitate double-blind review. Please visit the website for more information about submission.

 

SAC No-Show Policy

==================

Paper registration is required, allowing the inclusion of the paper/poster in the conference proceedings. An author or a proxy attending SAC MUST present the paper. This is a requirement for the paper/poster to be included in the ACM digital library. No-show of registered papers and posters will result in excluding them from the ACM digital library.

Webinar by Dr. Hany Farid on Combating Deep Fakes

The IEEE Biometrics Council invites participants to the upcoming (free)
webinar by Prof. Hany Farid on “Combating Deep Fakes”. Details on the
webinar are given below:

Title: Combating Deep Fakes
Speaker: Prof. Hany Farid, University of California, Berkeley, USA
When: 12 October 2022, at 10 am PT (1 pm ET, 7 pm CEST)
Where: Online (Zoom)
Registration: (free, but required):
https://us06web.zoom.us/webinar/register/WN_516wKq4yR_yp1z9Z6g5Bgw

*** Talk Summary ***
In the early days of the Russian invasion of Ukraine, President
Zelenskyy warned the world that Russia's digital disinformation
machinery would create a deep fake of him admitting defeat. By mid-March
of 2022, a deep fake of Zelenskyy appeared with just this message. This
video was eventually debunked, but not before it spread across social
media and appeared briefly on Ukrainian television. Three months later,
the mayors of Berlin, Madrid, and Vienna collectively spoke for nearly
30 minutes with a deep-fake version of Kiev Mayor Klitschko, before
realizing they were being duped. In addition to adding jet fuel to
disinformation campaigns, this new breed of synthetic media also makes
it easier to deny reality — the so-called liar's dividend — as seen by
the recent baseless claim that video addresses by President Biden are
deep fakes deployed to conceal his death. I will discuss how deep fakes
are made, how they are being weaponized, and how they can be detected.

*** About the Speaker ***
Dr. Hany Farid is a Professor at the University of California, Berkeley
with a joint appointment in Electrical Engineering & Computer Sciences
and the School of Information. His research focuses on digital
forensics, forensic science, misinformation, image analysis, and human
perception. He received his undergraduate degree in Computer Science and
Applied Mathematics from the University of Rochester in 1989, and his
Ph.D. in Computer Science from the University of Pennsylvania in 1997.
Following a two-year post-doctoral fellowship in Brain and Cognitive
Sciences at MIT, he joined the faculty at Dartmouth College in 1999
where he remained until 2019. He is the recipient of an Alfred P. Sloan
Fellowship and a John Simon Guggenheim Fellowship, and is a Fellow of
the National Academy of Inventors.

For more information, visit:
https://ieee-biometrics.org/index.php/activities/webinars

DeepLearn 2023 Winter: early registration October 24

8th INTERNATIONAL SCHOOL ON DEEP LEARNING

DeepLearn 2023 Winter

Bournemouth, UK

January 16-20, 2023

https://irdta.eu/deeplearn/2023wi/

***********

Co-organized by:

Department of Computing and Informatics
Bournemouth University

Institute for Research Development, Training and Advice – IRDTA
Brussels/London

******************************************************************

Early registration: October 24, 2022

******************************************************************

SCOPE:

DeepLearn 2023 Winter will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Guimarães, Las Palmas de Gran Canaria and Luleå.

Deep learning is a branch of artificial intelligence covering a spectrum of current exciting research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, health informatics, medical image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, bioinformatics, etc. Renowned academics and industry pioneers will lecture and share their views with the audience.

Most deep learning subareas will be covered, and the main challenges identified, through 24 four-and-a-half-hour courses and 3 keynote lectures tackling the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to fully participate live remotely.
An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles.

ADDRESSED TO:

Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal prerequisites for attendance in terms of academic degrees, so people at earlier or later stages of their careers are welcome as well. Since there will be a variety of levels, specific background knowledge may be assumed for some of the courses. Overall, DeepLearn 2023 Winter is addressed to students, researchers and practitioners who want to keep up to date with recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.

VENUE:

DeepLearn 2023 Winter will take place in Bournemouth, a coastal resort town on the south coast of England. The venue will be:

Talbot Campus
Bournemouth University
https://www.bournemouth.ac.uk/about/contact-us/directions-maps/directions-our-talbot-campus

STRUCTURE:

3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another.

Full live online participation will be possible. However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event.

KEYNOTE SPEAKERS:

Yi Ma (University of California, Berkeley), CTRL: Closed-Loop Data Transcription via Rate Reduction

Daphna Weinshall (Hebrew University of Jerusalem), Curriculum Learning in Deep Networks

Eric P. Xing (Carnegie Mellon University), It Is Time for Deep Learning to Understand Its Expense Bills

PROFESSORS AND COURSES:

Mohammed Bennamoun (University of Western Australia), [intermediate/advanced] Deep Learning for 3D Vision

Matias Carrasco Kind (University of Illinois, Urbana-Champaign), [intermediate] Anomaly Detection

Nitesh Chawla (University of Notre Dame), [introductory/intermediate] Graph Representation Learning

Seungjin Choi (Intellicode), [introductory/intermediate] Bayesian Optimization over Continuous, Discrete, or Hybrid Spaces

Sumit Chopra (New York University), [intermediate] Deep Learning in Healthcare

Luc De Raedt (KU Leuven), [introductory/intermediate] From Statistical Relational to Neuro-Symbolic Artificial Intelligence

Marco Duarte (University of Massachusetts, Amherst), [introductory/intermediate] Explainable Machine Learning

João Gama (University of Porto), [introductory] Learning from Data Streams: Challenges, Issues, and Opportunities

Claus Horn (Zurich University of Applied Sciences), [intermediate] Deep Learning for Biotechnology

Zhiting Hu (University of California, San Diego) & Eric P. Xing (Carnegie Mellon University), [intermediate/advanced] A “Standard Model” for Machine Learning with All Experiences

Nathalie Japkowicz (American University), [intermediate/advanced] Learning from Class Imbalances

Gregor Kasieczka (University of Hamburg), [introductory/intermediate] Deep Learning for Fundamental Physics: Rare Signals, Unsupervised Anomaly Detection, and Generative Models

Karen Livescu (Toyota Technological Institute at Chicago), [intermediate/advanced] Speech Processing: Automatic Speech Recognition and beyond

David McAllester (Toyota Technological Institute at Chicago), [intermediate/advanced] Information Theory for Deep Learning

Abdelrahman Mohamed (Meta), [intermediate/advanced] Speech Representation Learning for Recognition and Generation

Dhabaleswar K. Panda (Ohio State University), [intermediate] Exploiting High-performance Computing for Deep Learning: Why and How?

Fabio Roli (University of Cagliari), [introductory/intermediate] Adversarial Machine Learning

Bracha Shapira (Ben-Gurion University of the Negev), [introductory/intermediate] Recommender Systems

Richa Singh (Indian Institute of Technology Jodhpur), [introductory/intermediate] Trusted AI

Kunal Talwar (Apple), [introductory/intermediate] Foundations of Differentially Private Learning

Tinne Tuytelaars (KU Leuven), [introductory/intermediate] Continual Learning in Deep Neural Networks

Lyle Ungar (University of Pennsylvania), [intermediate] Natural Language Processing using Deep Learning

Bram van Ginneken (Radboud University Medical Center), [introductory/intermediate] Deep Learning for Medical Image Analysis

Yu-Dong Zhang (University of Leicester), [introductory/intermediate] Convolutional Neural Networks and Their Applications to COVID-19 Diagnosis

OPEN SESSION:

An open session will collect 5-minute voluntary presentations of work in progress by participants, who should submit a half-page abstract containing the title, authors, and a summary of the research to david@irdta.eu by January 8, 2023.

INDUSTRIAL SESSION:

A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david@irdta.eu by January 8, 2023.

EMPLOYER SESSION:

Organizations searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles looked for to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david@irdta.eu by January 8, 2023.

ORGANIZING COMMITTEE:

Rashid Bakirov (Bournemouth, local co-chair)
Marcin Budka (Bournemouth)
Vegard Engen (Bournemouth)
Nan Jiang (Bournemouth, local co-chair)
Carlos Martín-Vide (Tarragona, program chair)
Sara Morales (Brussels)
David Silva (London, organization chair)

REGISTRATION:

Registration should be done at

https://irdta.eu/deeplearn/2023wi/registration/

The selection of 8 courses requested in the registration template is only tentative and non-binding. For organizational purposes, it is helpful to have an estimate of the demand for each course. During the event, participants will be free to attend the courses they wish.

Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will be closed, and the online registration tool disabled, once the capacity of the venue is exhausted. It is highly recommended to register prior to the event.

FEES:

Fees comprise access to all courses and lunches. There are several early registration deadlines, and fees depend on the registration deadline. The fees for on-site and online participation are the same.

ACCOMMODATION:

Accommodation suggestions are available at

https://irdta.eu/deeplearn/2023wi/accommodation/

CERTIFICATE:

A certificate of successful participation in the event will be delivered, indicating the number of lecture hours attended.

QUESTIONS AND FURTHER INFORMATION:

david@irdta.eu

ACKNOWLEDGMENTS:

Bournemouth University

Rovira i Virgili University

Institute for Research Development, Training and Advice – IRDTA, Brussels/London

Design by 2b Consult