Call for Papers ScaDL 2023 Workshop

ScaDL 2023: Scalable Deep Learning over Parallel and Distributed
Infrastructure – An IPDPS 2023 Workshop

https://2023.scadl.org

Scope of the Workshop:
Recently, Deep Learning (DL) has received tremendous attention in the research
community because of the impressive results obtained for a large number of
machine learning problems. The success of state-of-the-art deep learning
systems relies on training deep neural networks over a massive amount of
training data, which typically requires large-scale distributed computing
infrastructure. To run these jobs in a scalable and efficient manner, on cloud
infrastructure or dedicated HPC systems, several interesting research topics
specific to DL have emerged. The sheer size and complexity of deep learning
models trained over large amounts of data make it hard for them to converge in
a reasonable amount of time, demanding advances along multiple research
directions such as model/data parallelism, model/data compression, distributed
optimization algorithms for DL convergence, synchronization strategies,
efficient communication, and specific hardware acceleration.

ScaDL seeks to advance the following research directions:
– Asynchronous and Communication-Efficient SGD: Stochastic gradient descent is
at the core of large-scale machine learning. Parallelizing SGD gradient
computation across multiple nodes increases the data processed per iteration,
but exposes SGD to communication and synchronization delays and unpredictable
node failures in the system. Thus, there is a critical need to design robust
and scalable distributed SGD methods that achieve fast error convergence in
spite of such system variabilities (a minimal data-parallel sketch follows
this list).

– High-performance computing aspects: Deep learning is highly
compute-intensive. Algorithms for kernel computations on commonly used
accelerators (e.g., GPUs), efficient techniques for communicating gradients,
and fast loading of data from storage are critical for training performance.

– Model and Gradient Compression Techniques: Techniques such as reducing the
number of weights and the size of weight tensors help reduce compute
complexity, while lower-bit representations (quantization) and sparsification
allow more efficient use of memory and communication bandwidth (see the top-k
sparsification sketch after this list).

– Distributed Trustworthy AI: New techniques are needed to meet the goal of
global trustworthiness (e.g., fairness and adversarial robustness) efficiently
in a distributed DL setting.

– Emerging AI Hardware Accelerators: With the proliferation of new hardware
accelerators for AI, such as in-memory computing (analog AI) and neuromorphic
computing, novel methods and algorithms need to be introduced to adapt to the
properties of the underlying hardware (for example, the non-idealities of
phase-change memory (PCM) and cycle-to-cycle statistical variations).

– The intersection of Distributed DL and Neural Architecture Search (NAS): NAS
is increasingly being used to automate the synthesis of neural networks.
However, given the huge computational demands of NAS, distributed DL is
critical to make NAS computationally tractable (e.g., differentiable
distributed NAS).
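To make the first two directions concrete, below is a minimal, self-contained
sketch of synchronous data-parallel SGD in NumPy: each worker computes a
gradient on its own data shard, the gradients are averaged (the role an
all-reduce plays on a real cluster), and every worker applies the same update.
This is a toy simulation on least-squares regression with illustrative names
(e.g., num_workers), not the method of any particular system.

import numpy as np

rng = np.random.default_rng(0)
d, n, num_workers, lr = 10, 4000, 4, 0.1

# Synthetic least-squares problem: y = X @ w_true + noise.
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

shards = np.array_split(np.arange(n), num_workers)  # one data shard per worker
w = np.zeros(d)                                     # replicated model parameters

for step in range(200):
    # Each worker computes the gradient of 0.5 * ||X_s w - y_s||^2 / |s|
    # on its own shard; no worker ever sees the full dataset.
    grads = [X[s].T @ (X[s] @ w - y[s]) / len(s) for s in shards]
    # "All-reduce": average the per-worker gradients so every worker
    # applies the identical update and the replicas stay in sync.
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))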
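The compression direction can be sketched just as briefly. The following toy
implements top-k gradient sparsification with local error feedback: only the k
largest-magnitude gradient entries are transmitted, and the dropped mass is
accumulated locally and re-added at the next step. The function sparsify_top_k
is hypothetical and for illustration only, not part of any library.

import numpy as np

def sparsify_top_k(grad, residual, k):
    """Return a k-sparse gradient to transmit and the updated local residual."""
    corrected = grad + residual               # error feedback: re-add dropped mass
    idx = np.argsort(np.abs(corrected))[-k:]  # indices of the k largest magnitudes
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]              # only these k values are communicated
    return sparse, corrected - sparse         # everything else stays in the residual

g = np.random.default_rng(1).normal(size=100)
residual = np.zeros_like(g)
sparse_g, residual = sparsify_top_k(g, residual, k=10)
print("nonzeros sent:", np.count_nonzero(sparse_g))  # 10 of 100 entries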

This intersection of distributed/parallel computing and deep learning is
becoming critical and demands specific attention to the above topics, which
some of the broader forums may not be able to provide. The aim of this
workshop is to foster collaboration among researchers from the
distributed/parallel computing and deep learning communities and to share
relevant topics and results of current approaches lying at the intersection of
these areas.

Areas of Interest
In this workshop, we solicit research papers on distributed deep learning
aiming to achieve efficiency and scalability for deep learning jobs over
distributed and parallel systems. Papers focusing on algorithms as well as
systems are welcome. We invite authors to submit papers on topics including
but not limited to:

– Deep learning on cloud platforms, HPC systems, and edge devices
– Model-parallel and data-parallel techniques
– Asynchronous SGD for training DNNs
– Communication-efficient training of DNNs
– Scalable and distributed graph neural networks; sampling techniques for
graph neural networks
– Federated deep learning, both horizontal and vertical, and its challenges
– Model/data/gradient compression
– Learning in resource-constrained environments
– Coding techniques for straggler mitigation
– Elasticity for deep learning jobs/spot market enablement
– Hyper-parameter tuning for deep learning jobs
– Hardware acceleration for deep learning, including digital and analog
accelerators
– Scalability of deep learning jobs on large clusters
– Deep learning on heterogeneous infrastructure
– Efficient and scalable inference
– Data storage/access in shared networks for deep learning
– Communication-efficient distributed fair and adversarially robust learning
– Distributed learning techniques applied to speed up neural architecture
search

Workshop Format:
Due to the continuing impact of COVID-19, ScaDL 2023 will also adopt relevant
IPDPS 2023 policies on virtual participation and presentation. Consequently,
the organizers are currently planning a hybrid (in-person and virtual) event.

Submission Link:
Submissions will be managed through Linklings. The submission link is
available at:
https://2023.scadl.org/call-for-papers

Key Dates
Paper Submission: January 15, 2023
Acceptance Notification: February 17, 2023
Camera-ready papers due: February 28, 2023
Workshop Date: TBA

Author Instructions
ScaDL 2023 accepts submissions in two categories:
– Regular papers: 8-10 pages
– Short papers/Work in progress: 4 pages
The aforementioned lengths include all technical content, references and
appendices.
We encourage submissions of original research, work in progress, case
studies, vision papers, and industrial experience papers.
Papers should be formatted using IEEE conference style, including figures,
tables, and references. The IEEE conference style templates for MS Word and
LaTeX provided by IEEE eXpress Conference Publishing are available for
download. See the latest versions at
https://www.ieee.org/conferences/publishing/templates.html

General Chairs
Kaoutar El Maghraoui, IBM Research AI, USA
Daniele Lezzi, Barcelona Supercomputing Center, Spain

Program Committee Chairs
Misbah Mubarak, NVIDIA, USA
Alex Gittens, Rensselaer Polytechnic Institute (RPI), USA

Publicity Chairs
Federica Filippini, Politecnico di Milano, Italy
Hadjer Benmeziane, Université Polytechnique des Hauts-de-France, France

Web Chair
Praveen Venkateswaran, IBM Research AI, USA

Steering Committee
Parijat Dube, IBM Research AI, USA
Vinod Muthusamy, IBM Research AI, USA
Ashish Verma, IBM Research AI, USA
Jayaram K. R., IBM Research AI, USA
Yogish Sabharwal, IBM Research AI, India
Danilo Ardagna, Politecnico di Milano, Italy

ACM TALLIP – Special Issue on Challenges and Trending Solutions for Cognitive Analytics of Social Multi-modal Text in Asian Indigenous Languages

ACM Transactions on Asian and Low-resource Language Information Processing

Guest Editors:

With the proliferation of social networks (Twitter, Tumblr, Google+, Facebook, Instagram, Snapchat, YouTube, etc.), users can post and share all kinds of multi-modal text in social settings over the Internet without much knowledge of the Web's client-server architecture or network topology. Multi-modal text combines two or more semiotic systems, drawing on visual, linguistic, audio, gestural, and spatial signs and symbols to create meaning. An estimated 90% of social multi-modal data is unstructured, making it crucial to tap and analyze this information with contemporary tools. This offers novel opportunities and challenges for leveraging such high-diversity multi-modal data. At the same time, resource-poor indigenous languages are very challenging for NLP tasks and applications for multiple reasons.

The Asian social networking market, in particular, dominates the world landscape with the highest consumer penetration rate. Businesses and investors often look for winning strategies to attract consumers and increase revenues from sales, advertisements, and other services offered on social media platforms. Current studies on social media are based on English-language sociolinguistic cues; studies in local and regional Asian indigenous languages need further exploration. Recently, cognitive analytics as a technology-based solution has attracted considerable attention from both researchers and practitioners. It is a novel approach to information discovery and decision making that uses multiple intelligent technologies, such as machine learning, deep learning, artificial intelligence, natural language processing, and image recognition, to understand data and then generate insights. It is touted as the key to unlocking big data in the social setting for practical data-driven decision making.

The special issue aims to stimulate discussion on the design, use, and evaluation of self-correcting systems and human cognition for continuous learning as the key knowledge-discovery drivers within the socially connected ecosystem, especially for NLP tasks pertaining to Asian indigenous languages. We encourage submission of articles describing cognitive models for resource-poor social media analytics that leverage deeper insights from the vast amount of generated data, delivering near real-time intelligence. We also welcome theoretical work and review articles on cognitive social media analytics frameworks.

Topics

The list of topics includes, but is not limited to:

  • Cognitive modelling for social media analytics using Asian Indigenous Languages
  • Trend and network analysis in Asian networks
  • Sentiment analysis in Asian Indigenous Languages
  • Monitoring emotion/rumour/bullying across multi-modalities of social data in Asian Indigenous Languages
  • Speech recognition and language generation using Asian Indigenous Languages
  • Cognitive robots, chat-bots and agents using Asian Indigenous Languages
  • Multi-modal interfaces in cognitive social media systems using Asian Indigenous Languages
  • Conversational AI for Asian Indigenous Languages

 

Important Dates

  • Submission Open: 20th Feb 2023
  • Submission deadline: 30th Aug 2023
  • First-round review decisions: 20th Oct 2023
  • Deadline for revision submissions: 30th Nov 2023
  • Notification of final decisions: 30th Dec 2023
  • Tentative publication: As per journal policy

 

Submission Information

Please refer to https://dl.acm.org/journal/tallip/author-guidelines and select “Automated Knowledge Extraction and Natural Language Processing for Lexicography of Low Resource Languages” on the TALLIP submission site, https://mc.manuscriptcentral.com/tallip

 

For questions and further information, please contact Dr. Deepak (dkj@ieee.org).

 

Call for Papers – ICCCS 2023, March 03-04, 2023, India

Dear Researchers,
Greetings of the day! 

We are pleased to announce that the 8th International Conference on Computing, Communication, and Security (ICCCS 2023) (www.icccs-conf.in/2023), an annual international event, will be held in Punjab, India, during March 03-04, 2023. We request that you submit original, unpublished papers that are not concurrently under consideration for publication elsewhere. All accepted and presented papers will be submitted for inclusion in Springer's Communications in Computer and Information Science (CCIS) series.
Please visit www.icccs-conf.in/2023/cfp.php for details.

IMPORTANT DATES:
  •      Paper Submission: December 31, 2022 
  •      Notification of Acceptance: January 10, 2023 
  •      Camera-Ready Paper Submission: January 31, 2023 
  •      Date of the Conference: March 03-04, 2023

Papers are invited on all topics related to the following tracks:
  • TRACK 1: Computing, Communication & Networking
  • TRACK 2: Big Data Analytics, Data Mining, Machine Learning & AI
  • TRACK 3: IoT & Smart City
  • TRACK 4: VLSI Design, Antenna, Microwave Theory, and Applications
  • TRACK 5: Privacy, Trust & Information Security
  • TRACK 6: e-Learning, e-Commerce, e-Society & e-Governance
  • TRACK 7: Blockchain and Distributed Ledger
  • TRACK 8: Underwater Networks and Systems
  • TRACK 9: Augmented Reality, Virtual Reality, and Simulations
  • TRACK 10: Human Behaviour Understanding


Contact:
Phone: +91-8193098151

DEADLINE EXTENDED (7 December 2022): 5th AccML Workshop at HiPEAC 2023

==================================================================
5th Workshop on Accelerated Machine Learning (AccML)

Co-located with the HiPEAC 2023 Conference
(https://www.hipeac.net/2023/toulouse/)

January 18, 2023
Toulouse, France
==================================================================

Call for Participation: “All Things Attention: Bridging Different Perspectives on Attention”

On behalf of the co-organizers, we would like to invite you to attend (in person or virtually) our NeurIPS workshop on “All Things Attention: Bridging Different Perspectives on Attention”. The details of the workshop follow:

The Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS)

Dec 2, 2022

https://attention-learning-workshop.github.io/

When: Dec 2, 2022 9AM – 6PM (local time, UTC-06:00) 

Where: Room 399 (in-person) or (virtually) https://neurips.cc/virtual/2022/workshop/49996 (requires NeurIPS registration)

WORKSHOP DETAILS

The All Things Attention workshop aims to foster connections across disparate academic communities that conceptualize “Attention”, such as Neuroscience, Psychology, Machine Learning, and Human-Computer Interaction. Workshop topics of interest include (but are not limited to):

  1. Relationships between biological and artificial attention

    1. What are the connections between different forms of attention in the human brain and present deep neural network architectures? 

    2. Can the anatomy of human attention models provide useful insights to researchers designing architectures for artificial systems? 

    3. Given the same task and learning objective, do machines learn attention mechanisms that are different from humans? 

  2. Attention for reinforcement learning and decision making

    1. How have reinforcement learning agents leveraged attention in decision making?

    2. Do decision-making agents today have implicit or explicit formalisms of attention?

    3. How can AI agents build notions of attention without explicitly baked-in notions of attention?

    4. Can attention significantly enable AI agents to scale, e.g., through gains in sample efficiency and generalization?

  3. Benefits and formulation of attention mechanisms for continual / lifelong learning

    1. How can continual learning agents optimize for retention of knowledge for tasks that they have already learned?

    2. How can the amount of interference between different inputs be controlled via attention? 

    3. How does the executive control of attention evolve with learning in humans? 

    4. How can we study the development of attentional systems in infancy and childhood to better understand how attention can be learned?

  4. Attention as a tool for interpretation and explanation

    1. How have researchers leveraged attention as a visualization tool?

    2. What are the common approaches when using attention as a tool for interpretability in AI? 

    3. What are the major bottlenecks and common pitfalls in leveraging attention as a key tool for explaining the decisions of AI agents?

    4. How can we do better?

  5. The role of attention in human-computer interaction and human-robot interaction

    1. How do we detect aspects of human attention during interactions, from sensing to processing to representations?   

    2. What systems benefit from human attention modeling, and how do they use these models?

    3. How can systems influence a user’s attention, and what systems benefit from this capability?

    4. How can a system communicate or simulate its own attention (humanlike or algorithmic) in an interaction, and to what benefit?

    5. How do attention models affect different applications, like collaboration or assistance, in different domains, like autonomous vehicles and driver assistance systems, learning from demonstration, joint attention in collaborative tasks, social interaction, etc.?

    6. How should researchers thinking about attention in different biological and computational fields organize the collection of human gaze datasets, the modeling of gaze behaviors, and the use of gaze information in various applications for knowledge transfer and cross-pollination of ideas?

  6. Attention mechanisms in Deep Neural Network (DNN) architectures

    1. How does attention in DNNs such as transformers relate to existing formalisms of attention in cognitive science/psychology?

    2. Do we have a concrete understanding of how, and whether, self-attention in transformers contributes to the vast success of recent models such as GPT-2, GPT-3, and DALL-E?

    3. Can our understanding of attention from other fields inform the progress we have achieved in recent breakthroughs?

CONFIRMED SPEAKERS & PANELISTS

Speakers:

Pieter Roelfsema (Netherlands Institute for Neuroscience)

James Whittington (University of Oxford)

Ida Momennejad (Microsoft Research)

Erin Grant (UC Berkeley)

Henny Admoni (Carnegie Mellon University)

Tobias Gerstenberg (Stanford University)

Vidhya Navalpakkam (Google Research)

Shalini De Mello (NVIDIA)

Panelists:

David Ha (Google Brain)

Pieter Roelfsema (Netherlands Institute for Neuroscience)

James Whittington (University of Oxford)

Ida Momennejad (Microsoft Research)

Henny Admoni (Carnegie Mellon University)

Tobias Gerstenberg (Stanford University)

Shalini De Mello (NVIDIA)

Vidhya Navalpakkam (Google Research)

Erin Grant (UC Berkeley)

Ramakrishna Vedantam (Meta AI Research)

Megan deBettencourt (University of Chicago)

Cyril Zhang (Microsoft Research)

ORGANIZERS

Akanksha Saran (Microsoft Research, NYC)

Khimya Khetarpal (McGill University, Mila Montreal)

Reuben Aronson (Carnegie Mellon University)

Abhijat Biswas (Carnegie Mellon University)

Ruohan Zhang (Stanford University)

Grace Lindsay (University College London, New York University)

Scott Niekum (University of Texas at Austin, University of Massachusetts)

CONTACT

Please reach out to us at attention-workshop@googlegroups.com  if you have any questions. We look forward to receiving your submissions!

Kind Regards,

Workshop Organizers

All Things Attention: Bridging Different Perspectives on Attention
