===============================================================
THE FIRST INTERNATIONAL WORKSHOP ON SMALL PRECISION
FOR MACHINE LEARNING (SPINAL 2023)
In Conjunction with PAKDD 2023
25-28 May, Osaka, Japan
http://spinal2023.josueonline.com
===============================================================

CALL FOR PAPERS:
----------------

The surge in demand for more powerful AI and machine learning applications has fueled the race to build larger models, more powerful algorithms, and heavier digital infrastructure to deliver breakthroughs in machine learning (ML) systems. At the heart of this technological transformation lie dedicated processors for ML applications, which accelerate the computation of machine learning models for both training and inference workloads and help cope with the increasing complexity of ML applications. As the model and dataset sizes of deep learning applications continue to grow, the scalability of machine learning systems becomes ever more critical in order to address the growing compute, storage, and connectivity requirements. Indeed, efficient training of large models is synonymous with the efficient use of the available compute, power, memory, and networking resources to overcome the physical limitations of training such models over a large distributed infrastructure.

The choice of computer arithmetic is an important part of any accelerator design, as it has a direct impact on hardware complexity and compute efficiency, as well as on the utilization of the available memory and communication bandwidth. In addition, efficient numerical representation is indispensable, as it offers the prospect of improved power efficiency through higher compute throughput and better utilization of the communication bandwidth.
In addition to model training and tuning, several challenges arise during the deployment of large models as a result of increased computational complexity and tight latency requirements. To overcome these challenges, numerical quantisation for inference is a necessity in order to make the most of the limited compute and memory available in the low-powered devices or infrastructure used to serve these workloads. With the slowdown of Moore's law, and the reality that current silicon technology can no longer double processing speed with every new silicon generation at constant power, it is paramount that today's and tomorrow's ML processors and accelerators make more efficient use of the available power. Demand is expected to grow and attract further interest due to the current physical limitations of cloud, edge, and mobile solutions.

This workshop intends to gather researchers and practitioners in the area of low-precision arithmetic for machine learning, to offer a platform for the exchange of ideas, and to discuss progress in the area as well as emerging future challenges.

Areas of interest include, but are not limited to:
- Low precision training
- Low precision inference
- On-chip ring communication
- Power management
- Benchmarks and frameworks for low precision ML
- NLP optimizations
- Computer vision optimizations

SPECIAL ISSUE
--------------

The workshop organizers are exploring special issue opportunities with a few journals. More information will be available soon.
IMPORTANT DATES
-----------------
Submission deadline:     March 7, 2023
Acceptance notification: April 7, 2023
Camera-ready submission: April 25, 2023

WORKSHOP ORGANIZERS:
---------------------
- Badreddine Noune, Technology Innovation Institute, Abu Dhabi, UAE
- Hakim Hacid, Technology Innovation Institute, Abu Dhabi, UAE
- Imran Junejo, AMD, Canada

SUBMISSION WEBSITE:
--------------------
Please submit your contributions via the following link:
https://easychair.org/conferences/?conf=spinal2023