ICLR 2023 Workshop on Domain Generalization

 

ICLR 2023 Workshop: What do we need for successful domain generalization?

Website: https://domaingen.github.io/

The real challenge for any machine learning system is to be reliable and robust even under conditions that differ from those seen at training time. Existing general-purpose approaches to domain generalization (DG), a problem setting that challenges a model to generalize well to data outside the distribution sampled at training time, have failed to consistently outperform standard empirical risk minimization baselines. In this workshop, we aim to work towards answering a single question: what do we need for successful domain generalization? We conjecture that additional information of some form is required for general-purpose learning methods to succeed in the DG setting. The purpose of this workshop is to identify possible sources of such information and to demonstrate how these extra sources of data can be leveraged to construct models that are robust to distribution shift. Specific topics of interest include, but are not limited to:

* Leveraging domain-level meta-data
* Exploiting multiple modalities to achieve robustness to distribution shift
* Frameworks for specifying known invariances/domain knowledge
* Causal modeling and how it can be robust to distribution shift
* Empirical analysis of existing domain generalization methods and their underlying assumptions
* Theoretical investigations into the domain generalization problem and potential solutions

Submissions are accepted via OpenReview: https://openreview.net/group?id=ICLR.cc/2023/Workshop/DG

Submission deadline: February 3, 2023
Author notifications: March 3, 2023
Meeting: May 5, 2023

SIGCOMM 2023 Call for Papers

 

=====================================================
ACM SIGCOMM 2023 is calling for submissions
https://conferences.sigcomm.org/sigcomm/2023/cfp.html
=====================================================

ACM SIGCOMM CfP *

The ACM SIGCOMM 2023 conference seeks papers describing significant research contributions or significant deployment experience in communication networks and networked systems. SIGCOMM takes a broad view of networking, and welcomes submissions on these topics, among others:

– All types of computer networks, including mobile, wide-area, data center, embedded, home, and enterprise networks.
– All types of wired and wireless technologies, including optics, radio, acoustic, and visible light-based communication.
– All aspects of networks and networked systems, such as network architecture, packet-processing hardware and software, virtualization, mobility, resource management, performance, energy consumption, topology, robustness, security, diagnosis, verification, privacy, economics, evolution, and interactions with applications.
– All parts of the network life cycle, including planning, designing, building, operating, troubleshooting, and migrations.
– All approaches and techniques, including theory, analysis, experiments, and machine learning.

SIGCOMM 2023 will accept submissions in three tracks: research, experience, and panel. Panel submissions are new this year.

Strong research track submissions will significantly advance the state of the art in networking by, for instance, proposing and developing novel ideas or by rigorously evaluating or re-evaluating existing ideas. Strong experience track submissions will present key insights and takeaways found in the course of designing and executing deployments of existing networking techniques, especially in settings that most in the community cannot duplicate (for instance, for reasons of scale). Strong panel submissions will propose a topic and panel of speakers whose ideas and interactions will engage conference attendees. All submissions should discuss the limitations of their work. Survey and tutorial papers are out of scope and will not be reviewed.

Research submissions must be anonymous (not revealing author names). Experience submissions must also be anonymous, but due to their nature, may reveal the name of the deploying organization or deployed system. The authorship of a panel submission may be anonymous, but the proposed panelists must be named in the body of the submission. While no author names can appear in a paper submitted for review, all authors must be listed in HotCRP before the submission deadline so reviewer conflicts are handled properly.

At paper registration time, authors must explicitly indicate in the submission form if their paper is to be considered for the research, experience, or panel track. Each submission will only be considered for the one track identified at submission time.

Submissions *

Submission site: https://sigcomm2023.hotcrp.com

Submissions should be in two-column, 10-point format, and can be up to 12 pages in length with as many additional pages as necessary for references and optional appendices (up to 5 pages for panel submissions).

Submissions and final papers may include appendices (following references, not counting against the 12 pages). Reviewers are not required to read appendices or consider them in their review. Authors should thus ensure that the core paper is complete and self-contained. For example, if the appendix provides details of a proof or experiment, the body should summarize the key result. Appendices may also include non-traditional material, such as videos, datasets, and code, all appropriately anonymized.

The review process may involve multiple rounds of review. Papers that are not selected to proceed may receive early notification, including reviews.

Accepted papers may be shepherded by a member of the program committee to ensure reviewer feedback is appropriately addressed. The shepherd will also review appendices and must approve their necessity.

SIGCOMM 2023 plans to be an in-person event, and the authors of every accepted paper are expected to arrange for an in-person attendee to present the paper and answer questions.

For accepted papers, the official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work. (For those rare conferences whose proceedings are published in the ACM Digital Library after the conference is over, the official publication date remains the first day of the conference.)

Important Dates *

Abstract registration deadline: Wednesday February 8, 2023 23:59 UTC
Paper submission deadline: Wednesday February 15, 2023 23:59 UTC
Paper acceptance notification: Saturday May 20, 2023
Conference: September 10 – 14, 2023

TPC Chairs *

If you have any questions, do not hesitate to contact our TPC Chairs:
<dmaltz@microsoft.com>
<kohler@seas.harvard.edu>

 

 

Aaron

IEEE WoWMoM 2023, June 12-15, 2023, Boston, Massachusetts, EXTENDED DEADLINE: January 19, 2023

Conferences: CGI 2023 Call for Papers

CALL FOR PAPERS: CGI 2023

COMPUTER GRAPHICS INTERNATIONAL, CGI 2023, Shanghai, Aug. 28- Sept. 01, 2023

http://www.cgs-network.org/cgi23/

CGI is one of the oldest annual international conferences on Computer Graphics in the world. Researchers are invited to share their experiences and novel achievements in various fields of Computer Graphics and Virtual Reality. Recent CGI conferences have been held in Sydney, Australia (2014); Strasbourg, France (2015); Heraklion, Greece (2016); Yokohama, Japan (2017); Bintan, Indonesia (2018); and Calgary, Canada (2019). CGI was held virtually from 2020 to 2022 due to the COVID-19 pandemic.

This year, CGI 2023 is organized by Shanghai Jiao Tong University and the University of Sydney, and supported by the Computer Graphics Society (CGS). CGI 2023 will (hopefully) be held as a hybrid event, allowing both onsite and online participation, in Shanghai. The Visual Computer is the official journal of the Computer Graphics Society.

You are invited to submit your full paper to CGI 2023. As in previous years, CGI 2023 papers can be submitted either by March 10 for possible publication in The Visual Computer journal, or by June 12 for possible publication in the CAVW journal, the VRIH journal, or the CGI conference proceedings (LNCS, Springer).

For paper submissions to The Visual Computer journal, you will be able to edit your submission up to the paper submission deadline (GMT 23:59, 10 March 2023).

The main topics of the CGI 2023 conference include (but are not limited to):

  *  Rendering Techniques
  *  Metaverse (VR/MR/XR)
  *  Physically Based Modeling
  *  Machine Learning for Computer Graphics
  *  Data Compression for Graphics
  *  Image Based Rendering and Modeling
  *  Computer Animation
  *  Shape Analysis and Image Retrieval
  *  Digital Cultural Heritage
  *  Image Processing and Analysis
  *  Global Illumination
  *  Digital Humans
  *  Stylized Rendering
  *  Geometry Processing and Analysis
  *  Shape and Surface Modeling
  *  Computer Vision for Computer Graphics
  *  Scientific Visualization
  *  Computational Geometry
  *  Computational Photography
  *  Visual Analytics
  *  Volume Rendering
  *  Computational Fabrication
  *  3D Reconstruction
  *  Graphical Human-Computer Interaction
  *  Sketch-based Modelling
  *  Textures

——————————————————————————————————————————————————————

Submission Guidelines:

1. Submission via the EasyChair system (open for submissions from Jan. 1, 2023):

https://easychair.org/conferences/?conf=cgi2023

2. Submission timelines

2.1 Paper submissions for Visual Computer Journal:

Submission deadline: March 10, 2023, GMT 23:59
Preliminary notification: April 22, 2023
Deadline to receive revised papers from authors: May 18, 2023
Final notification of revised papers: June 15, 2023

2.2 Submissions for CAVW Journal, VRIH Journal, and CGI LNCS Proceedings:

Submission deadline: June 12, 2023, GMT 23:59
Paper notification: July 13, 2023
Camera-ready version: August 5, 2023

For all paper calls, submissions should consist of 8-12 pages. Templates for the full paper submission are available in Microsoft Word (http://www.cgs-network.org/cgi18/wp-content/uploads/2018/01/CGI2018_Word.zip) and LaTeX (http://www.cgs-network.org/cgi18/wp-content/uploads/2018/01/CGI2018_latex.zip) formats. Papers should be submitted in PDF format. The PDF must NOT contain any author name or affiliation (blind submission). You may include videos in MP4, WMV, or AVI format in the EasyChair system with your paper submission. For multiple videos, please use a zip file. Please note that there is a maximum file size of 40 MB per submission.

We strongly encourage authors to improve the reproducibility of their research along three directions: open data, open implementations, and appropriate evaluation design and reporting. Where possible, we invite authors to use open data or to make their data and code available for open access by other researchers.

Note that for ALL submissions the review process is double-blind, which requires the paper and all supplemental materials to be anonymous. Ensure that self-references are anonymous (refer to your own prior work by name in the third person rather than as "I" or "we"). Avoid providing information that may identify the authors in the acknowledgments (e.g., co-workers and grant IDs) or in the supplemental material (e.g., titles in the movies, or attached papers). Avoid providing links to websites that identify the authors. Violation of any of these guidelines will lead to rejection without review.

——————————————————————————————————————————————————————

Call for CGI 2023 Workshops

The CGI 2023 conference will host a variety of satellite events including workshops, challenges, and tutorials. Workshops have become essential components of CGI conferences, particularly as the field has undergone steady growth and has expanded into a diverse set of areas. The deadline for submission of workshop/special session proposals is 25 February 2023. Final decisions on revised proposals will be announced on 8 March 2023. More details can be found on the website:

http://www.cgs-network.org/cgi23/workshops-special-sessions/

——————————————————————————————————————————————————————

Call for CGI 2023 Challenges

Among other satellite events, challenges have become an integral part of CGI 2023. Their aim is to provide a fair and direct comparison of different methodological solutions to a common problem. Challenges should address a well-defined open problem relevant to computer graphics and virtual reality, provide high-quality data for testing/training algorithms, and define a clear assessment procedure. Examples of topics from previous challenges include rendering, modeling, animation, segmentation, detection, and visualization. Proposals related to accessible, fair, responsible, and translational graphics applications are particularly welcome. The deadline for submission of challenge proposals is 25 February 2023. Final decisions on revised proposals will be announced on 8 March 2023. More details can be found on the website:

http://www.cgs-network.org/cgi23/cgi-challenge/

——————————————————————————————————————————————————————

Honorary Conference Chairs

Enhua Wu                                      Chinese Academy of Sciences / University of Macau, China

Dagan Feng                                   The University of Sydney, Australia

Conference Chairs

Nadia Magnenat Thalmann             University of Geneva

Bin Sheng                                      Shanghai Jiao Tong University

Jinman Kim                                   The University of Sydney

Program Chairs

Daniel Thalmann                           École Polytechnique Fédérale de Lausanne (EPFL)

Stephen Lin                                   Microsoft Research

Lizhuang Ma                                 Shanghai Jiao Tong University

Ping Li                                          Hong Kong Polytechnic University  

——————————————————————————————————————————————————————

Contact:

For any questions regarding the CGI 2023 conference, please contact the organizing committee by email at: vrar@cs.sjtu.edu.cn

EURASIP Journal on Image and Video Processing Free Webinar on January 12, 2023 (12:30pm CET)

Date & Time: January 12, 2023 at 12:30 p.m. CET [6:30 a.m. New York] – [12:30 p.m. Paris/Berlin] – [6:30 p.m. Beijing]
Title: Generative Volumetric Video for Interactive Virtual Humans
Speaker: Peter Eisert (Humboldt University Berlin / Fraunhofer HHI)

To join the webinar, please register to receive more details on how to connect.
The registration form can be found at: https://forms.gle/9JCc6NBgM1x2kZK6A

or via the website of the journal at: https://jivp-eurasipjournals.springeropen.com/
Contact: Esinu Abadjivor <esinu.abadjivor@springernature.com>

Abstract: Photo-realistic modeling and rendering of humans is extremely important for virtual reality (VR) environments: the human body and face are highly complex and exhibit large shape variability, and, especially, humans are extremely sensitive to looking at other humans. Furthermore, in VR environments, interactivity plays an important role. While purely computer graphics modeling can achieve highly realistic human models, achieving true photo-realism with these models is computationally extremely expensive. Hence, more and more hybrid methods have been proposed in recent years. This talk addresses the creation of high-quality animatable volumetric video content of human performances. Recent progress in learning human motion and appearance, neural rendering, and implicit function rendering (e.g., NeRF) enables the creation of realistic representations of virtual humans. We combine these approaches with classical models in order to obtain full control of body and face animation. Different concepts, ranging from animatable volumetric video to purely data-driven representations, will be presented that target the realistic animation of virtual humans for different applications.
Speaker Bio: Peter Eisert is Professor for Visual Computing at Humboldt University Berlin and heads the Vision & Imaging Department at Fraunhofer HHI. In 2000, he received his PhD "with highest honors" from the University of Erlangen. In 2001, he worked as a postdoctoral fellow at Stanford University. He has coordinated and initiated numerous national and international third-party funded projects with a total budget of more than 25 million euros. He has published more than 250 papers and is Associate Editor of the International Journal of Image and Video Processing as well as the Journal of Visual Communication and Image Representation. His research interests include 3D image analysis and synthesis, face and body processing, computer vision, computer graphics, and deep learning in application areas such as multimedia, production, security, and medicine.

Webinar videos are available online at https://vimeo.com/showcase/8005816.