DeepLearn 2022 Summer: regular registration July 22
June 29th, 2022
Daniela Lopez de Luise
PAPER SUBMISSION EXTENDED: 16th IEEE International Conference on Application of ICT (AICT2022) | October 12-14 | Washington DC
June 29th, 2022
Daniela Lopez de Luise
ECI:METADATA 2022 Competition
June 29th, 2022
Daniela Lopez de Luise
Cash-flow projection for the lending company "Hopp Créditos SA" (Hopp)
AlixPartners once again invites us to dig into a business case. The objective is to help a consumer-lending company predict the payment flow of its loans over the next six months. The company wants to understand the health of its portfolio, which will allow it to make sound decisions about future cash flow. Is it possible, using time series of historical cash flows, to predict the payment dynamics of its active loans?
Share your proposed solutions before July 22. We look forward to seeing you!
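As a starting point for the challenge, a simple baseline is to fit a classical time-series model to the historical monthly repayment series and extrapolate six months ahead. The sketch below uses simple exponential smoothing on hypothetical monthly figures; the actual data, units, and evaluation criteria are defined by the competition, not here.

```python
# Illustrative baseline for the Hopp cash-flow challenge: forecast six
# months of loan repayments from a historical monthly series using
# simple exponential smoothing. The payment figures are hypothetical.

def ses_forecast(series, alpha=0.5, horizon=6):
    """Simple exponential smoothing: every future month is forecast as
    the final smoothed level of the historical series."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return [round(level, 2)] * horizon

# Hypothetical monthly repayments received by the lender (in thousands)
payments = [120, 125, 118, 130, 128, 135, 140, 138, 142, 145, 150, 148]

forecast = ses_forecast(payments, alpha=0.4, horizon=6)
print(forecast)
```

A flat-level forecast like this ignores trend and seasonality, which real loan portfolios typically exhibit; it is only meant to show the shape of the problem (a univariate series in, a six-month horizon out) before moving to richer models.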
ECCV 2022 Workshop on “Text in Everything” (TiE)
June 29th, 2022
Daniela Lopez de Luise
===================================================
Text in Everything Workshop (TiE)
Tel Aviv, Israel, October 2022
https://sites.google.com/view/tie-eccv2022/home
in conjunction with ECCV 2022
=======================================================
Understanding written communication through vision is a key aspect of human civilization and should also be an important capacity of intelligent agents aspiring to function in man-made environments. Interpreting written information in our environment is essential for performing most everyday tasks: making a purchase, using public transportation, finding a place in the city, getting an appointment, or checking whether a store is open, to mention just a few. As such, the analysis of written communication in images and videos has recently gained increased interest, along with significant progress in a variety of text-based vision tasks. While in earlier years the main focus of this discipline was OCR and the ability to read business documents, today the field encompasses applications that go beyond text recognition and additionally require reasoning over multiple modalities, such as the structure and layout of documents.
Recent advances in this field have resulted from a multi-disciplinary perspective spanning not only computer vision but also natural language processing, document and layout understanding, knowledge representation and reasoning, data mining, information retrieval, and more. The goal of this workshop is to raise awareness of these topics in the broader computer vision community and to bring vision, NLP, and other researchers together to drive a new wave of progress by cross-pollinating ideas between text/document understanding and other fields.
The workshop will be a hybrid, full-day event comprising invited talks, oral and poster presentations of submitted papers, and a special challenge on out-of-vocabulary scene text understanding.
Keynote speakers
- Xiang Bai (Huazhong University)
- Tal Hassner (Meta AI)
- Aishwarya Agrawal (University of Montreal, DeepMind)
- Sharon Fogel (AWS AI Labs)
Topics of Interest
The workshop welcomes original work on any text-dependent computer vision application, such as:
- Scene text understanding
- Scene text VQA
- Image-text aware cross-modal retrieval
- Image-text for fine-grained classification
- Text in video
- Document VQA
- Document layout prediction
- Table detection
- Information extraction
Challenge on Out-of-Vocabulary Scene Text Understanding
A challenge on Out-of-Vocabulary Scene Text Understanding (OOV-ST) will be organised in the context of this workshop. The OOV-ST challenge aims to evaluate the ability of text extraction models to handle out-of-vocabulary (OOV) words, i.e. words never encountered in the training sets of the most common scene text understanding datasets to date. The challenge is organised jointly by Amazon Research, Google Research, Meta AI, and the Computer Vision Center.
To participate in the OOV-ST Challenge, please join through the RRC Portal.
Important dates
Paper Submission Deadline: July 17, 2022
Notification to Authors: August 8, 2022
Workshop Camera Ready Due: August 15, 2022
Workshop Date: October 2022
Organisers
Ron Litman, AWS AI Labs
Aviad Aberdam, AWS AI Labs
Shai Mazor, AWS AI Labs
Hadar Averbuch-Elor, Cornell University
Dimosthenis Karatzas, Computer Vision Center / Autonomous University of Barcelona
R. Manmatha, AWS AI Labs
ECCV 2022 :: AIMIA workshop: Digital Pathology & Radiology/COVID19 :: Call for Papers [UPCOMING DEADLINE]
June 29th, 2022
Daniela Lopez de Luise
The AIMIA workshop addresses CT/MRI/X-ray analysis and processing and aims to identify research opportunities in the context of Digital Pathology and Radiology/COVID19.
AIMIA is jointly organised by INESCTEC (Portugal), NTUA (Greece), IMP Diagnostics (Portugal), Radboudumc (The Netherlands), Karolinska Institutet (Sweden), Google Health (USA) and the University
of Lincoln (UK). For more information please visit http://vcmi.inesctec.pt/aimia_eccv
***** IMPORTANT DATES *****
Submission deadline: July 08, 2022
Author notification: August 05, 2022
Camera-ready deadline: August 12, 2022
AIMIA workshop: October 2022 (T.B.D.)
***** KEYNOTE SPEAKERS *****
Dimitri Metaxas, Rutgers University, USA
Inti Zlobec, University of Bern, Switzerland
Henning Müller, HES-SO Vallais-Wallis, Switzerland
***** TOPICS OF INTEREST *****
The AIMIA workshop welcomes works that focus on (but are not limited to):
- Semi-/weakly-/self-supervised learning methodologies;
- Detection, classification and segmentation;
- Disease diagnosis, grading and prognosis;
- Treatment response prediction;
- Detection of tissue biomarkers with predictive/prognostic value;
- Image registration;
- Explainable AI;
- Clinical applications;
applied to Digital Pathology (TRACK A) and Radiology/COVID19 (TRACK B).
The workshop also invites submissions to the 2nd COV19D competition, organized within TRACK B: https://mlearn.lincoln.ac.uk/eccv-2022-ai-mia/
***** PAPER SUBMISSION *****
Submitted manuscripts should be anonymised and formatted according to the ECCV style, with a maximum of 14 pages, including images and tables and excluding cited references.
Accepted papers will be published by Springer as part of the ECCV 2022 proceedings (workshops volume).
Do you want to submit your work? Please access https://cmt3.research.microsoft.com/AIMIA2022.
***** CONTACTS *****
Jaime Cardoso (jaime.cardoso@inesctec.pt)