CALL FOR BOOK CHAPTER (Adversarial Multimedia Forensics)

We are pleased to invite you to submit a chapter for inclusion in the “Adversarial Multimedia Forensics” book, to be published in the Springer series Advances in Information Security. Chapter submissions should be 15 to 20 pages long, single-spaced, and single-column in LaTeX, and should be accessible to readers and professionals in cybersecurity applications, particularly multimedia forensics and security. Multimedia forensics, counter-forensics, and anti-counter-forensics constitute the three primary sections of the book.

Looking For: We are seeking chapters that apply security and attack principles, methodologies, and strategies to the multimedia domain, i.e., images and videos, as outlined in this call. Examples from multimedia forensics include source camera identification and fine-grained CFA-artifact assessment for forgery detection. Adversarial attacks on medical images and the transferability of adversarial attacks to vision transformers are examples of counter-forensics. An adversary-aware double-JPEG detector based on selective training on attacked samples and contrast-manipulation identification resistant to JPEG compression are examples of anti-counter-forensics against exploratory attacks, while defenses against poisoning attacks on satellite imagery models are examples of anti-counter-forensics against causative attacks.

Machine Learning (ML) is becoming the de facto standard for Multimedia Forensics (MF) due to its exceptional capabilities. However, the peculiarities of ML architectures introduce new, significant security vulnerabilities that hinder their use in security-critical applications like MF, where the potential presence of an adversary cannot be ignored. Moreover, given the weakness of the traces that forensic techniques rely on, disabling forensic analysis often turns out to be a simple task. Assessing the security of ML-based systems in the presence of an adversary and developing innovative strategies capable of improving their protection are therefore of utmost relevance. To overcome the security limitations of ML models in the face of counter-forensic techniques, it has become crucial for MF to develop adversary-aware solutions. This book contributes to this goal by emphasizing image manipulation detection with ML/DL algorithms for MF in adversarial environments. The book is divided into three sections: (I) presents different methodologies in multimedia forensics; (II) discusses general concepts and terminology in adversarial machine learning (Adv-ML), with a focus on counter-forensics (CF); and (III) covers anti-counter-forensics.

Originality: Chapter contributions should contain 25-30% novel content compared to earlier published work by the authors.

Submission: There are no submission or acceptance fees for manuscripts submitted to this book for publication. All manuscripts are accepted on the basis of a double-blind peer review editorial process. Please send your manuscript (*.pdf and *.tex) to the e-mail address of one of the editors (ehsan.nowroozi@eng.bau.edu.tr, Alireza.jolfaei@flinders.edu.au, Kassem.kallas@inria.fr).



Timeline: Expression of interest: 15-Jan-2023 (tentative chapter title and abstract; send by email to the editors); Selection of chapters: 30-Jan-2023 (authors informed by email and given the EasyChair submission link); Deadline for full chapter submission: 30-Feb-2023 (submit via EasyChair); Review of chapters: 30-Mar-2023; Camera-ready version: 20-April-2023 (submit via EasyChair).

Book Areas

As we mentioned, the core of the book consists of: (I) Multimedia Forensics, (II) Counter-Forensics, and (III) Anti-Counter-Forensics. The tentative table of contents will be:

(Part-I) Multimedia Forensics: This section discusses machine learning and deep learning techniques for digital image forensics and image tampering detection. Recent forensic analysis techniques will be covered in this part, including (I) acquisition-based footprints, (II) coding-based footprints, and (III) editing-based footprints.

(Part-II) Counter-Forensics: This section will explain counter-forensics (CF), the counterpart of the detector, which refers to any technique intended to thwart a forensic investigation; it is also referred to as anti-forensics in the literature. Adversarial attacks on a machine learning or deep learning model can be divided into exploratory and causative attacks. This part discusses the various methods that have been proposed so far to defeat a forensic analysis.

Exploratory Attacks: The exploratory attack scenario restricts the adversary to modifying test data and forbids any change to the training examples. Example 1: Adversarial Cross-Modal Attacks from Images to Videos, Example 2: Adversarial Attacks on Medical Images.
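To make the exploratory threat model concrete, here is a minimal, self-contained PyTorch sketch of a one-step gradient-sign (FGSM-style) evasion attack; the classifier, image sizes, and perturbation budget are placeholder assumptions for illustration only, not a method from the chapters.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, eps=0.03):
    """One-step FGSM: perturb test images along the sign of the loss gradient.

    Only the test inputs are modified; the trained model and its training
    data are left untouched, as in the exploratory threat model.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    # Move each pixel by eps in the direction that increases the loss.
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Placeholder classifier and data, just to make the sketch runnable.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
    x = torch.rand(4, 3, 32, 32)           # a small batch of "test" images
    y = torch.randint(0, 2, (4,))          # their (pretended) true labels
    x_adv = fgsm_attack(model, x, y, eps=0.03)
    print((x_adv - x).abs().max())         # perturbation bounded by eps
```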

Causative Attacks: In causative attacks, the adversary can disrupt the training process to inject a backdoor into the model that is exploited later at inference time; these attacks are commonly referred to as poisoning, backdoor, or Trojan attacks. Example 1: Backdoor Attacks against Vision Transformers, Example 2: Performing Backdoor Attacks Using Rotation Transformation.
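As a toy illustration of a causative (poisoning/backdoor) attack, the sketch below stamps a small trigger patch on a fraction of the training images and flips their labels to an attacker-chosen class; the trigger pattern, poisoning rate, and data are illustrative assumptions, not the attacks cited in the examples.

```python
import torch

def poison_dataset(images, labels, target_class=0, rate=0.1, patch=3):
    """Inject a backdoor: stamp a small white patch on a fraction of the
    training images and relabel them as the attacker's target class.

    At inference time, any image carrying the same patch should be
    misclassified as `target_class`, while clean accuracy stays high.
    """
    images, labels = images.clone(), labels.clone()
    n_poison = int(rate * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -patch:, -patch:] = 1.0   # trigger in the bottom-right corner
    labels[idx] = target_class               # label flip towards the target class
    return images, labels, idx

if __name__ == "__main__":
    x = torch.rand(100, 3, 32, 32)           # placeholder training images
    y = torch.randint(0, 5, (100,))          # placeholder labels, 5 classes
    x_p, y_p, poisoned_idx = poison_dataset(x, y, target_class=0, rate=0.1)
    print(len(poisoned_idx), "samples poisoned")
```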

(Part-III) Anti-Counter-Forensics: To protect the reliability of forensic analysis, numerous anti-CF techniques have been developed in response to CF. The majority of these methods are tailored to particular CF methods. This section will explain recent advances in anti-counter-forensics methods.

Defense against Exploratory Attacks: This part will discuss recent methods that have been proposed for improving the security of detectors against exploratory attacks, such as adversary-aware detectors and secure architectures. Example 1: Adversary-Aware Double-JPEG Detector via Selective Training on Attacked Samples, Example 2: Image Contrast Manipulation Identification Resistant to JPEG Compression.
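The following sketch illustrates the general idea behind adversary-aware detectors of this kind: augment each training batch with attacked copies of itself and fit the model on clean and attacked samples jointly (generic adversarial training). It is a simplified illustration under placeholder assumptions, not the double-JPEG detector cited in Example 1.

```python
import torch
import torch.nn as nn

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One adversary-aware training step: craft FGSM-style attacked copies of
    the current batch and fit the model on clean and attacked samples jointly.
    """
    model.train()
    # Craft attacked samples using the model's current gradients.
    x_req = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x_req), y).backward()
    x_adv = (x_req + eps * x_req.grad.sign()).clamp(0, 1).detach()

    # Train on the union of clean and attacked samples.
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(torch.cat([x, x_adv])),
                                       torch.cat([y, y]))
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Placeholder detector and data, just to make the sketch runnable.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.rand(8, 3, 32, 32)
    y = torch.randint(0, 2, (8,))
    print(adversarial_training_step(model, opt, x, y))
```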

Defense against Causative Attacks: This section will survey the recent techniques that have been proposed for enhancing the security of models against poisoning attacks. Example 1: Defenses against Poisoning Attacks on Satellite Imagery Models, Example 2: Using Heatmap Clustering to Detect Deep Neural Network Backdoor Poisoning Attacks.
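As a hint of how such defenses operate, the sketch below uses activation clustering, a common heuristic related in spirit to the heatmap-clustering example: the penultimate-layer activations of samples assigned to one class are split into two clusters, and a strongly imbalanced split is flagged as a possible backdoor. The data, feature dimension, and threshold are illustrative assumptions, not the cited method.

```python
import numpy as np
from sklearn.cluster import KMeans

def flag_suspicious_class(activations, threshold=0.35):
    """Cluster one class's activations into two groups; a strongly imbalanced
    split (one small, tight cluster) is a common symptom of backdoor-poisoned
    samples hiding inside that class.

    `activations` is an (n_samples, n_features) array of penultimate-layer
    features for samples the model assigns to a single class. Returns the
    fraction of samples in the smaller cluster and a suspicion flag.
    """
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(activations)
    minority = min(np.mean(labels == 0), np.mean(labels == 1))
    return minority, minority < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(0.0, 1.0, size=(90, 16))      # placeholder clean activations
    poisoned = rng.normal(4.0, 0.2, size=(10, 16))   # a tight, shifted cluster
    frac, suspicious = flag_suspicious_class(np.vstack([clean, poisoned]))
    print(f"minority fraction = {frac:.2f}, suspicious = {suspicious}")
```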



Book Editors

Dr. Ehsan Nowroozi, Assistant Professor, Bahcesehir University, Istanbul, Turkey (ehsan.nowroozi@eng.bau.edu.tr)

Dr. Alireza Jolfaei, Associate Professor, Flinders University, Adelaide, Australia (Alireza.jolfaei@flinders.edu.au)

Dr. Kassem Kallas, Research Scientist at INRIA, Rennes, France (Kassem.kallas@inria.fr)

Advertisement: https://enowroozi.com/call-for-book-chapter-adversarial-multimedia-forensics/
