Latest papers

5 papers
defense · BigData Congress · Dec 9, 2025

Secure and Privacy-Preserving Federated Learning for Next-Generation Underground Mine Safety

Mohamed Elmahallawy, Sanjay Madria, Samuel Frimpong · Washington State University · Missouri University of Science and Technology

Defends FL in underground mining against gradient inversion and membership inference attacks using Decentralized Functional Encryption

Model Inversion Attack · Membership Inference Attack · federated-learning · timeseries
PDF
attack · arXiv · Oct 22, 2025

Can You Trust What You See? Alpha Channel No-Box Attacks on Video Object Detection

Ariana Yi, Ce Zhou, Liyang Xiao et al. · Mission San Jose High School · Missouri University of Science and Technology +1 more

No-box adversarial attack that exploits RGBA alpha-channel blending in video to fool object detectors and VLMs with a 100% success rate

Input Manipulation Attack · Prompt Injection · vision · multimodal · nlp
PDF
benchmark · arXiv · Sep 12, 2025

When Your Reviewer is an LLM: Biases, Divergence, and Prompt Injection Risks in Peer Review

Changjia Zhu, Junjie Xiong, Renkai Ma et al. · University of South Florida · Missouri University of Science and Technology +2 more

Evaluates LLM peer reviewers' biases and their susceptibility to indirect prompt injection via covert instructions embedded in academic paper PDFs

Prompt Injection · nlp
PDF
defense · arXiv · Aug 13, 2025

Detecting Untargeted Attacks and Mitigating Unreliable Updates in Federated Learning for Underground Mining Operations

Md Sazedur Rahman, Mohamed Elmahallawy, Sanjay Madria et al. · Missouri University of Science and Technology · Washington State University

Defends federated learning against Byzantine sign-flipping and additive noise attacks in underground mining sensor networks

Data Poisoning Attack · federated-learning · timeseries
PDF · Code
survey · arXiv · Aug 7, 2025

Guardians and Offenders: A Survey on Harmful Content Generation and Safety Mitigation of LLMs

Chi Zhang, Changjia Zhu, Junjie Xiong et al. · University of South Florida · Missouri University of Science and Technology

Surveys LLM jailbreaking attacks, unintentional toxicity, multimodal exploits, and safety mitigations, including RLHF and alignment techniques

Prompt Injection · nlp · multimodal
PDF