survey · arXiv · Oct 7, 2025
Asif Shahriar, Md Nafiu Rahman, Sadif Ahmed et al. · BRAC University · Qatar Computing Research Institute
First holistic survey of LLM agentic security covering 160+ papers across applications, threats, and defenses
Prompt Injection · Excessive Agency · Insecure Plugin Design · nlp
In this work we present the first holistic survey of the agentic security landscape, structuring the field around three fundamental pillars: Applications, Threats, and Defenses. We provide a comprehensive taxonomy of over 160 papers, explaining how agents are used in downstream cybersecurity applications, inherent threats to agentic systems, and countermeasures designed to protect them. A detailed cross-cutting analysis shows emerging trends in agent architecture while revealing critical research gaps in model and modality coverage. A complete and continuously updated list of all surveyed papers is publicly available at https://github.com/kagnlp/Awesome-Agentic-Security.
llm
attack · arXiv · Nov 13, 2025
Saadat Rafid Ahmed, Rubayet Shareen, Radoan Sharkar et al. · BRAC University
Proposes adversarial text attacks on NLP transfer models using obfuscated high-perplexity examples, including Bangla language
Input Manipulation Attack · nlp
Advancements in machine learning and neural networks in recent years have led to widespread, successful applications of Natural Language Processing across a variety of fields, solving a wide range of complicated problems. However, recent research has shown that machine learning models may be vulnerable in a number of ways, putting both the models and the systems they're used in at risk. In this paper, we analyze and experiment with the best existing adversarial attack recipes and create new ones. We develop a novel adversarial attack strategy against current state-of-the-art machine learning models, producing ambiguous inputs that confound them and charting a path toward future improvements in model robustness. We craft adversarial instances with maximum perplexity, using machine learning and deep learning approaches to trick the models. In our attack recipe, we analyze several datasets, focus on creating obfuscated adversarial examples that put the models in a state of perplexity, and extend adversarial attacks to the Bangla language. We strictly uphold resource-usage reduction and efficiency throughout our work.
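The abstract's core idea, perturbing text so that a language model assigns it maximum perplexity while a human still reads the original meaning, can be sketched as a greedy search. Everything below is an illustrative assumption, not the paper's actual recipe: the substitution map, the toy character-bigram language model standing in for a real LM, and the `obfuscate` helper are all hypothetical.

```python
# Minimal sketch of a perplexity-maximizing text perturbation.
# Illustrative only: a real attack would score candidates with the
# victim NLP model or a large pretrained LM, not this toy bigram LM.
import math
from collections import Counter

def bigram_logprob(text, counts, total):
    """Add-one-smoothed character-bigram log-probability (toy LM)."""
    lp = 0.0
    for a, b in zip(text, text[1:]):
        lp += math.log((counts.get(a + b, 0) + 1) / (total + 1))
    return lp

def perplexity(text, counts, total):
    n = max(len(text) - 1, 1)
    return math.exp(-bigram_logprob(text, counts, total) / n)

# Hypothetical obfuscation map: visually similar substitutions that
# keep the text human-readable but unusual to the model.
SUBS = {"o": "0", "l": "1", "e": "3", "a": "@"}

def obfuscate(text, counts, total, budget=3):
    """Greedily apply up to `budget` substitutions, each time picking
    the single character swap that most increases perplexity."""
    current = text
    for _ in range(budget):
        best, best_ppl = current, perplexity(current, counts, total)
        for i, ch in enumerate(current):
            if ch in SUBS:
                cand = current[:i] + SUBS[ch] + current[i + 1:]
                ppl = perplexity(cand, counts, total)
                if ppl > best_ppl:
                    best, best_ppl = cand, ppl
        if best == current:  # no swap helps any further
            break
        current = best
    return current

# Fit the toy LM on some "natural" text, then attack a sentence.
corpus = "the quick brown fox jumps over the lazy dog " * 50
counts = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
total = sum(counts.values())
adv = obfuscate("hello world", counts, total)
```

The greedy loop is one simple choice among many; the same scoring function could drive beam search or gradient-guided substitution, and for Bangla the substitution map would target visually confusable Bengali characters instead.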
transformer