NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment
Large language model (LLM) adversarial attacks refer to techniques that deceive LLMs through carefully-designed input samples (adversarial samples) to produce incorrect predictions or behaviors. In this regard, AI-Scan provides LLM adversarial defense capability assessment, allowing users to select an adversarial attack assessment template for one-click task assignment and generate an adversarial defense capability assessment report. […]