What is an Adversarial Attack?
An Adversarial Attack is a deliberate attempt to manipulate or deceive a machine learning model by feeding it carefully crafted input data. These attacks exploit weaknesses in the model's learned decision boundaries, causing it to misclassify inputs or otherwise produce incorrect outputs.
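To make the idea concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one common adversarial attack. The tiny classifier, image shape, label, and epsilon value are illustrative assumptions, not part of the definition above; any differentiable PyTorch classifier could be attacked the same way.

```python
# A minimal FGSM sketch: nudge the input in the direction that increases
# the model's loss, bounded by a small budget epsilon, so the perturbed
# input looks almost unchanged but is more likely to be misclassified.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return a perturbed copy of x intended to fool the model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then keep pixels valid.
    perturbation = epsilon * x_adv.grad.sign()
    return (x_adv + perturbation).clamp(0, 1).detach()

# Hypothetical usage with a stand-in classifier and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # fake 28x28 grayscale image
y = torch.tensor([3])          # its assumed true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max()) # perturbation stays within epsilon
```

The key point the sketch illustrates is that the change to the input is tiny and deliberate: it is computed from the model's own gradients rather than added as random noise.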