LLMs can be easily manipulated through adversarial attacks. This project systematically tests LLMs against adversarial attacks, measures their robustness, and proposes defenses.
To run Mistral models locally via Hugging Face, follow the steps below:
If you don’t already have one, you’ll need a Hugging Face account.
🔗 Sign up or log in here: https://huggingface.co/join
Some models, like Mistral, are gated and require you to request access first:
➡️ Visit the model page (e.g., Mistral-7B-Instruct-v0.1) and click "Access repository" if needed.
To authenticate and download models programmatically, you’ll need a personal access token.
🔗 Generate a token here: https://huggingface.co/settings/tokens
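As one option (a sketch, not the only way), recent versions of the huggingface_hub library also pick up a token from the HF_TOKEN environment variable, so you can set it before loading anything:

```python
# A minimal sketch: expose the token via the HF_TOKEN environment variable,
# which recent versions of huggingface_hub read automatically.
# "hf_xxx" is a placeholder; substitute your real token (and keep it out of git).
import os

os.environ["HF_TOKEN"] = "hf_xxx"
```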
When running inference locally (e.g., loading models with the transformers library’s AutoModelForCausalLM), you’ll be prompted to enter the token, or you can provide it programmatically.
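For illustration, here is a minimal loading-and-generation sketch; the model ID is the gated Mistral repo above, while the prompt and generation settings are placeholders:

```python
# A minimal sketch of local inference with transformers; assumes the token
# is already configured (via login or HF_TOKEN) and enough GPU/CPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" needs the accelerate package; it places model weights
# on the available devices automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```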
Enter this token when prompted by the following command: !huggingface-cli login
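Alternatively, a short sketch of logging in programmatically instead of using the CLI:

```python
# Programmatic alternative to `huggingface-cli login`, using huggingface_hub.
# "hf_xxx" is a placeholder token; never hard-code a real token in shared code.
from huggingface_hub import login

login(token="hf_xxx")
```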