AI Red-Team Assessment CLI toolkit for evaluating adversarial robustness of locally-hosted language models
darkarts added to PyPI
The 'darkarts' project was added to PyPI as an AI Red-Team Assessment CLI toolkit for evaluating the adversarial robustness of locally-hosted language models. Attackers could use such a toolkit to probe for, and potentially exploit, vulnerabilities in AI systems, posing a risk to organizations running AI models. Immediate review and verification of installed packages is recommended to guard against supply chain attacks.
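As a starting point for the recommended package review, a minimal sketch of checking the current Python environment for packages on a watchlist, using only the standard library (`importlib.metadata`, Python 3.8+); the `find_installed` helper and the watchlist contents are illustrative, not part of the advisory:

```python
# Sketch: report which watchlisted package names are installed in the
# current Python environment (stdlib only, Python 3.8+).
from importlib.metadata import distributions

def find_installed(watchlist):
    """Return the subset of `watchlist` names found among installed distributions."""
    installed = {
        d.metadata["Name"].lower()
        for d in distributions()
        if d.metadata["Name"]  # skip distributions with missing metadata
    }
    return sorted(name for name in watchlist if name.lower() in installed)

# 'darkarts' is the package named in this advisory; extend as needed.
hits = find_installed(["darkarts"])
if hits:
    print("Review these installed packages:", hits)
else:
    print("No watchlisted packages found in this environment.")
```

This only inspects the environment the script runs in; each virtual environment and interpreter on a host would need to be checked separately, and a dedicated auditing tool is preferable for fleet-wide verification.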