Scroll
AI red teaming
A practice where AI systems are subjected to adversarial testing by expert teams (red teams) to identify vulnerabilities, biases, or potential failures in order to improve the safety and robustness of the models.
A practice where AI systems are subjected to adversarial testing by expert teams (red teams) to identify vulnerabilities, biases, or potential failures in order to improve the safety and robustness of the models.