This repository serves as a comprehensive resource hub for AI Governance. The rapid advancement of AI has introduced complex technical vulnerabilities and societal risks, underscoring the pressing need for a holistic governance framework. This collection aims to bridge the gap between technical research, policy-making, and real-world application by curating not only academic papers but also key policies and regulations, impactful news and case studies, practical technical tools, and crucial datasets.
Our organizing principle is derived from the framework in our survey, "Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance," which categorizes AI Governance into three pillars:
- Intrinsic Security: Internal system reliability and robustness.
- Derivative Security: External, real-world harms from AI deployment.
- Social Ethics: Value alignment, accountability, and societal impact.
This repository aims to provide researchers, engineers, policymakers, and the public with a structured guide to navigating the multifaceted landscape of AI Governance.
🌻 We welcome contributions! Please feel free to open a pull request or issue to add more resources. For academic papers, please use the format below. For other resources like policies or tools, the "Classification" column can be adapted accordingly (e.g., "EU AI Act", "Detection Tool").
Title | Link | Code/Source | Venue/Issuer | Classification | Model | Tag |
---|---|---|---|---|---|---|
Paper Title | arxiv | github | Venue'YY | 1.1 Adversarial Vulnerability | LLM | Defense |
- [2025.09.11] 📠 New "News & Case Studies" Section Launched! This section curates important news and case studies related to AI safety and governance, focusing on incidents of AI misuse, negative impacts, and ethical controversies. It aims to give researchers, policymakers, and the public a real-time understanding of developments in the field.
- [2025.08.08] 🚀 Repository launched! This is a comprehensive hub for AI Governance, including papers, policies, tools, and datasets, with a detailed taxonomy based on our TPAMI 2025 survey.
- Policies & Regulations
- Surveys
- Books
- News & Case Studies
- Papers
- 1. Intrinsic Security
- 2. Derivative Security
- 3. Social Ethics
- Datasets & Benchmarks
- 1. Intrinsic Security
- 2. Derivative Security
This repository is maintained by the authors of the survey "Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance" (submitted to TPAMI). The taxonomy and organization are derived directly from this work. We are inspired by the open-source spirit of repositories like Awesome-LM-SSP, LLM Security, and Awesome LLM Security.