Specifically note constant-time comparisons and auth flow (#957)
Extend the AI guidance to specifically discuss constant-time
comparisons. It's common to fail to do this, so
give specific instructions to counter this tendency.
Signed-off-by: David A. Wheeler <[email protected]>
docs/Security-Focused-Guide-for-AI-Code-Assistant-Instructions.md (3 additions, 2 deletions)
```diff
@@ -34,7 +34,8 @@ User inputs should be checked for expected format and length.
 Always validate function arguments and use parameterized queries for database access.
 Escape special characters in user-generated content before rendering it in HTML.
 When generating output contexts such as HTML or SQL, use safe frameworks or encoding functions to avoid vulnerabilities.
-Use secure authentication flows and enforce role-based access checks where appropriate.
+Never include API keys, passwords, or secrets in code output, and use environment variables or secure vault references instead. Use secure authentication flows (for instance, using industry-standard libraries for handling passwords or tokens) and enforce role-based access checks where appropriate.
+Use constant-time comparison when timing differences could leak sensitive information, such as when comparing session identifiers, API keys, authentication tokens, password hashes, or nonces.
 When generating code, handle errors gracefully and log them, but do not expose internal details or secrets in error messages
 Use logging frameworks that can be configured for security.
 Prefer safe defaults in configurations – for example, use HTTPS by default, require strong encryption algorithms, and disable insecure protocols or options.
```
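The new constant-time comparison guideline can be sketched with Python's standard-library `hmac.compare_digest`; the `verify_token` helper and token values here are hypothetical, for illustration only:

```python
import hmac

def verify_token(supplied: str, expected: str) -> bool:
    """Compare two secrets in constant time.

    A plain `==` on strings short-circuits at the first differing
    byte, so response-time measurements can leak a secret
    byte-by-byte. hmac.compare_digest takes the same time no matter
    where (or whether) the inputs differ.
    """
    return hmac.compare_digest(supplied.encode(), expected.encode())

# Hypothetical session token check:
assert verify_token("tok_abc123", "tok_abc123")
assert not verify_token("tok_abc124", "tok_abc123")
```

Note that `compare_digest` only defends against the timing of the comparison itself; both arguments should already be fixed-length digests or tokens rather than raw passwords.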
```diff
@@ -99,7 +100,7 @@ If you see an issue in specific results, ask something like:
 One of the first sections in your instructions should reinforce general secure coding best practices. These principles apply to all languages and frameworks, and you want the AI to **always** keep them in mind when generating code:
 
 * **Input Validation & Output Encoding:** Instruct the AI to treat all external inputs as untrusted and to validate them. *Example: "user inputs should be checked for expected format and length"*. Any output should be properly encoded to prevent injection attacks such as SQL injection or cross-site scripting (XSS). *Example: "Always validate function arguments and use parameterized queries for database access"* and *"Escape special characters in user-generated content before rendering it in HTML"*. Similarly, specify that when generating output contexts such as HTML or SQL, the assistant should use safe frameworks or encoding functions to avoid vulnerabilities. [[swaroopdora2025b]](#swaroopdora2025b)[[wiz2025b]](#wiz2025b)[[haoyan2025b]](#haoyan2025b)
-* **Authentication, Authorization & Secrets Management:** Emphasize that credentials and sensitive tokens must never be hard-coded or exposed. Your instructions can say: *"Never include API keys, passwords, or secrets in code output, and use environment variables or secure vault references instead"*. Also instruct the AI to use secure authentication flows (for instance, using industry-standard libraries for handling passwords or tokens) and to enforce role-based access checks where appropriate. [[hammondpearce2021a]](#hammondpearce2021a)[[neilperry2022a]](#neilperry2022a)[[swaroopdora2025c]](#swaroopdora2025c)
+* **Authentication, Authorization & Secrets Management:** Emphasize that credentials and sensitive tokens must never be hard-coded or exposed, that authentication flows must be secure, and that comparisons must be constant-time when appropriate. Your instructions could say: *"Never include API keys, passwords, or secrets in code output, and use environment variables or secure vault references instead. Use secure authentication flows (for instance, using industry-standard libraries for handling passwords or tokens) and enforce role-based access checks where appropriate. Use constant-time comparison when timing differences could leak sensitive information, such as when comparing session identifiers, API keys, authentication tokens, password hashes, or nonces."* [[hammondpearce2021a]](#hammondpearce2021a)[[neilperry2022a]](#neilperry2022a)[[swaroopdora2025c]](#swaroopdora2025c)
 * **Error Handling & Logging:** Guide the AI to implement errors securely by catching exceptions and failures without revealing sensitive info (stack traces, server paths, etc.) to the end-user. In your instructions, you might include: *"When generating code, handle errors gracefully and log them, but do not expose internal details or secrets in error messages".* This ensures the assistant's suggestions include secure error-handling patterns (like generic user-facing messages and detailed logs only on the server side). Additionally, instruct the AI to use logging frameworks that can be configured for security (e.g. avoiding logging of personal data or secrets). [[swaroopdora2025d]](#swaroopdora2025d)
 * **Secure Defaults & Configurations:** Include guidance such as: *"Prefer safe defaults in configurations – for example, use HTTPS by default, require strong encryption algorithms, and disable insecure protocols or options".* By specifying this, the AI will be more likely to generate code that opts-in to security features. Always instruct the AI to follow the principle of least privilege (e.g. minimal file system permissions, least-privileged user accounts for services, etc.) in any configuration or code it proposes. [[wiz2025c]](#wiz2025c)[[swaroopdora2025e]](#swaroopdora2025e)
 * **Testing for Security:** Encourage the AI to produce or suggest tests for critical code paths including negative tests that verify that what shouldn't happen, doesn't happen. In your instructions, add: *"When applicable, generate unit tests for security-critical functions (including negative tests to ensure the code fails safely)"*. [[anssibsi2024c]](#anssibsi2024c)[[markvero2025b]](#markvero2025b)
```
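The input-validation and output-encoding guidance above ("use parameterized queries", "escape special characters before rendering in HTML") can be sketched briefly in Python; the table, column, and length limit here are hypothetical, illustrative choices:

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")  # hypothetical schema

def add_user(name: str) -> None:
    # Validate expected format and length before use.
    if not (0 < len(name) <= 64) or not name.isprintable():
        raise ValueError("invalid user name")
    # Parameterized query: the driver handles quoting, so user input
    # is never spliced into the SQL text itself.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

def render_user(name: str) -> str:
    # Escape special characters before embedding in HTML.
    return f"<li>{html.escape(name)}</li>"

add_user("alice")
print(render_user("<script>alert(1)</script>"))
# The <script> tag is rendered inert as &lt;script&gt;...
```

The same pattern applies in any language: keep data and query text separate, and encode for the specific output context (HTML, SQL, shell) at the point of output.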
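The secrets-management instruction ("use environment variables or secure vault references instead") might look like this in practice; `MYAPP_API_KEY` is a hypothetical variable name for this sketch:

```python
import os

def get_api_key() -> str:
    # Read the secret from the environment instead of hard-coding it
    # in source, so it never lands in version control or code output.
    key = os.environ.get("MYAPP_API_KEY")
    if key is None:
        # Fail loudly rather than falling back to a baked-in default.
        raise RuntimeError("MYAPP_API_KEY is not set")
    return key
```

In a real deployment the variable would be injected by the orchestrator or a vault integration, not set in code.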
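The error-handling bullet (generic user-facing messages, detailed logs only on the server side) can be sketched as follows; `handle_request` and its payload shape are hypothetical:

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def handle_request(payload: dict) -> dict:
    try:
        return {"status": "ok", "result": 1 / payload["divisor"]}
    except Exception:
        # Full details (including the stack trace) go to the
        # server-side log only...
        log.exception("request failed")
        # ...while the client sees a generic message that exposes
        # no internal paths, secrets, or stack frames.
        return {"status": "error", "message": "An internal error occurred."}
```

The second branch also doubles as a negative test target for the "Testing for Security" bullet: a unit test can assert that a bad payload yields the generic message and nothing more.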