## 🎯 Workshop Overview
Don’t rely on heroics. Encode security into policy and detection — so the workflow itself prevents mistakes.
| | Workshop | Focus |
|---|---|---|
| 🛡️ | WS1 — Trust Boundary & Platform Trust | WHO can access WHAT, and WHERE does code live? |
| 🔒 | WS2 — Secure by Design Guardrails (YOU ARE HERE) | WHAT prevents bad code from landing in production? |
| 🔗 | WS3 — Supply Chain Integrity & Code-to-Cloud Visibility | Can we TRUST the pipeline and every artifact it produces? |
| 🔄 | WS4 — Operational Response & Continuous Improvement | WHEN something goes wrong, how do we detect, respond, and improve? |
This workshop delivers all three layers of the guardrails pyramid from the Agentic DevSecOps presentation, plus AI-assisted remediation:
```
┌─────────────┐
│   Design    │ ← Warm-Up: THREAT-MODEL.md + data classification
├─────────────┤
│   Policy    │ ← Exercise 1: Rulesets, required status checks
├─────────────┤
│  Detection  │ ← Exercise 2: Push protection, CodeQL, dependency review
└─────────────┘
       +
AI Remediation  ← Exercise 3: Autofix quality + security campaigns
```
### Driving Question

“WHAT prevents bad code from landing in production?”

### Key Insight

Guardrails are not scans bolted on after the fact. They are policy + detection embedded in the workflow, enhanced by AI remediation.
| Attribute | Value |
|---|---|
| Duration | 30–45 minutes |
| Exercises | Warm-Up + 3 Exercises (Design → Policy → Detection → AI Remediation) |
| Target Audience | Developers, platform engineers, security champions |
| NIST SSDF Group | PW — Produce Well-Secured Software |
### Shared Scenario
All workshops in the series use a common scenario: a development team working in a GitHub Enterprise Cloud with Data Residency (Japan) environment, building and deploying a web application through a GitHub-managed CI/CD pipeline.
## 📚 Learning Objectives
By the end of this workshop, you will be able to:
- Explain the 3-layer guardrails model: Design → Policy → Detection, and why each layer is necessary
- Configure org-level rulesets and required status checks as policy guardrails that block merges regardless of human agreement
- Experience push protection, code scanning (CodeQL), and dependency review — three distinct detection mechanisms operating at different points in the workflow
- Evaluate Copilot Autofix suggestions critically — rating fixes as ✅ Correct, ⚠️ Partially correct, or ❌ Incorrect
- Use security campaigns to manage security debt at scale with Copilot coding agent
## 🔐 NIST SSDF Alignment
| SSDF Practice | Description | Workshop Coverage |
|---|---|---|
| PW.1.1 | Design software to meet security requirements; use threat modeling | Warm-Up: THREAT-MODEL.md |
| PW.6.1 | Use automated tools in the build process to check for vulnerabilities | Exercise 2: CodeQL, dependency review |
| PW.6.2 | Configure tools to treat detected vulnerabilities as build failures | Exercise 1: Required status checks |
| PW.7.1 | Review and verify code for security issues | Exercise 3: Copilot Autofix + human review |
| PW.9.1 | Configure the software to have secure settings by default | Exercise 1: Org-level rulesets |
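As a concrete sketch of PW.6.1 and PW.6.2 (automated checks that fail the build), here is a minimal GitHub Actions workflow using the published `actions/dependency-review-action`. The file path and the severity threshold are illustrative choices, not part of the workshop materials:

```yaml
# .github/workflows/dependency-review.yml (illustrative path)
name: Dependency Review
on: pull_request
permissions:
  contents: read
jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Fails this check when the PR introduces dependencies with advisories
      # at or above the chosen severity. Marking this check as "required" in
      # a ruleset turns the failure into a merge blocker (PW.6.2).
      - uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: high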
## 📋 Curriculum
| Step | Title | Time |
|---|---|---|
| Setup | Environment Setup | ~10 min |
| 1 | Warm-Up: Threat Model | ~3 min |
| 2 | Exercise 1: Policy Guardrails | ~10 min |
| 3 | Exercise 2: Detection Guardrails | ~12 min |
| 4 | Exercise 3: AI-Assisted Remediation | ~12 min |
## 💬 Discussion Prompts
Use these questions to deepen understanding after the exercises:
- **Push protection bypass:** “What happens if a developer bypasses push protection with a business justification? How would you detect that, and what process should follow?”
- **Threat modeling cadence:** “NIST SSDF PW.1.1 requires threat modeling and attack surface mapping at design time. How often does your team update its threat model? Is it a living document or a one-time artifact?”
- **AI fix quality:** “If Copilot Autofix suggests an incorrect fix, what’s the worst-case scenario? How do you catch it before it reaches production?”
- **Alert prioritization:** “You have 200 low-severity dependency alerts and 3 high-severity CodeQL findings. How would you prioritize? How do security campaigns change your approach?”
## 🚀 Optional Extensions

### Extension A: Copilot Code Review
- Open a PR with a code change (can be a fix from Exercise 3)
- Request Copilot Code Review on the PR
- Compare Copilot’s review comments with your own human assessment
- Discuss: Where does Copilot add value? Where does it miss context?
### Extension B: Custom Rulesets for Different Repo Categories
- Create multiple rulesets targeting different repository patterns:
  - `production-*` → Strict: 2 approvals, all checks required, no bypass
  - `experimental-*` → Moderate: 1 approval, CodeQL required
  - `library-*` → Standard: 1 approval, dependency review required
- Discuss: How does this map to your organization’s risk tiers?
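One way to sketch the strict production tier is as an org-level ruleset payload. Field names follow GitHub's rulesets REST API (e.g. `POST /orgs/{org}/rulesets`); the ruleset name and the status check contexts are placeholders you would replace with your own:

```json
{
  "name": "production-strict",
  "target": "branch",
  "enforcement": "active",
  "conditions": {
    "repository_name": { "include": ["production-*"], "exclude": [] },
    "ref_name": { "include": ["~DEFAULT_BRANCH"], "exclude": [] }
  },
  "rules": [
    {
      "type": "pull_request",
      "parameters": {
        "required_approving_review_count": 2,
        "dismiss_stale_reviews_on_push": true,
        "require_code_owner_review": false,
        "require_last_push_approval": false,
        "required_review_thread_resolution": true
      }
    },
    {
      "type": "required_status_checks",
      "parameters": {
        "strict_required_status_checks_policy": true,
        "required_status_checks": [
          { "context": "CodeQL" },
          { "context": "dependency-review" }
        ]
      }
    }
  ]
}
```

Note that "no bypass" is the default here: bypass actors are a separate, opt-in list on the ruleset, so omitting them means no one can bypass.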
### Extension C: Push Protection Bypass Audit
- Navigate to Organization Settings → Audit log
- Filter for `secret_scanning_push_protection.bypass` events
- Review: Who bypassed? What justification was given? Was it legitimate?
- Discuss: What automated alerting would you set up for these events?
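The same filter can be run from the command line via the organization audit log REST endpoint (available on GitHub Enterprise Cloud). A hedged sketch using the GitHub CLI; it requires an authenticated org admin, and `YOUR-ORG` is a placeholder:

```shell
# List push protection bypass events for an organization.
# Requires: gh CLI authenticated with org admin access. YOUR-ORG is a placeholder.
gh api "/orgs/YOUR-ORG/audit-log" \
  -f phrase="action:secret_scanning_push_protection.bypass" \
  --paginate \
  --jq '.[] | "\(.actor)\t\(.repo)\t\(.action)"'
```

A scheduled job running this query and posting non-empty results to a chat channel is one low-effort answer to the "automated alerting" discussion point.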
### Extension D: MCP Server Integration
- Configure a Model Context Protocol (MCP) server that connects an internal security checking tool to Copilot
- Test: Can Copilot reference your organization’s security policies when suggesting fixes?
- Discuss: How would MCP-powered context improve Autofix quality for your codebase?
## 📖 References

### GitHub Documentation
| Resource | Link |
|---|---|
| GitHub Advanced Security overview | docs.github.com |
| Repository rulesets | docs.github.com |
| Push protection for secret scanning | docs.github.com |
| CodeQL code scanning | docs.github.com |
| Dependency review | docs.github.com |
| Copilot Autofix for code scanning | docs.github.com |
| Security campaigns | docs.github.com |
| Copilot coding agent | docs.github.com |
### NIST Standards
| Resource | Link |
|---|---|
| NIST SP 800-218 — Secure Software Development Framework (SSDF) | csrc.nist.gov |
| SSDF Practice PW.1.1 — Risk modeling (threat modeling, attack surface) | See SSDF Table: PW.1.1 |
| SSDF Practice PW.6.1 — Automated vulnerability checks in build process | See SSDF Table: PW.6.1 |
| SSDF Practice PW.6.2 — Treat detected vulnerabilities as build failures | See SSDF Table: PW.6.2 |