Code-to-Runtime Correlation
Exercise 3: Code-to-Runtime Correlation (~15 min)
Goal: Trace a runtime vulnerability back to its source commit, assess the blast radius, and understand why BOTH code scanning AND runtime monitoring are needed.
Open docs/code-to-runtime-trace.md → document your investigative trace through each of the 4 layers.
Framing: Why Both GHAS AND Defender?
Without code-to-runtime correlation, how would you find the source of a production vulnerability? Grep through 500 repositories? Check every deployment manifest? Ask around on Slack?
Decision-makers ask: "We already have GHAS scanning code. Why do we also need Defender for Cloud?"
| What | GHAS (WS2) Catches | Defender (WS3) Catches |
|---|---|---|
| SQL injection in YOUR code | ✅ At PR time | ❌ |
| Vulnerable dependency in package.json | ✅ At PR time | ✅ At runtime (if deployed) |
| Base image vulnerability (NOT your code) | ❌ Not in your source | ✅ At runtime |
| Configuration drift after deployment | ❌ Scans code, not runtime | ✅ Detects live state |
| Newly disclosed CVE on running image | ❌ Only at scan time | ✅ Continuous monitoring |
GHAS protects CODE. Defender protects PRODUCTION. They are complementary; you need both. Defender catches what GHAS cannot: vulnerabilities that emerge AFTER code passes all guardrails.
Step 1: Deploy to AKS
Deploy the attested container image to the test AKS cluster:
```bash
# Get AKS credentials
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME

# Deploy the application
kubectl create deployment supply-chain-demo \
  --image=${ACR_NAME}.azurecr.io/supply-chain-demo:latest
kubectl expose deployment supply-chain-demo \
  --port=80 --target-port=3000 --type=LoadBalancer

# Wait for the external IP to be assigned
kubectl get svc supply-chain-demo --watch
```
Step 2: Wait for Defender Scan
Defender for Cloud needs time to discover and scan the running container:
```bash
# Verify the pod is running
kubectl get pods -l app=supply-chain-demo

# Wait ~5-10 minutes for Defender to scan
echo "Defender scanning in progress... Check Azure Portal in 5-10 minutes."
```
Note: Initial container scanning by Defender may take 5β15 minutes. Use this time for a break or to review the discussion prompts.
Step 3: Navigate the Investigative Trace
Open Azure Portal → Microsoft Defender for Cloud → Recommendations (or Security alerts).
Follow the code-to-runtime mapping through four layers:
┌──────────────────────────────────────────────────────────────
│ RUNTIME
│ Running container: supply-chain-demo
│ Cluster: aks-devsecops-ws3 | Namespace: default
│ Vulnerability: CVE-XXXX-YYYY (from base image)
├──────────────────────────────────────────────────────────────
│ REGISTRY
│ Image: acrdevsecopsws3.azurecr.io/supply-chain-demo:latest
│ Digest: sha256:abc123...
│ Pushed: 2025-01-15T10:30:00Z
├──────────────────────────────────────────────────────────────
│ PIPELINE
│ Workflow: build-deploy-oidc.yml
│ Run: #42 | Trigger: push to main
│ Attestation: ✅ Verified
├──────────────────────────────────────────────────────────────
│ SOURCE
│ Repository: org/agentic-devsecops-supplychain-integrity
│ Commit: a1b2c3d | Author: developer@example.com
│ File: Dockerfile (base image selection)
└──────────────────────────────────────────────────────────────
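As you work through the trace, capture each layer in docs/code-to-runtime-trace.md. A small helper to print a fill-in-the-blanks record you can paste into that file; every `<placeholder>` is illustrative, to be replaced with what Defender actually shows:

```bash
# Print a fill-in-the-blanks trace record for docs/code-to-runtime-trace.md.
# Every <placeholder> is illustrative; replace it with what Defender shows.
trace_template() {
  cat <<'EOF'
## Trace record
- Runtime: <pod> on <cluster> (namespace: <namespace>)
- Registry: <image>@<digest>
- Pipeline: <workflow> run <number>
- Source: <repo> commit <sha> (<file>)
EOF
}

trace_template
```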
Step 4: Trace Each Layer in Defender
- Runtime → Click the vulnerability alert → see which pod/container is affected and the blast radius
- Registry → Click the image reference → see the registry, tag, and digest
- Pipeline → Click the pipeline link → see the GitHub Actions workflow run that built this image
- Source → Click the source link → navigate to the exact commit and file in GitHub
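You can cross-check the Registry layer from your terminal: the digest Defender reports should match the imageID kubectl exposes for the running pod. The imageID value below is a made-up sample; on a live cluster you would read it with `kubectl get pods -l app=supply-chain-demo -o jsonpath='{.items[0].status.containerStatuses[0].imageID}'`:

```bash
# Sample imageID string in the form kubectl reports it (digest is made up).
image_id="acrdevsecopsws3.azurecr.io/supply-chain-demo@sha256:abc123"

# Strip everything up to and including '@' to isolate the digest,
# then compare it with the Digest field in the Defender trace.
digest="${image_id#*@}"
echo "$digest"
```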
Step 5: Assess the Blast Radius
After tracing the vulnerability, assess its IMPACT: not just what is vulnerable, but HOW MUCH is affected:
| Blast Radius Factor | What to Check | Your Finding |
|---|---|---|
| Containers affected | How many pods run this image? | |
| Clusters affected | Is this image deployed to multiple clusters/environments? | |
| Internet exposure | Is the affected service externally accessible? | |
| Data sensitivity | Does this service handle PII, credentials, or financial data? | |
⚠️ Blast radius determines URGENCY, not just severity. A HIGH severity vulnerability on 1 internal dev pod may be less urgent than a MEDIUM severity vulnerability on 50 internet-facing production pods handling customer data.
Production-aware prioritization:
- Traditional (GHAS only): "HIGH severity → fix first"
- Production-aware (GHAS + Defender): "HIGH severity + internet-facing + 50 pods → CRITICAL → fix NOW"
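The prioritization above can be sketched as a toy scoring rule. The weights, thresholds, and tier names below are invented for illustration and are not a Defender feature; real triage would pull pod counts and exposure from Defender's inventory:

```bash
# Toy urgency score: severity alone does not set priority; blast radius does.
# Weights and tier names are illustrative only.
urgency() {
  local severity=$1 internet_facing=$2 pod_count=$3
  local score=0
  case "$severity" in
    CRITICAL) score=4 ;;
    HIGH)     score=3 ;;
    MEDIUM)   score=2 ;;
    LOW)      score=1 ;;
  esac
  [ "$internet_facing" = "true" ] && score=$((score + 2))  # exposure amplifier
  [ "$pod_count" -ge 10 ] && score=$((score + 1))          # fleet-size amplifier
  if   [ "$score" -ge 5 ]; then echo "FIX-NOW"
  elif [ "$score" -ge 3 ]; then echo "FIX-THIS-SPRINT"
  else                          echo "BACKLOG"
  fi
}

urgency HIGH false 1     # 1 internal dev pod       -> FIX-THIS-SPRINT
urgency MEDIUM true 50   # 50 internet-facing pods  -> FIX-NOW
```

Note how the MEDIUM finding outranks the HIGH one once exposure and fleet size are counted, which is exactly the production-aware shift the table describes.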
Step 6: Create a GitHub Issue from Defender
If Defender offers the option to create a GitHub issue:
- Click "Create GitHub Issue" (or note the details to create one manually)
- Observe that the auto-created issue contains:
- Runtime context: which cluster, which pod, blast radius assessment
- Source context: which repo, which commit, which file
- Recommended remediation: specific action to resolve the vulnerability
```bash
# Alternatively, create the issue manually with full context
gh issue create \
  --title "Runtime vulnerability traced to base image" \
  --body "## Trace
- **Runtime**: supply-chain-demo pod on aks-devsecops-ws3
- **Registry**: ${ACR_NAME}.azurecr.io/supply-chain-demo:latest
- **Pipeline**: build-deploy-oidc.yml run #42
- **Source**: Dockerfile base image selection

## Action
Update base image to patched version."
```
Step 7: The Complete DevSecOps Loop → Hand-off to WS4
The Defender trace you just completed connects ALL 4 workshops:
WS1 Trust Boundary → WHERE development happens (Data Residency, EMU)
   ↓
WS2 Guardrails → WHAT prevents bad code (GHAS, rulesets, Copilot)
   ↓ (code passes guardrails, enters pipeline)
WS3 Exercise 1 (OIDC) → HOW we deploy securely (no static credentials)
WS3 Exercise 2 (Attestation) → HOW we prove integrity (cryptographic provenance)
WS3 Exercise 3 (Defender) → HOW we SEE from runtime to source ← YOU ARE HERE
   ↓ (something goes wrong in production)
WS4 Response → HOW we detect, respond, and improve
The Defender trace IS the starting point for WS4's operational response. When SRE Agent detects an incident in Workshop 4, Defender's code-to-runtime mapping shows WHERE the issue originated: the same trace you just performed.
But what happens when something goes WRONG in production, actively, urgently, right now? That's operational response, and that's Workshop 4.
Key Insight
Code-to-runtime correlation means you never ask "where did this vulnerability come from?" again. You trace from production impact to source commit in one view. Combined with blast radius assessment and production-aware prioritization, you fix the most impactful issues first, not just the highest severity ones.
Tip: Run scripts/verify-exercise3.sh to validate your Exercise 3 completion.