JustAppSec

Threat Modelling Without the Ceremony

Practical threat modelling that fits into the way teams actually work.


Threat modelling has a reputation problem. Many developers associate it with week-long workshops, giant diagrams, and documents nobody reads. It does not have to be that way. This lesson covers practical threat modelling that fits into a normal development workflow.

What threat modelling actually is

Threat modelling is the practice of asking four questions about a system:

  1. What are we building?
  2. What can go wrong?
  3. What are we going to do about it?
  4. Did we do a good enough job?

That is it. Everything else — STRIDE, PASTA, attack trees, data flow diagrams — is just a structured way of answering those four questions. Pick whatever structure works for your team. The questions matter more than the framework.

When to threat model

The highest-value moment is before you write the code — during design, when changes are cheap. A 15-minute conversation at the start of a feature can prevent weeks of remediation later.

Good triggers:

  • A new feature that handles sensitive data (auth, payments, PII)
  • A new integration with an external service
  • A change to trust boundaries (a new API endpoint, a new user role)
  • A significant architecture change (adding a queue, splitting a service, moving to serverless)

You do not need to threat model every bug fix or CSS change.

Lightweight threat modelling in practice

Step 1: Draw the system

Grab a whiteboard, a shared doc, or a napkin. Draw the components involved in the feature: the user, the browser, the API, the database, any third-party services. Draw arrows for data flow and label them (HTTP, gRPC, SQL, etc.).

This does not need to be a formal DFD (data flow diagram). It just needs to show:

  • Where data enters the system (trust boundaries)
  • Where data is stored
  • Where data leaves the system
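If a whiteboard photo feels too informal to keep, the same sketch can be captured as plain data — components, arrows, and whether each arrow crosses a trust boundary — so Step 2 can walk it systematically. A minimal sketch (the component names and the order-placement feature are hypothetical, just for illustration):

```python
from dataclasses import dataclass

@dataclass
class Flow:
    source: str
    dest: str
    protocol: str
    crosses_trust_boundary: bool  # data entering or leaving the system

# Hypothetical feature: a user places an order via the API
flows = [
    Flow("browser", "api", "HTTPS", True),            # untrusted input enters here
    Flow("api", "orders-db", "SQL", False),
    Flow("api", "payments-provider", "HTTPS", True),  # data leaves the system
]

# Trust boundaries are the first places to ask "what can go wrong?"
entry_points = [f for f in flows if f.crosses_trust_boundary]
for f in entry_points:
    print(f"{f.source} -> {f.dest} ({f.protocol})")
```

The point is not the code — it is that every arrow in the drawing becomes a concrete line item for the next step.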

Step 2: Ask "what can go wrong?"

Walk through each component and each arrow. For each one, ask:

  • Can someone spoof their identity here? (e.g., forged JWTs, missing auth)
  • Can someone tamper with the data in transit or at rest? (e.g., modified request body, unsigned data)
  • Can someone access data they should not see? (e.g., IDOR, missing authorisation checks)
  • Can someone deny their actions? (e.g., no audit log, unsigned transactions)
  • Can someone disrupt the service? (e.g., rate limiting absent, resource exhaustion)
  • Can someone escalate their privileges? (e.g., admin-only endpoint accessible to regular users)

This is a simplified version of STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege), but you do not need to memorise the acronym. Just ask the questions.
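For example, the information-disclosure question ("can someone access data they should not see?") usually comes down to an ownership check right next to the data fetch. A minimal sketch, with a hypothetical in-memory store standing in for a database:

```python
# Hypothetical in-memory store standing in for a database
ORDERS = {
    42: {"owner_id": "alice", "total": 19.99},
    43: {"owner_id": "bob", "total": 5.00},
}

class Forbidden(Exception):
    pass

def get_order(order_id: int, current_user_id: str) -> dict:
    """Fetch an order, enforcing that the caller owns it.

    Without the ownership check, any authenticated user could read any
    order just by changing the ID in the URL -- the classic IDOR flaw.
    """
    order = ORDERS.get(order_id)
    if order is None or order["owner_id"] != current_user_id:
        # Same error for "missing" and "not yours", so the endpoint
        # does not leak which IDs exist.
        raise Forbidden("order not found")
    return order
```

Here `get_order(42, "alice")` succeeds, while `get_order(42, "bob")` raises — the check lives in the handler, not in the client, because the client is on the other side of a trust boundary.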

Step 3: Decide what to do

For each threat, choose one of four responses:

  • Mitigate — add a control (the most common response)
  • Accept — acknowledge the risk and move on (valid for low-impact/low-likelihood threats)
  • Transfer — push the risk to someone else (e.g., use a managed auth service instead of building your own)
  • Avoid — change the design so the threat no longer applies

Document the decision. A simple table works:

| Threat | Likelihood | Impact | Response | Notes |
| --- | --- | --- | --- | --- |
| User can modify another user's order by changing the order ID in the URL | High | High | Mitigate | Add ownership check in the API handler |
| Admin panel accessible without MFA | Medium | Critical | Mitigate | Require MFA for admin role |
| DDoS on public search endpoint | Medium | Medium | Accept | Rate limiting in place; scaling is handled by CDN |
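The rate-limiting mitigation noted for the search endpoint can be as simple as a token bucket per client. A minimal sketch (illustrative only — production deployments usually lean on a gateway, CDN, or proxy rather than hand-rolled code):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` tokens/second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, rate=1.0)  # 5-request burst, 1 req/s sustained
results = [bucket.allow() for _ in range(7)]
```

Run back to back, the first five calls are allowed and the rest throttled until tokens refill — enough to blunt naive resource-exhaustion attempts, though not a substitute for upstream DDoS protection.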

Step 4: Review

Come back after implementation and verify: did we actually implement the mitigations? Are there new components or flows that were not in the original model? This is a 5-minute check, not a second workshop.

Fitting it into your workflow

  • Design reviews — add a standing 10-minute "what can go wrong?" section
  • Pull requests — for features involving auth, data access, or new endpoints, the reviewer asks: "was this threat modelled?"
  • Sprint planning — if a story involves a new trust boundary, flag it for lightweight threat modelling

The goal is not a perfect document. It is a conversation that happens at the right time.

Common mistakes

  • Boiling the ocean. You do not need to threat model the entire system at once. Model the thing you are changing.
  • Only thinking about external attackers. Insiders, compromised dependencies, and misconfigured infrastructure are also threats.
  • Stopping at identification. Finding threats is only useful if you decide what to do about them and follow through.
  • Making it a one-time event. Threat models are living artefacts. Revisit them when the system changes.

Summary

Threat modelling is a conversation, not a ceremony. Draw the system, ask what can go wrong, decide what to do about it, and verify you followed through. It takes 15–30 minutes for most features. The return on that small investment is enormous — it catches design-level flaws that no amount of testing will find after the code is written.


This training content is AI-assisted and reviewed by our team, but issues may be missed and best practices evolve rapidly. Send corrections to [email protected].