Cross-Site Request Forgery (CSRF)
Overview
Cross-Site Request Forgery (CSRF) is a web security vulnerability where an attacker tricks a victim’s browser into performing unwanted actions on a web application in which the victim is authenticated. In a typical CSRF attack, a malicious website or email causes a user’s browser to send a forged HTTP request (including the user’s session cookies or other credentials) to a target application’s protected endpoint, without the user’s intent or awareness (cheatsheetseries.owasp.org). Because browsers automatically include credentials like session cookies with requests to the legitimate site, the vulnerable application may process the request as if it were a legitimate action by the user (cheatsheetseries.owasp.org). CSRF thus exploits the trust that a site places in a user’s browser, leveraging the fact that the server has no simple way to distinguish a forged request from a legitimate one without additional safeguards. This attack is sometimes described as session riding or a one-click attack, emphasizing that a single click or visit to a malicious page can silently trigger unauthorized operations on the user’s behalf. CSRF has been recognized as a distinct class of vulnerability (classified as CWE-352 by MITRE) and has historically been considered a high-impact issue, even appearing among the OWASP Top Ten web risks in earlier years. It remains an important threat to mitigate despite modern browsers’ improvements, especially for any application that relies on session-based authentication.
Threat Landscape and Models
In the CSRF threat model, the attacker does not directly infiltrate the target application or steal credentials; instead, the attacker leverages the victim’s established authenticated session with the target. The essential preconditions for a CSRF attack are: (1) the victim is logged into the target web application (or has an active session cookie or auth token stored in the browser), and (2) the target application uses credentials (like cookies or HTTP Basic auth headers) that are automatically included by the browser in requests. Given these conditions, an attacker can host malicious content (a web page, script, or even an HTML email) that issues an HTTP request to the target site’s URL. For example, a malicious page might contain a hidden HTML form that auto-submits to https://bank.example/transferFunds with parameters for an unauthorized transaction, or an image tag (<img src="https://bank.example/transfer?amount=1000&to=attacker">) that causes a GET request to the same URL (developer.mozilla.org). When the victim’s browser loads that content, it will dutifully send the request to the bank’s server, including the victim’s session cookie, thus making the request appear authenticated.
From an attacker’s perspective, CSRF is attractive because it often requires minimal user interaction beyond visiting a page or clicking a link. The attacker does not need to steal the victim’s credentials or session; they exploit the fact that the browser automatically attaches those credentials to requests. The attack can be carried out by any malicious site or third-party that the user visits while authenticated on the target site. Commonly, attackers use social engineering to lure victims—sending an email with an enticing link or embedding an attack in a popular website’s forum or ad. When the victim’s browser makes the cross-site request, the target server sees a valid session cookie and assumes the user intentionally initiated the action. Importantly, the same-origin policy does not prevent these cross-site requests—SOP stops the malicious site from reading the response, but it doesn’t stop the browser from sending the request with the user’s credentials. This means the attacker cannot directly see the outcome or data returned by the forged request, but they often don’t need to; the harm is done by the side effects (funds transferred, settings changed, etc.).
A classic threat scenario involves high-value transactions: for instance, a banking application where a user is logged in to their account. An attacker crafts a hidden form on a rogue webpage that targets the bank’s transfer endpoint and sets the recipient account to one controlled by the attacker. If the victim visits that page and the form auto-submits (via a snippet of JavaScript or a timed event), the bank may execute the transfer, thinking the legitimate user authorized it. Another scenario is changing a user’s account details (email or password) via CSRF, potentially allowing the attacker to take over the account indirectly (e.g., change the email then perform a password reset). In more subtle cases, CSRF can be used to trick users into performing actions like upvoting content, subscribing to services, or unknowingly sending messages. The threat landscape thus includes any state-changing request that an attacker can predict and trigger behind the scenes. As a result, when modeling threats, developers should assume that any endpoint which changes server state or performs a sensitive action could be targeted by CSRF if not properly defended. Standard threat models categorize CSRF as a confused deputy problem: the web application (deputy) is tricked into using the authority of the victim to perform actions for the attacker.
Common Attack Vectors
CSRF attacks can be delivered through various vectors, all of which involve an attacker-supplied web context causing the victim’s browser to send a request. One common vector is a malicious HTML form. The attacker crafts a form targeting a protected endpoint (e.g., an account deletion URL) and sets default values for all necessary parameters. With a small JavaScript snippet (for example, an onload handler that calls submit()) and a bit of social engineering, the form can be submitted automatically when the victim visits the page. For example, the attacker’s page might include:
<body onload="document.forms[0].submit()">
  <form action="https://app.example.com/deleteAccount" method="POST">
    <input type="hidden" name="userid" value="victim123">
    <!-- other hidden inputs for required parameters -->
  </form>
</body>
As soon as the victim’s browser loads this page, the form auto-submits a POST request to the target domain, including the victim’s session cookie. Another simple vector is using image tags or other elements that cause GET requests. Although well-designed applications should not use GET requests for state-changing actions, some do. An attacker can embed an <img> or <iframe> pointing to a URL like https://store.example.com/purchase?item=100&quantity=1 – the browser, attempting to fetch the image, will issue the GET request and possibly trigger a purchase action. Yet another vector is via AJAX and CORS: if a target API endpoint is lenient with cross-origin requests (for example, by allowing requests from any origin or supporting JSONP), an attacker might exploit that. However, standard browsers block cross-origin XMLHttpRequest calls unless the target explicitly allows them via CORS, so the most straightforward CSRF vectors use mechanisms like forms or image loads that the browser permits cross-site without special headers.
It’s important to note that the attacker doesn’t need to directly observe the response. The harm is in the side effect on the server. For instance, an attacker could embed a request to trigger a password change for the victim’s account (setting a password known to the attacker). Even though the attacker cannot see the server’s response page (due to same-origin policy), the password will have been changed. In some cases, attackers chain CSRF with other issues: for example, if an application doesn’t ask the user to confirm an action (no re-authentication or confirmation step), CSRF can fully automate exploitation. Another subtle vector is login CSRF, where an attacker causes a user’s browser to log into an attacker-controlled account on the target site. This sounds counterintuitive, but by doing so the attacker might exploit the trust model of an application (for example, to later steal sensitive data or link the victim’s actions to an attacker’s account). While not as directly harmful as transaction CSRF, login CSRF can lead to session fixation or confusion attacks and is also worth guarding against.
Modern browsers have introduced the SameSite cookie attribute and other mechanisms to curb CSRF, which we will discuss later. However, absent those protections, any predictable and side-effect-inducing endpoint (especially if it uses only cookies for auth) is a potential target. Attackers often try simple techniques first, like sending a link with the malicious query parameters or hosting a fake “Click here to get a prize” button that is actually a form submitting to the target. Multi-step attacks (where two or more requests are needed) are harder to execute with CSRF alone unless the attacker can fit all steps into one user interaction or exploit some script on the client side. Therefore, CSRF usually focuses on single HTTP requests that have immediate effect.
Impact and Risk Assessment
The impact of a successful CSRF attack can be severe, as it effectively lets an external adversary harness a victim’s established privileges on the target system. Unlike cross-site scripting (XSS), which compromises the user’s interaction and data visibility, CSRF compromises the integrity of actions performed. In the worst-case scenario, CSRF can result in unauthorized fund transfers, purchase orders, changed login credentials, or other critical state changes, all executed under legitimate user accounts. The harm is constrained only by what actions the vulnerable web application makes available to the user. If the victim is an administrator, the attacker could induce admin-level actions (e.g., creating or deleting users, changing configurations), potentially compromising the entire application or data store. If the victim is a regular user, the attacker may still steal or corrupt that user’s data (for example, changing the shipping address on an e-commerce site, leading to goods being delivered to the attacker’s address, or posting unwanted content on behalf of the user).
From a confidentiality perspective, CSRF typically does not directly leak information to the attacker (since the response is not visible cross-domain), but it can indirectly lead to data loss or exposure. For example, the attacker might use CSRF to change the victim’s email to an address controlled by the attacker, then initiate a password-reset, thereby gaining future access to the account and its data. Thus, CSRF can be a stepping stone to broader account takeover. In terms of availability, CSRF might log a user out or delete their account, acting as a form of denial-of-service for that user. The risk level of CSRF vulnerabilities is generally high for any sensitive functionality. Industry standards reflect this: the OWASP Application Security Verification Standard (ASVS) treats CSRF defenses as a required control even at ASVS Level 1 (the most basic level) (owasp-aasvs4.readthedocs.io). In other words, every web application that maintains user state is expected to have protections against CSRF. This is because the likelihood of exploitation is moderate to high (CSRF attacks are relatively easy to execute if defenses are absent) and the impact can be critical. A CSRF attack often has a Low attack complexity: the attacker merely needs to trick the user into a normal web interaction and does not need advanced exploits. The privileges required by the attacker are none (they piggyback on the victim’s privileges), and user interaction needed is typically just one click or visit. According to typical risk scoring (e.g., CVSS), this often yields a high or critical severity for a missing CSRF defense on important functionality.
It’s worth noting that the prevalence of CSRF vulnerabilities has diminished in some newer applications, largely due to frameworks adopting built-in defenses and browsers adding the SameSite cookie protections by default. However, this can lead to complacency. If an application has disabled or not configured these protections correctly, it may silently become exposed. Security teams assessing risk should consider not just whether tokens are present, but also whether they are implemented correctly and whether any state-changing endpoints have been overlooked. During risk assessment, one should catalog all actions that change state or perform sensitive tasks, and verify that each is protected. An unprotected endpoint that is thought to be “low value” can sometimes be combined with social engineering or other logic to cause bigger issues (for example, CSRF on a “subscribe/unsubscribe” action might be harmless individually, but if automated at scale, could generate spam or disrupt service for many users).
Defensive Controls and Mitigations
To robustly defend against CSRF, developers should employ multiple layers of defense. The primary and most well-established mitigation is the use of anti-CSRF tokens (also known as synchronizer tokens). With this approach, the server generates an unpredictable token and associates it with the user’s session (either stored server-side or via a cryptographic token sent to the client). This token is then embedded in every sensitive HTML form or request that performs a state-changing operation. The token is typically included as a hidden form field (or sometimes in a custom header for AJAX calls). When the form is submitted, the server expects the token value and rejects the request if the token is missing or mismatched. Since an attacker from another site cannot read the token (due to same-origin policy) and cannot guess it if it’s sufficiently random, they are unable to construct a valid request that will pass the token check (developer.mozilla.org). Most web development frameworks have built-in support for CSRF tokens. For example, frameworks like Django enable CSRF protection by default and automatically insert tokens into forms rendered in templates, and Spring Security in Java also includes CSRF defenses that are on by default for state-changing methods. It is important to use these mechanisms rather than inventing a custom solution; OWASP specifically recommends using vetted, built-in CSRF protection frameworks or libraries wherever possible (cheatsheetseries.owasp.org), to avoid the pitfalls of rolling out a flawed custom scheme.
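At its core, the pattern reduces to two small operations, sketched here with hypothetical helper names (any real framework wraps these steps for you, including storing the token in the session and injecting it into forms):

```python
import secrets

def generate_csrf_token() -> str:
    """Create an unpredictable token (~256 bits) to store in the user's session."""
    return secrets.token_urlsafe(32)

def is_token_valid(submitted: str, stored: str) -> bool:
    """Constant-time comparison so timing differences don't leak token bytes."""
    return bool(submitted) and bool(stored) and secrets.compare_digest(submitted, stored)
```

The constant-time comparison is a detail worth copying from real frameworks: a naive `==` comparison can, in principle, leak information about the expected token through response timing.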
Another common mitigation pattern is the double-submit cookie approach. In this pattern, the server issues a random value to the client by setting it in a cookie (typically a separate cookie from the session cookie) and also expects that value to be present in a request parameter (or header) on sensitive requests. Since the browser will send the cookie automatically, but the attacker’s page cannot read the cookie value (due to same-origin restrictions), the only way for the attacker to have the correct value in the request parameter is if they somehow guessed it. As long as the value is cryptographically random, guessing is infeasible. The server compares the cookie value and the request parameter; if they don’t match, the request is rejected. This double-submit technique can be useful for applications where storing server-side state for tokens is undesirable (e.g., stateless REST APIs that use cookies). However, an important caveat is that if not implemented carefully, it can be bypassed. OWASP recommends signing the token in the cookie (for example, using an HMAC of the session ID or a secret) so that an attacker who cannot read the cookie also cannot fabricate a valid token by any other means (cheatsheetseries.owasp.org). A naive double-submit (where the token is an unsynchronized cookie value without signing) is discouraged (cheatsheetseries.owasp.org), since an attacker might exploit any weakness in cookie scope (such as setting a subdomain cookie or observing a token leak) to defeat the check. In summary, if using the double-submit pattern, treat the cookie token as a bearer value that should be integrity-protected or validated on the server to ensure it’s genuine.
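A sketch of the signed variant, assuming a hypothetical server-side secret and helper names (in practice the key would come from configuration, not source code), might look like this:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"server-side-secret"  # hypothetical; load from secure config in practice

def issue_csrf_cookie(session_id: str) -> str:
    """Create a token for the CSRF cookie: random value plus an HMAC
    binding it to the user's session, so an attacker cannot fabricate one."""
    rand = secrets.token_hex(16)
    sig = hmac.new(SECRET_KEY, f"{session_id}!{rand}".encode(), hashlib.sha256).hexdigest()
    return f"{rand}.{sig}"

def verify_csrf(session_id: str, cookie_token: str, form_token: str) -> bool:
    """Double-submit check: the form value must match the cookie value,
    and the cookie's HMAC must verify against this session."""
    if not hmac.compare_digest(cookie_token, form_token):
        return False
    try:
        rand, sig = cookie_token.split(".", 1)
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET_KEY, f"{session_id}!{rand}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Because the HMAC covers the session ID, a token planted via a subdomain cookie (or stolen from another session) fails verification even if the attacker manages to double-submit it.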
Modern web applications have an additional browser-side defense: the SameSite cookie attribute. When a session cookie is set with SameSite set to either Lax or Strict, the browser will restrict when that cookie is sent in cross-site requests (cheatsheetseries.owasp.org). In Strict mode, the browser will never send the cookie if the navigation originated from a different site (even if the user clicks a link to the site, the cookies are withheld, requiring the user to log in again). In Lax mode (which has become the default in modern browsers since 2020 (cheatsheetseries.owasp.org)), the browser will send the cookie for top-level navigations (for example, if the user clicks a link to the site), but will not send it on subordinate requests such as images or iframes loaded from another site. This provides a substantial mitigation against certain CSRF techniques. For instance, a hidden image or an auto-submitted form from an external site will usually not include cookies if the session cookie is SameSite=Lax and the request is not a top-level navigation. However, SameSite is not a complete defense on its own. It can be bypassed in scenarios where the user actively clicks a link (which is considered a top-level navigation) or if the application has endpoints that use GET for state-changing actions (developer.mozilla.org). Additionally, SameSite’s protection is defined in terms of “site” (effective top-level domain + 1) rather than full origin, so requests between subdomains or sister domains might still be considered same-site in some cases (developer.mozilla.org). Also, a determined attacker might find edge cases (e.g., some browsers’ implementation quirks or using POST with certain conditions) to still send cookies. Therefore, SameSite should be viewed as a defense-in-depth measure.
It significantly reduces the attack surface and thwarts unattended CSRF attempts (like those embedded in third-party content that loads silently), but critical applications should still implement explicit CSRF tokens or other server-side checks. All the same, it is a best practice to set the SameSite attribute on session cookies to Lax or Strict as appropriate; this mitigates a large class of CSRF attempts with minimal effort, and it complements the token-based defenses. It’s also worth noting that as of recent standards, if a cookie is intended to be sent cross-site (SameSite=None for scenarios like third-party integrations), it must be marked Secure, which improves overall security.
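Using only the Python standard library, the relevant Set-Cookie attributes can be illustrated as follows (the cookie name and value are placeholders; a web framework would normally emit this header for you):

```python
from http.cookies import SimpleCookie

# Build a session cookie with defense-in-depth attributes (placeholder values)
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["samesite"] = "Lax"  # withhold cookie on cross-site subresource requests
cookie["session"]["secure"] = True     # only send over HTTPS
cookie["session"]["httponly"] = True   # not readable from JavaScript

# The value that would go into the Set-Cookie response header
header_value = cookie["session"].OutputString()
print(header_value)  # includes 'SameSite=Lax', 'Secure', and 'HttpOnly'
```

Pairing SameSite with Secure and HttpOnly costs nothing and narrows both the CSRF and session-theft attack surfaces at once.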
In addition to tokens and cookie flags, another mitigation approach involves verifying the origin of requests. Servers can check the Origin header (or Referer header as a fallback) of incoming requests to ensure that the request originates from the same site. For example, if your application is app.example.com, you can verify that Origin: https://app.example.com is present for any state-changing POST request. If the header is missing or does not match your domain, the server can reject the request. This approach can effectively stop CSRF attacks because a malicious cross-site request will typically have an Origin reflecting the attacker’s domain (or no Origin at all, in the case of some form submissions or older browsers). Origin checking is a useful secondary defense, especially for APIs or scenarios where adding tokens is difficult. However, it is not foolproof by itself. Some requests (like <form> submissions from HTTPS to HTTP endpoints, which is an unusual scenario, or certain redirects) might not carry a referer or origin due to user privacy settings or browser quirks. In practice, for an all-HTTPS application, the absence of Origin/Referer is rare for modern browsers, so this check is quite reliable. Many frameworks and security libraries implement referer/origin validation as part of their CSRF defenses (for instance, as an additional check after token verification). Developers implementing this should ensure they consider allowed subdomains or known legitimate sources (for example, if your app is served on multiple domains).
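An origin-validation check along these lines can be sketched in Python (the allowlist and function name are hypothetical; real middleware would also log rejections and handle multi-domain deployments):

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical allowlist; in practice, derive this from deployment configuration
ALLOWED_ORIGINS = {"https://app.example.com"}

def is_request_allowed(method: str, origin: Optional[str], referer: Optional[str]) -> bool:
    """Reject state-changing requests whose Origin (or Referer, as a fallback)
    points at another site. Fails closed when neither header is present."""
    if method in ("GET", "HEAD", "OPTIONS"):
        return True  # safe methods should have no side effects to forge
    if origin is not None:
        return origin in ALLOWED_ORIGINS
    if referer is not None:
        parsed = urlparse(referer)
        return f"{parsed.scheme}://{parsed.netloc}" in ALLOWED_ORIGINS
    return False  # neither header present: reject by default
```

Failing closed on missing headers is the conservative choice for an all-HTTPS application; teams that must support unusual clients may prefer to fall back to the token check instead.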
For modern single-page applications and APIs that rely on JavaScript, two additional techniques are relevant. First, custom request headers: by requiring a custom header (such as X-CSRF-Token or even something generic like X-Requested-With: XMLHttpRequest), the server can differentiate normal browser navigations from scripts. Browsers do not allow third-party sites to set arbitrary headers on cross-origin requests without CORS pre-approval. So if your server mandates, say, an X-CSRF-Token header with a specific value (perhaps matching a token set in a cookie or local storage), an attacker’s plain HTML form or image tag cannot include that header. Attempts to use XHR/fetch to add the header from another origin will trigger a preflight (because the custom header makes it a non-simple request) which will be refused by the server unless explicitly allowed. This effectively foils naive cross-site submissions. A variant of this is requiring JSON payloads (application/json content type) instead of form-encoded data for state-changing endpoints. A malicious <form> cannot easily send JSON with correct content type without using XHR, and again that XHR would be blocked by the same-origin policy unless CORS is enabled. This technique—ensuring requests are not “simple” in the HTML5 Fetch sense—means the browser’s own security policies become an ally in preventing CSRF (developer.mozilla.org). Developers can design their APIs such that any request that alters state requires a content type like JSON or a custom header, thereby sandboxing those endpoints from cross-site usage by default. It’s an elegant mitigation as it leverages inherent browser behavior, but it may complicate clients (you then need JavaScript on the front-end to set headers or send JSON). 
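A server-side gate implementing this idea might look like the following sketch (names are hypothetical; a real deployment would also validate the header’s value against a stored token):

```python
from typing import Optional

def is_non_simple_request(content_type: Optional[str], csrf_header: Optional[str]) -> bool:
    """Accept only requests that a plain cross-site HTML form cannot produce.

    A <form> can only send form-encoded, multipart, or text/plain bodies and
    cannot attach custom headers; requiring JSON or a custom header forces any
    cross-origin script through a CORS preflight the server can refuse.
    """
    if csrf_header:  # e.g. X-CSRF-Token; its value should still be validated
        return True
    if content_type is None:
        return False
    # Strip parameters like "; charset=utf-8" before comparing the media type
    return content_type.split(";", 1)[0].strip().lower() == "application/json"
```

This check is deliberately a gate, not a full defense: it rules out naive form-based CSRF while leaving token validation to do the heavy lifting.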
Many frameworks (AngularJS, for instance) historically added an X-Requested-With header to AJAX calls by default; a plain cross-site form cannot include this header, and adding it from cross-origin script triggers a CORS preflight, so a server that rejects requests lacking it gains a partial mitigation. A stronger modern approach uses Fetch Metadata: browsers send headers like Sec-Fetch-Site, Sec-Fetch-Mode, and Sec-Fetch-Dest with each request, indicating the context (same-site vs cross-site, initiated by user click vs script, etc.) (developer.mozilla.org). For example, Sec-Fetch-Site: cross-site will be present on requests initiated from a different site. Servers can choose to drop any request that comes with Sec-Fetch-Site: cross-site (or same-origin vs same-site distinctions, depending on policy). This provides a robust way to enforce that only same-site requests are allowed for sensitive actions, without managing tokens. Implementing Fetch Metadata checking is an emerging best practice—Google has advocated it and frameworks are beginning to support it as an option. It’s wise to implement it as a defense-in-depth measure: even if you have tokens, adding an allowlist for Sec-Fetch-Site (only allow same-site and same-origin) can stop certain classes of CSRF and also some side-channel attacks. Keep in mind older browsers might not send these headers, so your implementation should have a safe fallback (e.g., treat absence of Sec-Fetch-Site as suspicious or fall back to token check).
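A Fetch Metadata policy check can be sketched like this (the function name is hypothetical, and the fallback behavior for older browsers is a design choice each team must make):

```python
from typing import Mapping, Optional

def fetch_metadata_allows(headers: Mapping[str, str], method: str) -> Optional[bool]:
    """Resource-isolation decision based on Sec-Fetch-* headers.

    Returns True/False when a decision can be made, or None when the headers
    are absent (older browser) and the caller should fall back to token checks.
    """
    site = headers.get("Sec-Fetch-Site")
    if site is None:
        return None  # no Fetch Metadata: defer to other defenses
    if site in ("same-origin", "same-site", "none"):
        return True  # same-site traffic, or direct navigation (address bar, bookmark)
    # Cross-site: permit only simple top-level navigations, e.g. following a link
    if headers.get("Sec-Fetch-Mode") == "navigate" and method.upper() == "GET":
        return True
    return False
```

Allowing cross-site GET navigations keeps ordinary inbound links working, while cross-site POSTs (the classic CSRF shape) are refused outright.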
In summary, the strongest CSRF defense is achieved by combining approaches. Synchronizer tokens (or their double-cookie equivalent) are a proven primary defense. SameSite cookies add an external layer of protection that is effective against many opportunistic attacks. Origin checking and Fetch Metadata provide additional verification of request legitimacy. By employing multiple controls, you guard against scenarios where one mechanism might be bypassed or absent (for example, maybe a certain integration requires SameSite=None cookies, but you still have tokens protecting you). It is also essential to cover all sensitive state-changing endpoints; a common mistake is to protect most forms but then accidentally leave one critical API unprotected. A recommended practice is to make CSRF protection a default in your framework or base controller, so new endpoints have to explicitly opt out (and there should be very few valid reasons to opt out). For instance, many frameworks throw an exception or return a 403 Forbidden if a POST request lacks a valid CSRF token, thereby making the developer immediately aware if they forgot to include a token in the form.
Secure-by-Design Guidelines
Preventing CSRF should start early in the design phase of an application. A secure-by-design approach means architecting the application in such a way that CSRF is inherently mitigated or at least easy to consistently address. One guiding principle is to avoid practices that make CSRF possible in the first place. For example, design your web API such that state-changing operations are not authenticated with only cookies. If you instead require a token to be present in a header (like a bearer API token or an OAuth token), then by default, a random third-party website cannot trigger authorized requests because it has no way to add that header from a browser context without the proper CORS handshake. Many modern single-page applications have adopted this pattern: the user’s credential is a JWT or similar stored in browser storage, and it’s sent explicitly in an Authorization header. This approach essentially sidesteps CSRF (since the browser will not automatically send the token to another site, and the attacker cannot read it to do so manually). However, this pattern introduces other considerations (like protecting the token from XSS, since it might reside in JavaScript-accessible storage), so it’s not a universal solution—just one design option that affects CSRF risk.
Another design guideline is to make intent explicit. That is, whenever a sensitive action is performed, make sure the design requires something that an attacker can’t fake. Traditional web apps accomplish this with hidden CSRF tokens in forms, which is a way of making the user’s intention explicit with a piece of data only their genuine client would know. Beyond tokens, you can design critical actions to require user re-confirmation or multi-step flows. For instance, to mitigate CSRF on a money transfer, the application could require the user to enter their password again or complete a CAPTCHA on the confirmation page. An attacker who can force one click might not be able to force the user to solve a CAPTCHA or re-enter credentials without tipping them off. Such measures reduce the likelihood that a single forged request can cause irreparable harm. They are not a substitute for token defenses, but they complement the overall design by making automated misuse harder.
Framework selection and configuration is also a crucial part of secure design. Teams should prefer frameworks that automatically include CSRF protection out-of-the-box. If using Django, for example, the CSRF middleware is on by default; design your templates and AJAX calls to work with it (ensuring every form has the {% csrf_token %} template tag, and that any JavaScript requests include the token from the cookie/header). In Spring or ASP.NET, be mindful not to disable the default CSRF features during project setup. It’s often seen that developers turn off CSRF checks during initial development (to avoid the “inconvenience” of setting up tokens when testing with tools) and then forget to turn them back on—secure design means consciously deciding never to turn off such a feature unless absolutely necessary. If you must support a use case like a public third-party integration (where CSRF tokens can’t be easily shared), consider isolating that endpoint and protecting it with alternative mechanisms (like requiring a custom header or referrer checks for that specific case, or segregating it behind an API gateway that performs additional auth).
Consistency in design is key. All state-changing requests should follow a uniform pattern regarding how they are invoked in the UI and how they are protected. For example, if your design is that every form post includes a CSRF token, then all forms (account settings, transactions, comments, etc.) should use a common form component or template that automatically injects the token. This reduces the chance of a developer forgetting to include the token for some new feature. Similarly, if your design uses SameSite cookies, apply that flag to all session cookies in all services consistently. Don’t let one subdomain or microservice use a cookie without SameSite, because attackers could target that weakest link (perhaps via a CNAME or a subdomain cross-site scenario). From the outset, document these requirements in the project’s security requirements: for instance, include a statement like “The application shall enforce CSRF protection on all non-idempotent requests (ASVS 4.0 req 4.2.2) and session cookies shall be set with SameSite=Lax or Strict.” By having such requirements formally in the design, engineers and architects treat CSRF mitigation as a fundamental feature, not an optional add-on.
Lastly, consider the user experience impact of CSRF defenses in the design. Security measures can sometimes clash with usability. For example, Strict SameSite cookies can break flows where a user is coming from an external site (like an SSO login or a payment gateway redirect back to the app). During design, if you identify such flows, plan for safe exceptions (maybe those specific cookies can be set to Lax for the integration, or use an alternate mechanism like a one-time token in the redirect URL that establishes a session). Another scenario: single-page apps obtaining CSRF tokens—if an API call is needed to get a token, ensure this is done seamlessly on app load so that the user doesn’t encounter errors. The bottom line is, secure design integrates CSRF protections into the architecture so that they do not feel bolted on; instead, they become a natural part of how the application works.
Code Examples
To illustrate CSRF vulnerabilities and their mitigations, we will look at code snippets in several languages and frameworks. Each sub-section presents a vulnerable (bad) implementation followed by a secure (good) implementation, along with explanations.
Python
Consider a Python web application using a minimalist framework (like Flask) for a bank transfer feature. In the insecure version below, the application does not implement any CSRF token. The server trusts the session cookie alone for authentication, so any cross-site request that includes the user’s cookie will be processed:
# BAD: Flask endpoint without CSRF protection
from flask import Flask, request, session, render_template

app = Flask(__name__)
app.secret_key = 'secret!123'  # Needed for session cookies

@app.route('/transfer', methods=['GET', 'POST'])
def transfer():
    if request.method == 'POST':
        # No CSRF token check — vulnerable to CSRF
        receiver = request.form['account']
        amount = request.form['amount']
        # Assume user is authenticated via session
        perform_transfer(session['user_id'], receiver, amount)
        return "Transfer completed"
    else:
        # Render the transfer form (for simplicity, not showing HTML here)
        return render_template('transfer_form.html')
In this bad example, the /transfer endpoint will execute a fund transfer on any POST request, as long as the session cookie maps to a logged-in user (session['user_id']). An attacker could easily create an HTML form targeting this endpoint, and the server would perform the action because it’s not verifying the request’s origin or intent. There is no mechanism to distinguish a legitimate user-initiated request from a forged one.
Now, here is a secure version using a synchronizer token pattern. We generate a CSRF token when rendering the form and store it in the session (or it could be stored in a server-side cache or database associated with the session). The form includes this token as a hidden field. When the form is submitted, the server checks for the token and compares it with the expected value in the session:
# GOOD: Flask endpoint with CSRF token implementation
import secrets
from flask import Flask, request, session, render_template, abort

app = Flask(__name__)
app.secret_key = 'secret!123'  # Secret key for sessions

@app.route('/transfer', methods=['GET', 'POST'])
def transfer():
    if request.method == 'POST':
        token = request.form.get('csrf_token')
        if not token or token != session.get('csrf_token'):
            abort(403)  # Invalid or missing CSRF token
        receiver = request.form['account']
        amount = request.form['amount']
        perform_transfer(session['user_id'], receiver, amount)
        return "Transfer completed"
    else:
        # Generate a new CSRF token for the form
        token = secrets.token_hex(16)
        session['csrf_token'] = token
        # Render form with hidden input:
        # <input type="hidden" name="csrf_token" value="{{ csrf_token }}">
        return render_template('transfer_form.html', csrf_token=token)
In the good example, the critical addition is the generation and verification of a CSRF token. The token is a cryptographically secure random string (secrets.token_hex(16) produces 16 random bytes as 32 hex characters, i.e., 128 bits of randomness). This token is stored server-side (in the user’s session data) and also sent to the client, embedded in the form. On form submission, the handler retrieves the token from the form (request.form.get('csrf_token')) and compares it to the session’s token. If they don’t match or the token is missing, the server rejects the request with HTTP 403 Forbidden (a real app might show an error page or message instead). Only if the token is valid does the server proceed to call perform_transfer. This ensures that an attacker’s site cannot forge the request, because the attacker has no way to learn this unpredictable token. Notably, the token is regenerated on each GET request here, so each rendered form carries a fresh one-time token; this provides an extra layer of safety against replay, though it can break workflows where the user has the form open in multiple tabs. Tokens can instead be rotated less frequently (e.g., one token per session or per login).
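One refinement: the plain != comparison in the snippet above can, in principle, leak timing information about how many leading characters of the token matched. The standard library offers a constant-time comparison; a small sketch (the helper names here are hypothetical):

```python
import secrets

def generate_csrf_token():
    # 16 random bytes, hex-encoded: 32 characters, ~128 bits of entropy
    return secrets.token_hex(16)

def csrf_token_valid(submitted, stored):
    # Reject missing values first, then compare in constant time so the
    # check's duration does not depend on where the strings differ.
    if not submitted or not stored:
        return False
    return secrets.compare_digest(submitted, stored)
```

In the Flask handler above, the check would become `if not csrf_token_valid(token, session.get('csrf_token')): abort(403)`. For CSRF tokens this is defense in depth rather than a practical necessity, but it costs nothing to do correctly.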
Many Python frameworks and libraries can automate this. For instance, Flask developers often use Flask-WTF, which provides CSRF protection by generating tokens and validating them on form submission automatically. In Django, CSRF protection is built-in via middleware and template tags – you include {% csrf_token %} in your form template, and Django checks the token on the server side. The above code is a low-level illustration of the underlying principle, which is consistent across frameworks.
JavaScript (Node.js)
In a Node.js/Express scenario, consider an endpoint that updates a user’s email address. In the insecure version, the code trusts the presence of a session cookie to authenticate the user and directly performs the update with no CSRF checks:
// BAD: Express.js route without CSRF protection
const express = require('express');
const session = require('express-session');

const app = express();
app.use(session({ secret: 'keyboard cat', resave: false, saveUninitialized: true }));
app.use(express.urlencoded({ extended: true })); // to parse form body

app.post('/update_email', (req, res) => {
  if (!req.session.user) {
    return res.status(401).send("Not logged in");
  }
  const newEmail = req.body.email;
  // No CSRF validation here
  updateUserEmail(req.session.user.id, newEmail);
  res.send("Email updated");
});
In this bad example, as long as the user’s session is active (indicated by req.session.user), a POST to /update_email will change the email. An attacker could exploit this by having the user’s browser submit a form to https://your-app.com/update_email with an email field. The Express server will see the session cookie and perform the update, since it isn’t verifying anything beyond the session.
Now, we introduce CSRF protection. A long-popular solution in Express is the csurf middleware (since deprecated upstream, but still a clear illustration of the pattern; newer projects may prefer a maintained alternative). This middleware generates a token, stores it (by default in the session), and expects a matching token in each state-changing request. Here’s a secure implementation using csurf:
// GOOD: Express.js setup with CSRF protection using csurf
const express = require('express');
const session = require('express-session');
const csrf = require('csurf');
const bodyParser = require('body-parser');

const app = express();
app.use(session({ secret: 'keyboard cat', resave: false, saveUninitialized: true }));
app.use(bodyParser.urlencoded({ extended: true }));
app.use(csrf()); // initialize CSRF protection middleware

// Route to serve the form (including the CSRF token)
app.get('/update_email', (req, res) => {
  if (!req.session.user) {
    return res.status(401).send("Login required");
  }
  // csrfToken() method is provided by csurf to fetch the token for this session
  const token = req.csrfToken();
  // Render an HTML form (simplified here as a string for demonstration)
  res.send(`
    <form action="/update_email" method="POST">
      <input type="hidden" name="_csrf" value="${token}">
      <input type="email" name="email" placeholder="New email">
      <button type="submit">Update Email</button>
    </form>
  `);
});

// Protected POST route to handle the form submission
app.post('/update_email', (req, res) => {
  if (!req.session.user) {
    return res.status(401).send("Not logged in");
  }
  // If CSRF token is missing or invalid, csurf will automatically throw an error
  const newEmail = req.body.email;
  updateUserEmail(req.session.user.id, newEmail);
  res.send("Email updated");
});

// Error handler to catch CSRF errors (optional, for graceful error messages)
app.use((err, req, res, next) => {
  if (err.code === 'EBADCSRFTOKEN') {
    // CSRF token validation failed
    res.status(403).send("Invalid CSRF token");
  } else {
    next(err);
  }
});
In this secure example, we added the csrf() middleware. This middleware works in conjunction with sessions (or cookies) to generate a unique token. In the GET /update_email route, we retrieve the token via req.csrfToken() and embed it as a hidden field named _csrf in the form. The app.post('/update_email') route doesn’t need to manually check the token; the csurf middleware does that implicitly before our handler runs. If the token is missing or incorrect, the request will not reach the handler – csurf will trigger an error (which we catch in the error handler that checks for err.code === 'EBADCSRFTOKEN'). In effect, the presence of the correct token in the form (and thus in the POST request body) is required for the update to succeed.
Any malicious request from another site would not have the correct _csrf value. Even if an attacker viewed the HTML of our form (by using the site legitimately), the token value is tied to the user’s session and typically changes over time – the attacker cannot guess it for another user. Additionally, notice that we must include the token for every form or relevant request; csurf can generate a new token per request or reuse tokens per session, depending on configuration, but the safe assumption is to always fetch a fresh req.csrfToken() when rendering a form.
Also important: the example uses bodyParser.urlencoded to parse form data and places the token in req.body._csrf. If you were using JSON APIs, you might set up csurf to check the token in a header instead. The main idea is the same: require an unguessable token in a place where the browser will send it only for legitimate requests.
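Stripped of framework specifics, the header-based check reduces to the same comparison as the form-based one. A minimal, framework-agnostic Python sketch (the request and session are modeled as plain dicts, and the function name is hypothetical):

```python
import secrets

def check_header_csrf(request, session):
    """Validate a CSRF token carried in a custom request header.

    `request` is modeled as {'headers': {...}} and `session` as a dict
    holding the server-side token. Returns True if the request may proceed.
    """
    submitted = request.get('headers', {}).get('X-CSRF-Token')
    stored = session.get('csrf_token')
    if not submitted or not stored:
        return False
    # Constant-time comparison of the submitted and stored tokens
    return secrets.compare_digest(submitted, stored)
```

The custom header is what makes this work: a cross-site form post cannot set X-CSRF-Token — only the page’s own JavaScript can — so the header doubles as an origin check.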
Java
In a Java web application context, frameworks like Spring MVC/Spring Boot provide CSRF protection by default (specifically, Spring Security enables CSRF protection for state-changing methods unless you explicitly disable it). Let’s illustrate with a Spring MVC example for changing a password. First, consider a misconfigured (insecure) scenario where CSRF protection is turned off, and the application’s form does not include any token:
// BAD: Spring Security config with CSRF disabled (vulnerable configuration)
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable() // CSRF protection is explicitly disabled - NOT secure
            .authorizeRequests()
            .anyRequest().authenticated();
        // ... other security configs like login, etc.
    }
}
With the above configuration, the application will not require CSRF tokens for POST requests. If we have a controller like:
@Controller
public class AccountController {
    @PostMapping("/changePassword")
    public String changePassword(@RequestParam String newPassword, Principal principal) {
        userService.changePassword(principal.getName(), newPassword);
        return "passwordChanged";
    }
}
And a form in the JSP/Thymeleaf like:
<form action="/changePassword" method="POST">
    <input type="password" name="newPassword" placeholder="New Password">
    <button type="submit">Change Password</button>
</form>
This setup is vulnerable. There is no <input type="hidden" name="_csrf" ...> token field, and CSRF checks are off. An attacker could craft a request to POST /changePassword on behalf of the user, and it would go through as long as the session cookie is present.
Now, the secure version: we will enable CSRF (the default behavior) and ensure the form includes the token. Spring’s mechanism provides a token automatically via the view. In Thymeleaf templates, for example, one can use the ${_csrf.token} expression along with ${_csrf.parameterName} to include the proper fields. Here’s how it looks:
// GOOD: Spring Security default (CSRF enabled) and form with CSRF token
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // CSRF protection is enabled by default in Spring Security for all
        // state-changing methods; simply do not call http.csrf().disable().
        http.authorizeRequests()
            .anyRequest().authenticated();
    }
}
And the form in the view (Thymeleaf example):
<form action="/changePassword" method="POST" th:object="${passwordChangeForm}">
    <!-- CSRF token field: Spring will auto-add CsrfToken object to model -->
    <input type="hidden" th:name="${_csrf.parameterName}" th:value="${_csrf.token}" />
    <input type="password" name="newPassword" placeholder="New Password" />
    <button type="submit">Change Password</button>
</form>
With this setup, when the form is rendered, Spring’s CSRF protection mechanism (through the CsrfToken object in the model) inserts a hidden input named _csrf (the default parameter name) with a long random value. When the form is submitted, Spring Security’s filter intercepts the request and looks for the CSRF token in the request parameters (or headers). If the token is missing or doesn’t match the one expected for the user’s session, Spring will reject the request with a 403 status and not invoke the controller. Only if the token is present and correct will the changePassword controller method execute. We haven’t had to write any explicit CSRF handling code in the controller – it’s all handled by the framework, which is ideal.
A few important points for Java developers: never disable CSRF protection unless you have a very specific reason and an alternative defense. If you use frameworks like JAX-RS or custom servlets without built-in CSRF support, you must implement token checks manually (similar to the Python example: generate a token, store it server-side or in the session, and verify it on submit). Also, protect all mutating actions: in Spring Security, GET requests are not CSRF-protected by default (they should be read-only per the HTTP spec). If your application performs sensitive actions via GET (a design smell, but some legacy apps do), you need to guard those explicitly – either by disallowing GET for those actions or by adding custom checks, since CSRF tokens apply to POST/PUT/DELETE by default. Spring can also expose the token in a cookie named XSRF-TOKEN (via CookieCsrfTokenRepository) for JavaScript frameworks like AngularJS to read, but the server still expects the token back in a request parameter or header – double-check your version’s documentation for specifics.
.NET/C#
In ASP.NET Core (and historically in ASP.NET MVC), anti-forgery tokens are a standard defense against CSRF. Let’s consider an ASP.NET Core Razor Pages or MVC scenario where a user can change their account settings. The insecure version might look like this:
// BAD: .NET Core Controller without anti-forgery
public class AccountController : Controller
{
    [HttpPost]
    public IActionResult UpdateEmail(string email)
    {
        if (!User.Identity.IsAuthenticated)
            return Unauthorized();
        // No [ValidateAntiForgeryToken] attribute here
        userService.UpdateEmail(User.Identity.Name, email);
        return Ok("Email updated");
    }
}
Corresponding Razor view (CSHTML) might be:
<form asp-controller="Account" asp-action="UpdateEmail" method="post">
    <input type="email" name="email" />
    <button type="submit">Update</button>
</form>
In this bad example, the controller action UpdateEmail does not have the anti-forgery validation attribute. In ASP.NET Core, the form tag helper would normally emit a token automatically, but without [ValidateAntiForgeryToken] (or a global filter such as AutoValidateAntiforgeryToken) the server never validates it; in older ASP.NET MVC, omitting the token from the form has the same effect. An attacker could create a POST request to /Account/UpdateEmail with an email parameter pointing at an attacker-controlled address, and if the user’s session cookie is present, the email change would be processed.
Now the secure pattern in .NET involves using the [ValidateAntiForgeryToken] attribute on the controller action (or globally via a filter) and including a token in the form. In ASP.NET Core’s Razor views, one can include the token by simply using a tag helper or HTML helper. Here’s the fixed version:
// GOOD: .NET Core Controller with Anti-Forgery token enforcement
public class AccountController : Controller
{
    [HttpPost]
    [ValidateAntiForgeryToken] // This attribute enables CSRF token validation
    public IActionResult UpdateEmail(string email)
    {
        if (!User.Identity.IsAuthenticated)
            return Unauthorized();
        // If the CSRF token is invalid or missing, the framework will automatically
        // abort the request before executing this line.
        userService.UpdateEmail(User.Identity.Name, email);
        return Ok("Email updated");
    }
}
And the Razor view:
<form asp-controller="Account" asp-action="UpdateEmail" method="post">
    @Html.AntiForgeryToken() <!-- Generates a hidden input with the token -->
    <input type="email" name="email" />
    <button type="submit">Update</button>
</form>
When the Razor view is rendered, @Html.AntiForgeryToken() outputs a hidden input named __RequestVerificationToken with a value that is tied to the user’s session (and a corresponding cookie named .AspNetCore.Antiforgery... is also set). The [ValidateAntiForgeryToken] attribute on the controller tells the framework to expect that token on POST. If an incoming POST lacks the token or has a mismatched one, the request will not be processed (the framework will usually return HTTP 400 with an antiforgery validation error). If everything is in order (user’s cookie and the form’s token align), then the action method executes and the email gets updated. In ASP.NET Core, this mechanism is enabled by default for any views that use the form tag helpers (they include the token automatically unless you opt out). But if you are manually constructing forms or building API endpoints, you need to ensure you’re using the Antiforgery service.
A common pitfall in .NET is when developers turn off validation globally, perhaps for API endpoints. If you do disable it for APIs that use cookies, you must then implement another CSRF defense (like requiring an API key header or using SameSite cookies) — simply disabling it across the board opens you up to CSRF. The best practice is to leave it on, and if you have a specific API that should be exempt (maybe because it’s meant to be called cross-site intentionally), then isolate that carefully and document why it’s safe.
Pseudocode
To solidify understanding, here’s a conceptual pseudocode example highlighting the contrast between a CSRF-vulnerable implementation and a CSRF-safe implementation:
Vulnerable Pseudocode:
function handleChangePassword(request, user) {
    if (user.isAuthenticated && request.method == "POST") {
        // No CSRF check -- vulnerable
        user.password = request.getParam("newPassword");
        saveUser(user);
        return "Password changed successfully";
    } else {
        return "Unauthorized or invalid request";
    }
}
In this vulnerable version, as long as the user is authenticated (determined by a session cookie or similar) and the request is a POST, the password change is executed. There is nothing to ensure the POST came from the legitimate site’s form. An attacker only needs to cause the user’s browser to send a newPassword field to this endpoint.
Secure Pseudocode:
function handleChangePassword(request, user) {
    if (user.isAuthenticated && request.method == "POST") {
        token = request.getParam("csrf_token");
        if (!token || token != user.session.csrfToken) {
            log("CSRF token missing or invalid for user " + user.id);
            return errorResponse(403, "Invalid request");
        }
        user.password = request.getParam("newPassword");
        saveUser(user);
        rotateCsrfToken(user.session); // optional: invalidate token after use
        return "Password changed successfully";
    } else {
        return "Unauthorized or invalid request";
    }
}
In the secure pseudocode, the server expects a parameter csrf_token and compares it against a value stored in the user’s session (here user.session.csrfToken). If the token is absent or doesn’t match, the function logs the incident (which could be useful for intrusion detection) and returns an error response without performing the action. Only if the token is verified does it proceed to change the password. It also optionally rotates the CSRF token (some implementations generate a new token per request or per significant action to limit reuse). The presence of this check means a random attacker cannot successfully invoke the function because they would not know the token that user.session.csrfToken holds.
This pseudocode mirrors what frameworks do under the hood. The key elements are: an unpredictable token associated with the user’s session, inclusion of that token in the legitimate request by the frontend, and verification on the backend.
Detection, Testing, and Tooling
Discovering CSRF vulnerabilities in a web application involves checking whether state-changing requests are sufficiently protected. From a defender’s perspective (such as an AppSec engineer or penetration tester), testing for CSRF typically starts with identifying all the points in the application where actions occur (form submissions, links that change state, API endpoints for modifications) and then examining how those requests are authorized.
A straightforward manual test is to see if an authenticated request can be replayed or forged from another origin. For example, suppose the application has a form to update a profile. A tester would authenticate as a user in their browser, then craft an HTML page on their local disk (or a separate domain) that issues the same request (using a form or script) and see if the action goes through. If the profile update happens without any complaint, it indicates missing CSRF protection. On the other hand, if the application requires a token, the forged request will likely fail (perhaps nothing happens, or the server returns a 403 or shows an error about a missing token).
Security testing tools can automate parts of this process. Burp Suite (a popular web security testing tool) has an active scanner that can detect potential CSRF issues by analyzing forms and responses. It will flag forms that lack an apparent random token or that have predictable token values (like static or repeated values). Burp can also attempt to send a request with a missing or incorrect token to see if it gets through. Similarly, OWASP ZAP flags forms that lack anti-CSRF tokens (for example, via its “Absence of Anti-CSRF Tokens” scan rule). While automated tools are useful, they sometimes err on the side of caution (for instance, they might flag a form as possibly vulnerable when in reality the app checks the Origin header in the background – something a scanner might not recognize). Therefore, a combination of tool-driven and manual analysis is recommended.
Another useful approach is code-assisted review. If you have access to the codebase, you can search for key indicators: for instance, in a Java Spring app, check if http.csrf().disable() appears (a red flag); in a .NET codebase, see if controllers are decorated with [ValidateAntiForgeryToken] or if the antiforgery service is being used. Similarly, look at templates: in a Jinja2 (Flask) app, do forms include {{ csrf_token() }}? In an Express app, is the csurf middleware set up? Static analysis tools sometimes have checks for missing CSRF tokens. For example, some linters or SAST (Static Application Security Testing) tools can detect patterns like an HTML form with a POST action that doesn’t have a corresponding token input, or a controller method that is state-changing without a CSRF filter. These automated code scans can quickly highlight areas to inspect manually.
Apart from direct CSRF defenses, testers and defenders should verify cookie settings and other passive defenses. Using browser developer tools, one can inspect cookies to see if SameSite is set on session cookies. If it’s not present or set to None without a good reason, that’s an issue. Also, check if sensitive actions have any referrer/origin validation. This can sometimes be observed by intentionally stripping the Origin header in a test and seeing if the server rejects the request (though replicating that might require a tool; normally the browser will always send Origin for cross-site POST, but you could use a tool like cURL or Burp to drop it).
There are specialized tools and scripts for CSRF as well. One classic tool is OWASP’s older CSRFTester, which allowed a tester to generate a PoC (Proof-of-Concept) page for a given request. While somewhat dated and simplistic, the concept is to automate creating an HTML or JavaScript snippet that, when loaded in a browser, will issue the target request. Penetration testers often prepare a malicious HTML page that includes forms or scripts to target multiple endpoints and then have a test user (or the victim in a controlled environment) load that page to see which actions succeed. If any succeed, it indicates a vulnerability.
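The PoC-generation idea is simple enough to script yourself. A sketch of such a generator (the target URL and field names below are placeholders):

```python
import html

def make_csrf_poc(action_url, method, fields):
    """Emit a self-submitting HTML form that replays a request to action_url.

    `fields` maps parameter names to values. Loading the resulting page in
    an authenticated victim's browser issues the forged request.
    """
    inputs = "\n".join(
        '  <input type="hidden" name="{}" value="{}">'.format(
            html.escape(name, quote=True), html.escape(str(value), quote=True))
        for name, value in fields.items())
    return ('<form id="poc" action="{}" method="{}">\n{}\n</form>\n'
            '<script>document.getElementById("poc").submit();</script>').format(
        html.escape(action_url, quote=True), method, inputs)

# Hypothetical target endpoint used purely for illustration
page = make_csrf_poc("https://target.example/transfer", "POST",
                     {"account": "attacker", "amount": "1000"})
```

If the target action succeeds when a logged-in test user loads this page from a different origin, the endpoint lacks CSRF protection.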
It’s also important to test the boundaries and less obvious state changes: not just financial transactions or profile updates, but things like “logout”. Some apps don’t protect logout with a CSRF token, enabling attackers to log users out – a minor issue, but still a nuisance, especially if logout triggers other state changes or, repeated, amounts to a denial of service. Similarly, test idempotent-looking actions like saving preferences. If they’re unprotected, they’re still vulnerabilities (just perhaps lower priority). A thorough test involves going through each function in the UI that performs a change and verifying the presence of a token or equivalent defense.
When an application does implement tokens, testers might also attempt to bypass them. For example, if the token is predictable or not tied to the user’s session, an attacker might be able to reuse their own token for another user. This is rare in modern frameworks but could happen in a flawed custom implementation. A scenario: the token is just a static hidden field (like a constant value or something derived from time). If a tester spots that, they’ll exploit it by using the known token value in the attack. Another bypass might involve checking multi-step processes: suppose changing an email requires two steps (enter email, then confirm). If the first step has a token but the second step doesn’t re-verify it, an attacker might target the second step directly. Testing should cover those nuances.
On the tooling front, beyond scanners and proxies, some browser extensions can help simulate CSRF. For instance, there are extensions that let you craft and send forms, or you can simply use the browser console to issue fetch requests to test CSRF (though beware of CORS – using the browser console might be blocked by CORS policy if not same-site). Often, testers will run a little local web server (with a simple HTML file) that forms cross-site requests to the target – effectively a DIY exploit page – because that mimics an actual attack more closely than a direct tool.
Finally, the OWASP Web Security Testing Guide (WSTG) provides a structured approach to testing CSRF (owasp.org). It recommends verifying if the application uses only cookies for session tracking and if so, trying both GET and POST methods for actions to see if they can be triggered cross-site. The WSTG suggests constructing a proof-of-concept exploit form and confirming whether the state change occurs. Following such a guide ensures testers don’t miss less common cases, such as CSRF in requests other than form submissions (e.g., link-based CSRF for GET requests, or even CSRF in Flash or other contexts if relevant).
In summary, detecting CSRF issues involves: inspecting the presence and quality of anti-CSRF tokens, checking cookie settings like SameSite, attempting cross-origin requests to sensitive endpoints, and using both automated scanners and manual techniques. You want to be confident that every action requires something an attacker can’t forge. If any action does not, that’s a finding that should be addressed.
Operational Considerations (Monitoring and Incident Response)
From an operational security standpoint, CSRF poses a challenge because if an attack succeeds, it often leaves behind what looks like legitimate logs. After all, the action was performed by the user’s own session. However, there are still strategies to monitor and respond to CSRF attempts.
One proactive monitoring measure is to log details about the source of requests for sensitive operations. For example, an application might log the Referer or Origin of incoming requests that have security relevance. Under normal conditions, one would expect that state-changing requests have an Origin/Referer pointing to the same site (since the user was navigating within the app). If the logs show an origin from an external site or no origin when one is typically present, that could indicate a CSRF attempt. This requires some baseline knowledge: e.g., in an all-HTTPS application, a missing Origin on a POST is unusual – that might warrant an alert or at least a deeper look. Some security teams set up alerts for spikes in certain actions or for patterns that match known CSRF exploit attempts. For instance, if suddenly many users are triggering the “transfer money” function to the same account or performing the same unusual transaction, and especially if those requests have some common referer (maybe a particular malicious site URL if referer is not stripped), that would be a big red flag.
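As a concrete sketch of that kind of check (the header names are standard; the allow-list and classification labels are hypothetical):

```python
def classify_request_origin(headers, allowed_origins):
    """Classify the Origin/Referer of a state-changing request for logging.

    Returns None when the request looks same-site, or a short reason
    string worth recording (and possibly alerting on) otherwise.
    """
    origin = headers.get('Origin') or headers.get('Referer')
    if origin is None:
        # Modern browsers attach an Origin header to cross-site POSTs, so a
        # missing header is merely unusual, not proof of an attack.
        return 'missing-origin'
    if not any(origin.startswith(allowed) for allowed in allowed_origins):
        return 'foreign-origin: ' + origin
    return None
```

A middleware could call this on every sensitive POST and emit the reason string into the audit log alongside the user ID and IP, giving the security team something concrete to alert on.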
Another operational consideration is integrating CSRF defenses with incident response. Suppose your monitoring does catch something suspicious – say a large number of CSRF token validation failures (like lots of 403 errors for invalid token). That could mean an attacker is attempting a CSRF exploit en masse (perhaps via a malicious ad or a widely shared link) and failing due to your token checks. Those events should be logged and possibly tracked. They might help identify the malicious source if the referer is present, or at least that an attempt was made. If you detect such a pattern, an incident response might involve investigating how users might have been lured (did someone post a malicious link on a forum?), and informing users if necessary.
In the event that a CSRF attack does succeed (perhaps because a particular endpoint was unprotected and got exploited), incident response should focus on scoping and damage control. The application logs will often show the actions taken (e.g., all the transactions that were performed, or accounts changed). However, identifying them as malicious might require correlating timing (did many different users all perform the same action within a short timeframe? That’s unlikely to be normal usage). Once identified, you would need to remediate the consequences: for instance, if it involved fraudulent transactions, maybe reverse them or assist users in recovery. If it changed account data, you might have to help users restore that data.
A specific example: imagine a CSRF attack that changed the payout bank account for a bunch of users to the attacker’s account. Operationally, you’d notice a series of account updates in the logs all setting bank details to a certain account number. Responding to that would involve freezing those transactions, notifying those users, etc. Moreover, it’s critical to fix the vulnerability immediately (e.g., enable CSRF protection on that endpoint) and likely expire all sessions or relevant cookies if needed, to ensure that any ongoing attack is halted.
From a monitoring perspective, one can also leverage application performance management (APM) or security analytics tools that are configured to watch for unusual requests. Some intrusion detection systems (IDS) or web application firewalls (WAFs) can be tuned to detect anomalies like an internal endpoint being called from external referrers. For example, a WAF could drop any requests that lack a proper CSRF token or have an unexpected origin (this is somewhat advanced and not foolproof, but possible). At minimum, the WAF could be configured to block known patterns (like an img tag invocation on a URL that should only ever be accessed via form POST).
Incident response should also consider user communication. If users were affected by CSRF (say their account was used to send messages without consent or some action was taken), be transparent and advise them. If credentials or sessions might have been compromised in a related way (not typically direct in CSRF, but if combined with other issues), you might force a logout for all users or reset certain tokens. In many CSRF cases, the fix is straightforward (add the missing protection), and the main damage control is ensuring any changes made by the attacker are undone.
A crucial point in operational preparedness is ensuring logs actually contain information needed. As mentioned, logging origin headers for critical actions is useful. Also, logging user agent and IP for those actions can help differentiate if the same client triggered many actions (though with CSRF it would usually be the user’s own IP and agent, unfortunately, since it’s their browser doing it). If you notice an unusual user agent or script, that might hint at something (for instance, if the attacker used a particular automation that left a trace). There’s also the possibility to log token validation failures explicitly – e.g., “CSRF token validation failed for session X from IP Y, origin Z”. This helps in post-incident analysis to see how often and from where attacks came.
Finally, consider drills or tabletop exercises: how would your team handle a CSRF exploit that went live? Practicing that scenario can highlight any gaps in monitoring. For instance, if you realize you don’t have enough detail in logs to investigate a suspected CSRF, you can improve logging proactively. Another angle is monitoring browser-side: sometimes security teams use Content Security Policy (CSP) or other browser-reporting features to catch cross-site scripting; for CSRF, there’s no direct CSP analog, but you could possibly use the Reporting API or Network Error Logging to catch certain failures. That’s fairly bleeding edge and not commonly done yet.
In summary, while CSRF attacks may not trip the obvious alarms (since they piggyback on authorized sessions), a combination of smart logging, anomaly detection, and quick response to unusual patterns is key. Incorporating CSRF-related events into your Security Information and Event Management (SIEM) system – such as spikes in 403 CSRF errors or odd Referer values – can give you an early warning. If a real CSRF incident occurs, isolate the functionality, apply a fix (like enabling tokens), and assess the impact on users' data to guide recovery actions.
Checklists (Build-Time, Runtime, and Review)
Build-Time Security Considerations: During development and build, it’s crucial to integrate CSRF protections from the very beginning. For every feature that involves state-changing requests (form submissions, API calls that modify data), developers should ensure that the mechanism for CSRF defense is in place. For example, while building a new form, a developer should include the anti-CSRF token field in the form template as a matter of habit. Many teams create a secure framework or library that automatically adds these tokens, so developers simply follow established patterns. At build-time, one can also include automated checks: incorporate a static analysis or linting rule that flags any new HTML form lacking a token, or any server endpoint that appears to modify data but doesn’t have CSRF protection. If using a framework with default CSRF protection, verify in the build that it hasn’t been unintentionally turned off. For instance, an automated test could attempt a sample CSRF attack on a test instance of the application to ensure that it fails (this can be part of integration testing: e.g., spin up the app, simulate a cross-origin request to a critical endpoint, and assert that it is blocked). Build and CI pipelines can also run security tests (like OWASP ZAP in a baseline scan mode) to catch missing CSRF tokens in common pages. Essentially, treat CSRF like a required feature: the build should fail if CSRF defenses are absent where they should exist.
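Such a regression test can be sketched without a full application by stubbing the endpoint. Everything below (handle_transfer, the trusted origin, the field names) is hypothetical and stands in for your real application under test:

```python
# CI regression sketch: forge a cross-origin POST without the anti-CSRF
# token and assert the (stub) endpoint rejects it.

TRUSTED_ORIGIN = "https://app.example"  # hypothetical production origin

def handle_transfer(headers: dict, form: dict, session_token: str) -> int:
    """Stub endpoint returning an HTTP-style status code.

    Allowing a missing Origin is a stub simplification; real policies
    for Origin-less requests need more care.
    """
    if headers.get("Origin") not in (TRUSTED_ORIGIN, None):
        return 403                      # cross-origin request blocked
    if form.get("csrf_token") != session_token:
        return 403                      # missing or wrong token blocked
    return 200                          # legitimate request accepted

def test_forged_request_is_rejected():
    token = "s3cr3t-session-token"
    # Attacker page auto-submits a form: no token, foreign origin
    forged = handle_transfer({"Origin": "https://evil.example"},
                             {"amount": "1000"}, token)
    assert forged == 403
    # Legitimate same-origin submission carries the token
    legit = handle_transfer({"Origin": TRUSTED_ORIGIN},
                            {"amount": "10", "csrf_token": token}, token)
    assert legit == 200
```

In a real pipeline the stub would be replaced by an HTTP call against a spun-up test instance, but the assertion – forged request fails, legitimate request succeeds – stays the same.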
Runtime Protections and Configuration: In production, certain configurations and settings ensure CSRF defenses remain robust. One checklist item is cookie configuration: ensure that the session cookies (and any authentication cookies) are set with the appropriate SameSite attribute (Lax or Strict as policy dictates, and Secure if SameSite is None). This is often a configuration setting in the web framework or server, and it may differ between environments (development vs. production), so double-check it in production environment variables or settings. Another runtime consideration is ensuring that load balancers or proxies do not inadvertently break CSRF checks. For example, if you have a distributed architecture and use sticky sessions, that's usually fine; but if not, and your CSRF tokens are stored server-side, you need to ensure the token check works across servers (perhaps by using signed tokens instead). Also, monitor that your CSRF protection is actually preventing what it should: for instance, keep an eye on metrics like the rate of CSRF validation failures – a sudden drop to zero might indicate the check isn't working at all, whereas some baseline of occasional failures could be normal. Another runtime checklist item is CORS policy: ensure that, unless necessary, your APIs do not allow credentials from arbitrary origins. The default should be to deny cross-origin requests that include credentials, which indirectly helps prevent CSRF exploitation via XHR. If you must allow some cross-origin access (for a public API, perhaps), ensure those endpoints don't use cookie auth (perhaps they use API keys or other authentication not vulnerable to CSRF). Keep your framework and libraries updated as well; older versions of CSRF libraries may have known bypasses or bugs, so part of runtime maintenance is applying patches to the security mechanisms themselves.
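A small illustration of the cookie-flag rules using only the Python standard library (cookie name and values are placeholders; in practice your framework sets these via configuration):

```python
from http.cookies import SimpleCookie

def session_cookie_header(session_id: str, cross_site: bool = False) -> str:
    """Build a Set-Cookie value with CSRF-hardening flags.

    SameSite=Lax keeps the cookie off cross-site POSTs; if a cookie
    genuinely must be sent cross-site (SameSite=None), browsers require
    Secure as well.
    """
    c = SimpleCookie()
    c["session"] = session_id
    c["session"]["httponly"] = True          # not readable by page scripts
    c["session"]["secure"] = True            # HTTPS only
    c["session"]["samesite"] = "None" if cross_site else "Lax"
    c["session"]["path"] = "/"
    return c["session"].OutputString()
```

A quick spot-check of the emitted header (e.g. with curl against the live site) is a cheap post-deployment verification that these flags survived the release.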
Security Code Review and Testing Checklist: When reviewing code (or performing peer review), the reviewer should systematically verify CSRF defenses. This means checking that every form-generation view includes an anti-CSRF token and that every corresponding form-handling endpoint validates the token. If reviewing a single-page app, ensure that any state-changing fetch/AJAX call is either using a proper token (often frameworks like Angular handle this by reading a cookie and sending a header) or that the endpoint is otherwise protected (non-simple request requirement or explicit checks). Reviewers should be wary of any custom CSRF solutions – they should scrutinize if the custom implementation truly covers all bases (is the token random enough? tied to user? not leaked anywhere? one-time use or appropriately scoped?). If a developer has disabled CSRF in config, that should raise a big discussion: why is it disabled? Is there an alternative control in place? Often you’ll find it was disabled out of convenience without a good alternative – then the review should mandate turning it back on. During security testing (which might be toward the end of development or part of QA), testers should go through a checklist like: For each user-modifiable action, attempt an unauthorized request from another origin. They should confirm that actions cannot be taken without the token. It’s also wise to test token strength: ensure the token changes across sessions (to avoid one user’s token working for another), and maybe even across requests (if that’s the design). Another review item is ensuring the token isn’t exposed in places it shouldn’t be (for example, not in URLs or logs).
Deployment and Post-Deployment: After the application is deployed, perform a quick verification in the live environment: pick a couple of key forms and ensure that the HTML has tokens and that they work. Sometimes a misconfiguration might cause the token not to render (for instance, if a template was forgotten). Also, verify that cookies have the intended flags by inspecting an HTTP response (perhaps using curl or browser dev tools on the live site). During deployment, some teams also run a mini security regression test to make sure nothing critical like CSRF got broken by the new release.
In a checklist format (conceptually): at build time, ensure inclusion of CSRF tokens is part of definition-of-done for any feature. At runtime, ensure cookies and frameworks are configured correctly (no global CSRF disable flags, SameSite attributes in place). At code review, verify the presence of CSRF mitigation on every relevant endpoint. At testing, actively try to break CSRF and ensure it holds up.
Common Pitfalls and Anti-Patterns
Implementing CSRF protections is straightforward with modern frameworks, but teams still make common mistakes. One pitfall is disabling security features for convenience. Developers might disable CSRF protection in the development environment because “it was causing my POST requests to fail in Postman” and then forget to re-enable it in production. This is unfortunately common. It’s an anti-pattern to turn off CSRF defenses rather than figuring out how to include the tokens in your testing or how to configure your API calls properly. Instead, use the tools available (like including the CSRF token header in Postman) or development-mode configurations that still exercise the token. Relying on disabled CSRF protection even temporarily can lead to it being permanently overlooked.
Another anti-pattern is using predictable or static tokens. This often happens in homemade implementations. For example, a developer might insert a hidden field <input type="hidden" name="token" value="12345"> and on the server just check that some token exists (or worse, not check at all). A static token that never changes and isn’t tied to a user provides essentially no security – an attacker can view source once and then forever include that token in their malicious requests. Similarly, using something like the user’s session ID as the token value (and not removing it from cookies) would be a flaw: if an attacker can in some way guess or obtain that (maybe via XSS or other means), then they can forge requests easily. The token needs to be high entropy and ideally separate from things like session identifiers.
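By contrast, a sound token is unpredictable and compared in constant time. A minimal sketch using only the Python standard library (function names are illustrative):

```python
import hmac
import secrets

def new_csrf_token() -> str:
    """Generate a high-entropy per-session token (128 bits of randomness)."""
    return secrets.token_urlsafe(16)

def tokens_match(expected: str, received: str) -> bool:
    """Constant-time comparison avoids leaking the token via timing."""
    return hmac.compare_digest(expected, received)
```

Two tokens generated this way will (for all practical purposes) never collide, and `secrets` draws from the OS CSPRNG rather than a predictable PRNG like `random`.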
A related pitfall is exposing the token in unsafe ways. For example, putting the CSRF token in the URL (query string) of forms or navigation links is discouraged. Tokens in URLs can end up in browser history, server logs, or Referer headers to other sites – potentially leaking them. The best practice is to keep tokens in hidden form fields or in headers, where they aren’t as easily logged or leaked. If you use cookies for tokens (the double-submit cookie pattern), note that the token cookie generally cannot be HttpOnly: client-side script must be able to read it to copy it into a request header (frameworks like Angular depend on this). That is fine by design, but be aware that if XSS is present, an attacker can steal the CSRF token cookie. The pitfall here is a false sense of security: developers might think, “We have CSRF tokens, so even if XSS happens, we’re safe.” Not true – XSS can bypass CSRF easily by reading the token or directly making authorized requests. So an anti-pattern is not addressing XSS because you think CSRF tokens cover it; they don’t. CSRF tokens defend against cross-site attacks, not same-site script attacks.
Another common pitfall is protecting only some endpoints and not others. Sometimes an application will have diligently added tokens to form endpoints but forgotten that there are also JSON endpoints or an API that the web app calls via AJAX. If those endpoints accept the session cookie and don’t have their own token or header check, they become an attack vector. Attackers look for the weakest link – maybe the main forms are secure, but the mobile version of the API or an administrative interface is not. It’s an anti-pattern to think “I only need CSRF tokens on forms” – any state-changing request, via any route, needs protection. WebSocket handshakes do carry an Origin header, but if the server relies on cookies and never validates that header, a CSRF-like issue (cross-site WebSocket hijacking) arises. Similarly, don’t overlook logout or other less critical actions; while not as harmful, they should ideally be covered, or left unprotected only intentionally, where harmless and documented.
A nuance to mention is the login CSRF pitfall: many sites historically didn’t protect the login form with a CSRF token, reasoning that “the worst that can happen is a user gets logged in to a stranger’s account”. But that can be problematic: if a victim is silently logged into an attacker’s account, the attacker can later see what the victim did, or the victim may save something personal into the attacker’s account. Modern thinking suggests adding a CSRF token to login forms as well. It’s a pitfall to treat login or logout forms as not needing CSRF defense. The same goes for other non-idempotent but not obviously dangerous actions – best practice is to cover them all and avoid special cases in your security implementation.
An anti-pattern in some designs is relying solely on HTTP methods for protection. For example, developers might think “We only allow GET for safe actions and POST for unsafe, so we’re okay: as long as we have no side effects on GET, we won’t have CSRF”. This is misleading: while it’s true you should use proper HTTP verbs (GET for fetch, POST/PUT for changes), CSRF isn’t automatically solved by that. Attackers can and will use POST (via forms, or a script that creates a form and submits it, or even an <iframe> targeting a form endpoint). The OWASP Testing Guide notes that simply using POST doesn’t stop CSRF (owasp.org). Yet some developers mistakenly think their API being “RESTful” or “POST-only for changes” is enough. It’s not – you still need tokens or other measures. So the pitfall is a false sense of security from correct HTTP verb usage.
Another anti-pattern: assuming same-origin policy or CORS will save you without explicit measures. For example, a team might say: “Our API is JSON and doesn’t allow cross-origin, so we’re safe.” But if they still rely on cookies for auth, a simple form post from another site can still reach them if they accept Content-Type: application/x-www-form-urlencoded. Even if they accept only JSON, an attacker can try a <form> with JSON in a hidden field and a content type of text/plain to avoid a preflight (there have been creative exploits that confuse content-type handling). The safer approach is to explicitly require a token or custom header rather than hoping CORS will block the request. CORS helps because by default it prevents the attacker from reading responses, but it doesn’t necessarily prevent the request from being made: a preflight blocks non-simple requests, but a simple form GET or POST is sent regardless. So it’s an anti-pattern to rely entirely on default browser restrictions.
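An explicit server-side check along these lines is more dependable than hoping browser defaults block the request. The allow-list below is hypothetical; the header semantics (Sec-Fetch-Site, Origin) are standard:

```python
ALLOWED_ORIGINS = {"https://app.example"}   # hypothetical trusted origin(s)

def is_request_allowed(headers: dict) -> bool:
    """Explicit origin check for state-changing requests.

    Prefer Sec-Fetch-Site when the browser sends it (Fetch Metadata);
    fall back to comparing the Origin header against an allow-list.
    """
    site = headers.get("Sec-Fetch-Site")
    if site is not None:
        # "none" means a direct navigation (address bar, bookmark)
        return site in ("same-origin", "none")
    origin = headers.get("Origin")
    if origin is None:
        # State-changing request with neither header: reject conservatively
        return False
    return origin in ALLOWED_ORIGINS
```

This is a defense-in-depth layer, not a replacement for tokens: older clients and some proxies may strip or omit these headers, which is why the conservative default is to reject.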
One subtle pitfall is related to the double-submit cookie pattern: some developers implement it incorrectly by never actually comparing the cookie and the request token on the server. They set a cookie and expect the same value in the request, but if they forget the comparison and merely check that both values are present, the protection is broken. It sounds silly, but misordering or forgetting a check is possible in custom code. Always verify both sides: cookie vs. parameter. Another pitfall with double-submit is not considering subdomain interactions. If your application sits on app.example.com and you set the CSRF token cookie for .example.com (all subdomains), an attacker with a foothold at evil.example.com (via a controlled subdomain, user content hosting, etc.) could read or set the cookie in some scenarios. It’s a niche case, but to avoid it, scope cookies tightly to the host. Setting cookies at the top domain is rarely necessary unless you truly need them across subdomains; doing so can open CSRF or other issues (like session fixation, as the OWASP Cheat Sheet warns (cheatsheetseries.owasp.org)). So the anti-pattern: an overly broad cookie scope can undermine CSRF protections.
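The comparison step that broken implementations omit is small. A sketch of the server-side check (cookie and form field names are illustrative):

```python
import hmac

def validate_double_submit(cookies: dict, form: dict) -> bool:
    """Double-submit check: the server must COMPARE both values.

    Merely confirming that a cookie and a form field exist is the
    broken variant - an attacker can submit any form value.
    """
    cookie_token = cookies.get("csrf_token")
    request_token = form.get("csrf_token")
    if not cookie_token or not request_token:
        return False
    # Constant-time equality on the two copies of the token
    return hmac.compare_digest(cookie_token, request_token)
```

The attacker cannot read or set the victim's cookie for your host (absent XSS or the subdomain issues above), so they cannot supply a matching pair.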
Another common mistake is not updating CSRF defense when the application evolves. For example, an app that originally was server-rendered (with CSRF tokens in forms) might later add a single-page component or mobile app usage. If they don’t adjust the CSRF strategy (maybe issuing tokens via an endpoint for the SPA), they might inadvertently leave that new portion unprotected. The pattern of forgetting to include CSRF in new features is common – it’s less a pattern in code and more in process.
Lastly, a pitfall on the operational side: ignoring cookie warnings from scanners. Modern scanning tools may flag cookies that lack SameSite. Some developers dismiss that “because we have tokens anyway”, but missing SameSite is worth fixing as defense-in-depth. It’s easy and should be done; not doing it is an anti-pattern given how straightforward it is to add in most frameworks (often a one-line configuration change).
In conclusion, the anti-patterns include disabling or bypassing established frameworks, using weak or static tokens, partial coverage of endpoints, and misunderstanding the underlying web behaviors. Avoid these by sticking to framework defaults, comprehensively covering all state changes, and using well-tested patterns rather than ad-hoc solutions.
References and Further Reading
OWASP Cross-Site Request Forgery Prevention Cheat Sheet – An official OWASP guide detailing multiple strategies to defend against CSRF, including secure usage of anti-CSRF tokens (synchronizer tokens, double-submit cookies with HMAC), the role of SameSite cookie attribute, and best practices for implementation.
OWASP Application Security Verification Standard 4.0 – The OWASP ASVS is a standard for web application security requirements. CSRF is highlighted in section 4.2 (Session Management) as a required control for all verification levels. This reference underlines the necessity of strong anti-CSRF mechanisms in any secure application.
Mozilla Developer Network – Cross-Site Request Forgery (CSRF) Explainer – MDN provides an overview of CSRF attacks and modern defenses. It explains how CSRF works, and discusses defensive techniques like CSRF tokens, SameSite cookies, and the use of Fetch Metadata request headers, in a developer-friendly manner.
OWASP Web Security Testing Guide – Testing for CSRF – A comprehensive guide for security testers on how to evaluate web applications for CSRF vulnerabilities. It covers both manual testing techniques and things to look for (like absence of tokens in forms), helping ensure that all endpoints are properly evaluated.
MITRE CWE-352: Cross-Site Request Forgery – The Common Weakness Enumeration entry for CSRF provides a formal definition, examples of the weakness, and references to its occurrences. It’s useful for understanding CSRF in the context of vulnerability taxonomy and for seeing related variants or mitigations as cataloged by security researchers.
This content is authored with assistance from OpenAI's advanced reasoning models (classified as AI-assisted content). Material is reviewed, validated, and refined by our team, but some issues may be missed and best practices evolve rapidly. Please use your best judgment when reviewing this material. We welcome corrections and improvements.
Send corrections to [email protected].
We cite sources directly where possible. Some elements may be derived from content linked to the OWASP Foundation, so this work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. You are free to share and adapt this material for any purpose, even commercially, under the terms of the license. When doing so, please reference the OWASP Foundation where relevant. JustAppSec Limited is not associated with the OWASP Foundation in any way.
