Cross-Origin Resource Sharing (CORS)
Overview
Cross-Origin Resource Sharing (CORS) is a browser mechanism that allows web services to relax the same-origin policy in a controlled fashion. By default, the web’s same-origin policy permits scripts loaded on one origin (a combination of scheme, host, and port) to access resources from the same origin only. This restriction is a fundamental security feature to isolate web content by origin. However, modern applications often need to request resources or APIs hosted on a different domain or port, such as a front-end application calling a back-end API on another domain. CORS provides a standardized way for servers to declare which cross-origin requests are safe to serve, enabling legitimate integrations (for example, between a web app and an API on different domains) while maintaining the security boundary enforced by browsers (MDN Web Docs – CORS).
CORS is implemented via specific HTTP response headers that a server returns, indicating what origins, methods, and headers are allowed. When a web page script tries to fetch a resource from another origin (using XMLHttpRequest, the Fetch API, etc.), the browser automatically adds an Origin header to the request identifying the source origin. The server can then decide whether to allow the request. If allowed, the server responds with appropriate Access-Control-* headers (such as Access-Control-Allow-Origin) granting permission for the browser to proceed and expose the response to the requesting script. For simple cross-origin GET requests, the presence of a valid Access-Control-Allow-Origin header in the response is sufficient for the browser to permit the sharing of the resource. For more complex requests (such as those using non-safelisted request headers or methods other than GET, HEAD, or POST), the browser will first perform a “preflight” OPTIONS request to the server. This preflight is used to check allowed methods, headers, and other policies before the actual request is sent. Only if the server’s response to the preflight indicates approval (e.g., with headers like Access-Control-Allow-Methods and Access-Control-Allow-Headers) will the browser proceed with the actual request. Through this handshake, CORS aims to ensure that only explicitly allowed cross-origin interactions occur, preventing unauthorized access by default while enabling necessary cross-domain functionality (OWASP WSTG – Testing CORS).
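The simple-versus-preflighted distinction above can be made concrete with a small sketch. This is a simplification of the Fetch standard’s rules (real browsers check additional conditions, such as streaming bodies), and the function and constant names are illustrative, not a real library API:

```python
# Rough sketch of the browser's decision: is this cross-origin request
# "simple" (sent directly) or does it require a preflight OPTIONS check?
# Simplified from the Fetch standard's CORS-safelisted rules.

SIMPLE_METHODS = {"GET", "HEAD", "POST"}

# Request headers a browser may attach cross-origin without a preflight.
SAFELISTED_HEADERS = {"accept", "accept-language", "content-language", "content-type"}

# Content-Type values that stay preflight-free.
SAFELISTED_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method: str, headers: dict) -> bool:
    """Return True if the browser would send an OPTIONS preflight first."""
    if method.upper() not in SIMPLE_METHODS:
        return True
    for name, value in headers.items():
        if name.lower() not in SAFELISTED_HEADERS:
            return True  # custom header, e.g. X-Api-Key
        if name.lower() == "content-type":
            media_type = value.split(";")[0].strip().lower()
            if media_type not in SAFELISTED_CONTENT_TYPES:
                return True  # e.g. application/json triggers preflight
    return False
```

This is why a plain form POST crosses origins silently while a JSON API call from the Fetch API triggers an OPTIONS round trip first.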
In summary, CORS is essentially a contract between a web client and server, where the server declares permissible cross-origin behavior. It allows scenarios like a JavaScript frontend on https://domainA.com to securely request data from an API on https://api.domainB.com, as long as api.domainB.com’s responses include headers stating that domainA.com is trusted. Absent those headers, the browser would block access to the response data as a same-origin policy violation. This controlled relaxation of origin isolation is crucial for building modern web architectures (such as single-page applications with separate API backends) and for integrating third-party services without compromising security. The importance of proper CORS configuration lies in striking the right balance: enabling legitimate cross-origin use cases while not inadvertently exposing sensitive data to malicious sites. Missteps in this configuration can undermine the same-origin policy and lead to serious vulnerabilities, which is why AppSec engineers and developers must deeply understand and correctly implement CORS.
Threat Landscape and Models
The threat landscape for CORS revolves around the scenario in which a malicious website (or script) from an untrusted origin attempts to interact with a target web application in ways that would normally be prohibited by the same-origin policy. Under the default model, a script from attacker.com running in the user’s browser cannot read data directly from victim.com because the browser blocks cross-origin reads. CORS changes this landscape by introducing a policy layer where victim.com can explicitly allow certain cross-origin requests. The primary threat actor in this context is an external origin (controlled by an attacker) trying to abuse overly permissive CORS settings on the target application. In a typical threat model, the attacker’s goal is to bypass the browser’s built-in restrictions and gain read access to sensitive data or interactions that belong to another domain. The attacker might host a malicious script on a domain they control and entice users (through phishing, malicious ads, or cross-site scripting on a third site) to execute that script in their browser. If the target application’s CORS policy is misconfigured to trust undesired origins, the attacker’s script can silently retrieve protected data from the target (using the victim user’s credentials or session) and potentially send it back to the attacker.
From a threat modeling perspective, CORS misconfigurations are essentially failures in defining trust boundaries. A web application that unintentionally authorizes arbitrary origins (or an overly broad set of origins) effectively breaks the isolation that the same-origin policy provides. An important aspect of the model is that browsers enforce CORS on behalf of the server — meaning if the server mistakenly allows a malicious origin, the browser will comply and share the data with that origin’s script. Conversely, if the server does not allow an origin, the browser will block that request’s response from being accessed. Attackers understand that they cannot directly bypass CORS from the client side (since the browser’s enforcement is built-in and cannot be disabled by scripts), so they focus on exploiting weaknesses in the server’s declared policy. They will probe the target application by sending requests with various Origin header values to see if the server responds with permissive Access-Control-Allow-Origin headers. For example, an attacker might try using their own domain in the Origin header to check if the server echoes it back in Access-Control-Allow-Origin – a sign that the server reflects origins and might trust any domain. The threat model also considers that attackers could register crafty domain names to fool simplistic origin checks (for instance, obtaining a domain name that ends with a trusted string). If a server’s policy validation is naively implemented (such as endsWith("trusted.com")), an attacker using eviltrusted.com or trusted.com.evil.org as a domain might slip through the filter. Thus, a CORS threat model must include both outright wildcard allowances and subtle logic bugs in origin validation.
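The suffix-matching flaw described above is easy to reproduce in a few lines. This sketch uses hypothetical function names; the point is the missing dot boundary in the naive check, and how anchoring the match fixes it:

```python
from urllib.parse import urlparse

def naive_is_trusted(origin: str) -> bool:
    # Flawed: no boundary check, so "eviltrusted.com" passes too.
    host = urlparse(origin).hostname or ""
    return host.endswith("trusted.com")

def strict_is_trusted(origin: str) -> bool:
    # Safer: exact host match, or a subdomain separated by a literal dot.
    host = urlparse(origin).hostname or ""
    return host == "trusted.com" or host.endswith(".trusted.com")
```

An attacker only needs to register one look-alike domain for the naive version to hand over trust.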
Another dimension of threats in CORS involves the use of user credentials and cookies. Browsers, by default, will not include cookies or HTTP authentication in cross-origin requests unless the request is made with XMLHttpRequest.withCredentials = true or equivalent. Even then, the server must explicitly allow credentials via the Access-Control-Allow-Credentials: true header. If an attacker can exploit a CORS policy that allows their origin and also permits credentials, the impact escalates significantly: the malicious script can not only issue requests to the victim application but also automatically include any session cookies or auth tokens that the user’s browser has for the target domain. In the threat model, this scenario effectively turns a cross-origin request into an authenticated request on behalf of the user, with the response delivered to the attacker’s script. The browser’s protections (same-origin policy and cookie scoping) normally prevent an evil site from doing this, but a misconfigured CORS setting defeats those protections. Therefore, part of the threat landscape includes what is essentially a form of cross-site request forgery combined with unauthorized data disclosure. The attacker’s website becomes a proxy that can perform privileged actions or data retrieval in the context of the victim user’s session on the target site, something known as a CORS-based attack or CORS-enabled CSRF. Security researchers have demonstrated numerous real-world examples of this attack pattern, underscoring that whenever a site’s CORS policy is too broad, any data or action accessible by a user’s session could be exposed to external origins (PortSwigger Research – Exploiting CORS Misconfigurations).
In building a threat model for an application, engineers should identify where CORS is used and consider the assets at risk. Typical assets include sensitive REST API endpoints (user information, financial data, etc.) or privileged actions (like changing account settings) that the web application provides. If these endpoints are protected by authentication (cookies or tokens), an attacker needs CORS misconfiguration plus an active user session to exploit. If some endpoints are intentionally public (no auth needed), a wildcard CORS policy might seem benign – but even then, it might enable abuse (like an attacker’s site mass-scraping the public data or using the victim’s browser as an unwitting proxy). The threat model must also consider internal origins or subdomains if the policy allows them; sometimes a CORS policy is meant to allow a company’s other domains, but if one of those domains is compromised or less secure, it could become a stepping stone for an attacker (a concept called pivoting across trusted origins). In summary, the threat landscape for CORS misconfigurations includes external malicious sites targeting user data, internal domain interactions that could be abused, and logic flaws where the server’s definition of “trusted origin” can be tricked by cunning inputs.
Common Attack Vectors
Common attack vectors for exploiting CORS configurations typically involve an attacker-controlled environment (usually a web page or script on a domain the attacker owns) and a target application with an overly permissive CORS policy. One of the most straightforward vectors is the Origin reflection attack. In this scenario, the server is coded to take whatever value is sent in the Origin request header and blindly reflect it in the Access-Control-Allow-Origin response header. This often occurs when developers use convenience libraries or simplistic code snippets that set Access-Control-Allow-Origin to the incoming origin without an allowlist. An attacker can take advantage of this by sending a request from their malicious site (say evil.com) with Origin: evil.com. If the server responds with Access-Control-Allow-Origin: evil.com (and typically Access-Control-Allow-Credentials: true if credentials are needed), then the attacker’s JavaScript running on evil.com can now read the response from the target. This vector is essentially the “holy grail” for an attacker – it means the server will trust any origin, as the reflection implies no effective restriction. Numerous real-world vulnerabilities follow this pattern, as it only takes a single insecure configuration to expose an entire API. For example, if an API endpoint on api.victim.com is configured to reflect origins for CORS, a script on attacker.com could fetch api.victim.com/userData and the browser will allow it, leaking the user’s data to the attacker’s page.
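A minimal sketch of the reflection anti-pattern shows how little code it takes to dissolve the origin boundary. The function name is hypothetical; in practice this logic hides inside a convenience middleware or a copy-pasted snippet:

```python
def insecure_cors_headers(request_headers: dict) -> dict:
    """Vulnerable: reflects any caller's Origin with no allowlist check."""
    origin = request_headers.get("Origin", "")
    return {
        # Whatever origin the attacker's page sends comes straight back,
        # so the browser treats that origin as trusted.
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Credentials": "true",
    }
```

With this in place, a script on evil.com receives Access-Control-Allow-Origin: https://evil.com and the browser happily exposes the authenticated response to it.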
Another common attack vector involves overly broad whitelists or wildcard configurations. A classic mistake is using the wildcard * to allow all origins. While the CORS specification forbids combining * with credentials (browsers will block such responses), there are still dangerous cases. If Access-Control-Allow-Origin: * is enabled on an API that does not require authentication (for example, a public API that returns user-specific data based on tokens or other means), any website can access that data simply by issuing requests. This might not violate the user’s session integrity in the cookie sense, but it can allow unanticipated third-party use of the interface in ways that lead to abuse or privacy issues. More critically, some servers mistakenly send * even when they do require credentials, or they attempt to circumvent the browser restriction by dynamically setting specific origins while effectively allowing all. A known vector is a server that sets Access-Control-Allow-Origin to * for authenticated responses due to a misconfiguration. The browser will block scripts from reading the response if Access-Control-Allow-Credentials: true is also present (since * with credentials is disallowed), but such a misconfiguration can cause other issues, and it signals a misunderstanding that an attacker might leverage differently (for instance, by finding an endpoint where credentials aren’t needed, or by exploiting the behavior once the server omits the credential flag). Misconfigurations that try to allow all subdomains via a partial wildcard (e.g., setting Access-Control-Allow-Origin: *.example.com in the hope of trusting any subdomain) are also seen. Because the header value is matched as a literal string, *.example.com will never match a real origin, likely leaving the resource effectively unshared. Attackers cannot exploit *.example.com directly (because it does not work), but the confusion it creates can lead developers to add fallback logic (such as reflecting any origin that contains “example.com”), which then becomes exploitable.
A more subtle vector is exploiting pattern-matching weaknesses in CORS validations. Imagine a server that intends to allow only https://trusted.com and https://trusted.net. Instead of a robust check, a developer might implement a naive substring check: if origin contains "trusted" then allow. An attacker could register a domain like https://nottrusted.com or even https://trusted.com.attacker.org and bypass this filter. One example documented in security testing guides is appending an allowed domain as a subdomain of an attacker’s domain (e.g., Origin: https://trusted.com.attacker.com). If the check uses indexOf("trusted.com") or similarly flawed logic, it will pass, and the server will echo back that origin in the response (OWASP WSTG – Testing CORS). The attacker’s origin trusted.com.attacker.com is not actually related to the real trusted.com, but because the validation was incomplete (not anchoring the match to the end of the string, for instance), it slips through, leading to unauthorized access. This kind of vulnerability is essentially an input validation error specific to CORS logic and is a favored trick in CORS exploitation playbooks.
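In Python, the equivalent of that indexOf check is the in operator, and the bypass is one line. This sketch (with an illustrative function name) shows why an unanchored match fails:

```python
def substring_check(origin: str) -> bool:
    # Flawed: "trusted.com" may appear ANYWHERE in the origin string,
    # including as a subdomain of the attacker's own domain.
    return "trusted.com" in origin
```

The attacker's origin https://trusted.com.attacker.com contains the allowed name as a substring and sails through, even though it belongs entirely to attacker.com.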
Another vector to consider is the handling of the null origin. The origin null is a special case that can arise in certain contexts (for example, when a resource is loaded from a local file, sandboxed iframe, or data URL, the Origin can be null). Some servers, in an attempt to be broadly permissive, may unintentionally allow Origin: null in their CORS policy. An attacker could exploit this by convincing a user to open a malicious local HTML file or use a data URI, which then has a null origin and can make requests to the target. If the server responds with Access-Control-Allow-Origin: null, the malicious script (running from the local file or sandbox) could access the response. While this is a less common scenario, it is a known vector in advanced CORS exploitation. Attackers might deliver a downloadable HTML file or an email attachment that, when opened, runs scripts with a null origin to target an application that trusts null. This technique circumvents the need for a registered domain and demonstrates how even edge-case origins must be considered in a robust CORS policy.
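The null-origin pitfall can be stated as a one-line policy mistake. The names here are illustrative; the point is simply that the literal string "null" in an allowlist trusts every sandboxed iframe, data: URL, and local file, all of which an attacker can produce:

```python
RISKY_ALLOWLIST = {"https://app.example.com", "null"}  # "null" should never be here

def origin_allowed(origin: str, allowlist: set) -> bool:
    # Exact-match lookup; the flaw is in the allowlist's contents,
    # not the matching logic.
    return origin in allowlist
```

A malicious HTML attachment opened from disk sends Origin: null, and this policy accepts it exactly as if it were a trusted partner site.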
In summary, the common attack vectors boil down to leveraging any misstep where the server declares a request as safe when it shouldn’t. Whether it’s the universal wildcard, careless reflection of caller origins, incomplete domain pattern checks, or special-case origins like null, attackers will try all possible variations. The attacker’s objective in each case is to trick the server into including the attacker’s domain in the Access-Control-Allow-Origin response (and typically to also allow credentials or relevant headers) so that the attacker’s JavaScript can break the cross-domain barrier. With that barrier down, the attacker can proceed to use the victim’s browser as a tool to extract data or perform actions on the target site. It’s worth noting that these attacks are silent from the user’s perspective – unlike a phishing site that visibly imitates another site, a CORS attack just uses an invisible script. The only traces might be browser console messages or network logs, which regular users don’t check. This makes detection by victims extremely unlikely, further incentivizing attackers to seek out and exploit any CORS misconfiguration they can find.
Impact and Risk Assessment
The impact of insecure CORS settings can be severe, often comparable to critical vulnerabilities like cross-site scripting or cross-site request forgery, since it can lead to unauthorized access to sensitive information and functionalities. When a web application’s CORS policy is misconfigured to allow untrusted origins, the worst-case scenario is a data breach: an attacker can retrieve any information that a legitimate user (with an active session) could access. For example, consider an online banking API that mistakenly reflects arbitrary origins in Access-Control-Allow-Origin while also sending Access-Control-Allow-Credentials: true for account data endpoints. An attacker who lures a logged-in user to a malicious site could silently invoke the banking API via the victim’s browser and extract account balances, transaction history, personal details, etc. This effectively bypasses the same-origin policy’s protection of those assets. Even though the banking site might have strong authentication controls, they are rendered moot because the attacker is piggybacking on the user’s authenticated session via the allowed cross-origin request. In terms of risk, such a scenario would likely be rated as High or Critical severity, since it violates confidentiality and possibly integrity (if the attacker can also perform state-changing actions and read the results).
From a CVSS (Common Vulnerability Scoring System) perspective, a CORS misconfiguration that allows any origin with credentials often scores high on Confidentiality impact (complete breach of data confidentiality is possible). The Attack Vector is “Network” (attacker can exploit it over the web), the Attack Complexity is low (no special conditions required beyond the misconfig), and privileges required are none (the attacker doesn’t need their own account, just a victim who is logged in). User interaction is required (the victim needs to visit the attacker’s web page), which might reduce the score slightly, but since that’s a very achievable condition (phishing, malicious ads, or even hidden attacks in legitimate sites via XSS), it doesn’t provide much mitigation. Thus, organizations like OWASP consider permissive CORS a serious issue — indeed, it falls under the OWASP Top 10 category of “Security Misconfiguration” or “API vulnerabilities” depending on context. It’s also explicitly covered by security standards; for instance, the OWASP Application Security Verification Standard (ASVS) includes requirements to verify that the application’s cross-domain policy is properly configured and does not allow unauthorized origins (OWASP ASVS 4.0). The existence of CWE entries such as CWE-942 (Overly Permissive Cross-Domain Policy) and CWE-346 (Origin Validation Error) underscores that the industry recognizes and catalogs these misconfigurations as distinct security weaknesses with significant impact.
One important facet of risk assessment is determining what data or functionality is exposed by the misconfiguration. If an application’s CORS policy is wide open but the application only serves public, non-sensitive data, the risk might be deemed low. For example, if a public website has an open API that returns the current weather or stock prices (data that’s not confidential), then allowing all origins to access it isn’t a vulnerability per se — it’s by design, and the data can be considered public. Many content delivery networks (CDNs) and public APIs intentionally set Access-Control-Allow-Origin: * to allow broad use of their resources. The key difference is that no credentials or user-specific secrets are involved in such cases. Therefore, part of the risk assessment is understanding context: CORS misconfiguration is risky primarily when it involves protected or sensitive resources. So, an assessor should identify if the endpoints with broad CORS permissions require authentication or serve user-specific data. If they do, and the policy is broad, that’s a critical flaw. If not, the risk might be mitigated by the nature of the data.
Another aspect to consider is brand and user trust impact. If a vulnerability allows an attacker to read a user’s data, it could erode users’ trust in the application. In regulated industries (like healthcare or finance), such a data leakage might trigger legal penalties and breach disclosure requirements. An attacker exploiting CORS could potentially chain it with other attacks: for instance, if they can read sensitive data via CORS, they might use that data to escalate privileges or perform targeted social engineering. There’s also a less obvious integrity angle: suppose an API has an endpoint that generates some kind of action or transaction, and returns a result or an ID (e.g., “transfer money and return new transaction ID”). Normally CSRF alone could cause the action but not let the attacker see the response (like confirming a transfer succeeded or getting the resultant data). With CORS misconfig, the attacker could both trigger the action and view the response, making the attack more effective (they can confirm and log what happened, possibly chaining into another step). Thus, CORS misconfiguration can turn what would have been a one-way attack into a full two-way compromise.
In summary, the risk of insecure CORS ranges from negligible to critical depending on what is exposed. At the high end, it is effectively an open door for cross-domain attackers to harvest data and perform actions on behalf of users – a situation akin to having a cross-site scripting vulnerability that doesn’t require injecting script into the target site because the target site has already allowed the attacker’s site to become a script proxy. On the lower end, if an application’s CORS config is broad but truly nothing sensitive is available or the site doesn’t use cookies (and requires tokens that an attacker can’t obtain), the impact might be more limited (perhaps abuse of functionality or scraping of publicly available data). However, caution is warranted: many developers think certain data is “not sensitive” until it’s combined or at scale (e.g., an open endpoint listing user IDs might seem harmless until someone correlates it with other info). From a prudent AppSec standpoint, any unnecessary relaxation of the same-origin policy is an increase in attack surface and should be justified by a valid use case and accompanied by other controls. The worst-case impact is significant enough that industry guidelines strongly advise avoiding broad CORS policies; for example, the OWASP API Security Top 10 highlights misconfiguration issues like these as common pitfalls to avoid. Therefore, risk assessment should err on the side of caution: treat permissive CORS as a likely high-severity issue unless proven otherwise by context.
Defensive Controls and Mitigations
Defending against CORS-related vulnerabilities primarily means configuring your application’s CORS policy in a safe, precise manner. The cornerstone of a secure CORS configuration is the principle of least privilege applied to allowed origins. In practice, this means you should explicitly specify the exact origins that need access to your resources, and no more. Instead of using * (allow all) or reflecting arbitrary origins, maintain an allowlist of trusted domains. For example, if your web API at api.example.com is only consumed by your single-page app at app.example.com, then set Access-Control-Allow-Origin: https://app.example.com in responses (and no other domain). By doing so, even if an attacker’s site tries to make a request, the browser will see that the attacker’s origin is not in the allowlist and will block the response. Practically, implementing this might involve configuration in your web framework or server: many frameworks allow you to set a list of permitted origins (often via a CORS middleware or filter). This is far safer than writing custom logic to parse and match origins, which as we saw can be error-prone. Using exact string matches for allowed origins or well-tested pattern matching (like enforcing a known suffix with a preceding dot, etc. if subdomains are allowed) is essential. Dynamic reflection of the Origin header should only be done if it’s paired with a check against a known list of allowed origins. For instance, the server can check “is the Origin header value a member of my allowed set? If yes, echo it; if not, respond with no CORS headers (or an error).” That way, unauthorized origins are never echoed back. This approach addresses the common reflection vulnerability by adding the missing validation step.
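The validate-then-echo approach described above might be sketched as follows. Origins and function names are illustrative; in a real deployment this logic would usually live in your framework’s CORS middleware rather than hand-rolled code:

```python
from typing import Optional

# Exact-match allowlist: the only origins permitted to read responses.
ALLOWED_ORIGINS = {
    "https://app.example.com",
    "https://admin.example.com",
}

def cors_headers_for(origin: Optional[str]) -> dict:
    """Echo the Origin back only if it is an exact allowlist member."""
    if origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            # Tell caches that the response varies by requesting origin.
            "Vary": "Origin",
        }
    # Unknown origin: send no CORS headers at all, so the browser
    # blocks the cross-origin read by default.
    return {}
```

Because matching is by exact string membership, none of the suffix, substring, or null-origin tricks discussed earlier can slip through.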
Another important control is to restrict the allowed HTTP methods and headers to only those necessary. CORS policies include headers like Access-Control-Allow-Methods and Access-Control-Allow-Headers which inform the browser what methods (GET, POST, PUT, DELETE, etc.) and request headers are permitted in cross-origin requests. Following best practices, you should not simply allow all methods if your API only ever expects (for example) GET and POST. By explicitly allowing only what’s needed (and likewise only the custom headers your clients will send), you reduce the risk surface. This way, even if an attacker finds an unexpected cross-origin endpoint, they are limited to the methods that have been deemed safe/necessary. Likewise, the response header Access-Control-Max-Age is used to indicate how long the browser can cache the preflight response – setting it to a reasonable duration (to minimize needless preflights) is fine, but keep in mind that if you need to revoke or tighten CORS policy, cached preflight results could temporarily allow old behavior. Generally, that’s a minor consideration and typically short (a few minutes or hours) cache times are used, or it’s omitted so that each session re-checks relatively often.
Credential handling is another critical aspect. If the application does not need to support cross-origin requests that include cookies or HTTP authentication, do not set Access-Control-Allow-Credentials: true. By default, browsers do not send cookies on cross-origin requests, and even when a request is made with credentials, the browser will not expose the response to script unless the server explicitly opts in. Only set Allow-Credentials: true when you intend to allow the browser to include user credentials in the cross-origin call, and even then, ensure that Access-Control-Allow-Origin is not a wildcard. In fact, the CORS specification and modern browsers enforce that when credentials are allowed, the allowed origin must be an explicit origin; the value “*” is invalid in that scenario (MDN: Access-Control-Allow-Credentials). This means that if you mistakenly configure Access-Control-Allow-Origin: * alongside Access-Control-Allow-Credentials: true, the policy will not work as intended — browsers will reject the response, often logging an error like “Cannot use wildcard in Access-Control-Allow-Origin when credentials flag is true.” While that may prevent exploitation, it effectively breaks functionality and might go unnoticed by developers (or worse, they may try to work around it incorrectly). The secure approach is: if credentials are needed, always tie the allow-origin to a specific origin (or a dynamically validated one) and never *. If credentials aren’t needed, do not send Allow-Credentials: true at all; then a wildcard may be acceptable for truly public data. Also, for credentialed cross-origin scenarios, consider additional CSRF protections: even though CORS allows the cross-site request with cookies, the server can still enforce anti-CSRF tokens or SameSite cookie attributes to ensure that only its intended front-end can actually use a session cookie. Combining those techniques provides defense in depth.
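The wildcard-with-credentials rule above can be modeled as a small predicate. This is a hedged sketch of the check a browser applies before exposing a response to script (the function name is illustrative, and real browsers apply further checks):

```python
def browser_exposes_response(allow_origin: str,
                             allow_credentials: bool,
                             request_origin: str) -> bool:
    """Would the browser let script read this cross-origin response?"""
    if allow_credentials:
        # With credentials, "*" is invalid: the server must name the
        # exact requesting origin.
        return allow_origin == request_origin
    # Without credentials, either a wildcard or an exact match works.
    return allow_origin == "*" or allow_origin == request_origin
```

This is why the wildcard-plus-credentials misconfiguration "fails closed" in the browser while still signaling a policy mistake on the server.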
Proper server-side handling of preflight (OPTIONS) requests is also a defensive measure. Ensure that your server responds correctly to the initial OPTIONS request for resource and method combinations that you do intend to allow. For example, if you allow PUT from a certain origin, the preflight for PUT should respond with Access-Control-Allow-Methods: PUT (and the relevant origin allowed). Sometimes developers forget to configure their server or framework to handle OPTIONS, leading them to quickly allow “OPTIONS *” from all origins which, if done without constraint, could inadvertently allow more than intended. The defense here is to implement the preflight logic in alignment with your policy: many frameworks do this for you when you configure allowed origins and methods, but if writing manual logic, treat the OPTIONS path with equal scrutiny. Only return the allowed origins/methods/headers combination that matches the request’s Origin and Access-Control-Request-Method, rather than blanket allowing everything. A robust implementation might check that the Access-Control-Request-Method header in the preflight is one of the permitted methods and only then echo the appropriate allow headers. If the method is not allowed for that origin, the server should either not set CORS headers or explicitly deny it (some frameworks simply return a 403 for disallowed preflights or just no CORS headers, causing the browser to deny).
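A preflight handler in the spirit of the above might look like this sketch. The per-origin policy, status codes, and function name are illustrative assumptions, not a specific framework’s API:

```python
# Hypothetical policy: which methods each trusted origin may use.
POLICY = {
    "https://app.example.com": {"GET", "POST", "PUT"},
}

def handle_preflight(origin: str, requested_method: str) -> tuple:
    """Answer an OPTIONS preflight only for an allowed origin/method pair."""
    allowed_methods = POLICY.get(origin, set())
    if requested_method in allowed_methods:
        return 204, {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": ", ".join(sorted(allowed_methods)),
            "Access-Control-Max-Age": "600",  # cache preflight briefly
            "Vary": "Origin",
        }
    # Disallowed: return no CORS headers, so the browser refuses to
    # send the actual request.
    return 403, {}
```

Checking Access-Control-Request-Method against a per-origin set, rather than blanket-allowing OPTIONS, keeps the preflight answer aligned with the real policy.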
Another mitigation beyond direct CORS settings is to employ additional access control on sensitive data. For instance, even if you have a proper CORS allowlist, you might want to require an API key or an OAuth token for certain endpoints in addition to cookies. This way, even if an attacker somehow fools CORS or finds a mistake, they would still lack a second factor to actually retrieve data. This is more of a general API security measure but can compensate for CORS mistakes. However, one must not be complacent: if your design relies solely on CORS for security, that’s a red flag. CORS should be one layer of control, not the only control, because a determined attacker might find ways to exploit logic bugs or conditions where CORS is not correctly applied. Always authenticate and authorize requests on the server as if CORS did not exist, because CORS doesn’t authenticate — it just gates browser disclosure. For example, if an API endpoint should only be used by certain clients, consider embedding a client identifier or expecting a particular header (and explicitly allow that header via Access-Control-Allow-Headers) to positively identify the calling application. Such measures, when combined with CORS, make exploitation much harder: an attacker would not only need to get past CORS but also guess or obtain the additional secret or key.
Finally, implement the Vary: Origin header on responses that include CORS headers. This is a subtle, often overlooked control that has to do with caching. If your responses are cached by an intermediate proxy or CDN, and you serve different Access-Control-Allow-Origin values depending on the request’s Origin, you want the cache to know that variations in the Origin header result in different responses. The Vary: Origin header instructs caches to keep separate copies of the response for each Origin. Without this, a response cached for one origin might be served to a request from another origin, potentially causing either a functional failure or a security issue (e.g., a cached response might incorrectly have Access-Control-Allow-Origin: trusted.com, which is useless if served to a client from another-site.com, or vice versa, it might erroneously allow something it shouldn’t). Many web frameworks auto-add Vary: Origin when you use their CORS facilities properly. Ensuring it is present is a defense against weird corner-case bugs and ensures that your carefully set policy is uniformly enforced even with caching layers in play.
In summary, the defensive strategy for CORS is about precision and explicitness: explicitly specify who (which origins) can access what (which methods, headers, etc.), avoid blanket allowances, and be mindful of the context (credentials and caches). Always test your configuration thoroughly. Using automated tests or scanners to verify that only the intended origins are indeed allowed can help catch mistakes early. And keep policies up to date; if an origin should no longer be allowed (for instance, an integration is decommissioned), remove it promptly so that stale permissions do not linger. By combining strict CORS policies with standard authentication/authorization and diligent coding practices, you can achieve cross-origin integrations that are robust and secure.
Secure-by-Design Guidelines
Secure-by-design for CORS means incorporating cross-origin considerations right from the architecture and design phase of a web application or API, rather than treating CORS as an afterthought or just a deployment configuration tweak. A key guideline is to minimize the need for CORS in the first place. If possible, design your application so that critical interactions happen within the same origin. For example, hosting your front-end and back-end under the same domain (even if on different subdomains or paths) can eliminate the need for CORS entirely. Many applications avoid CORS complexities by using reverse proxies or same-site deployments (for instance, serving the front-end from www.example.com and API under www.example.com/api or api.example.com with proper domain cookies). If you control the environment, this design decision can greatly reduce the cross-domain attack surface. However, in modern architectures where microservices and separate front-end domains are common, CORS is often unavoidable. In those cases, the design should explicitly document which origins need access to which services. Treat the list of allowed origins as a part of your security requirements specification. During threat modeling exercises, include cross-origin interactions as entry points and ask “Which origins should legitimately be able to talk to this service, and what data can flow as a result?” By being explicit at design time, you avoid the pitfall of developers later using a catch-all solution like Access-Control-Allow-Origin: * just to “make things work”.
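As a concrete illustration of the reverse-proxy approach, a front-end served from www.example.com can reach a back-end on another host through a path prefix, so the browser only ever sees one origin and no CORS headers are needed. This is a sketch only; the host names and internal backend address are hypothetical:

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;

    # Static front-end assets served from the public origin
    location / {
        root /var/www/app;
    }

    # Same-origin path that proxies to the internal API backend; the browser
    # sees only https://www.example.com, so the same-origin policy is never crossed.
    location /api/ {
        proxy_pass http://internal-api.backend.local:8080/;
        proxy_set_header Host $host;
    }
}
```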
When cross-origin access is indeed necessary by design, prefer a centralized approach to CORS configuration. Rather than each developer or each microservice setting their own CORS policy ad-hoc, establish a common pattern or library that all services use to define allowed origins. This ensures consistency and ease of review. For example, an organization might create a configuration file or service that lists all trusted origins and each application pulls from that list. Then, if you need to add or remove an origin (say you have a new partner integration or you retire one), it can be done in one place and propagated. This approach also makes audits easier: security teams can review the allowed origins in one consolidated view and verify they are all expected. Many secure design frameworks encourage externalizing such configuration. Additionally, leverage framework features: for instance, in Spring Boot (Java), you can use global CORS configuration via a filter or the WebMvcConfigurer, which centralizes policy for all endpoints. In .NET Core, you define named CORS policies during startup that apply application-wide. By using these features, you reduce the risk that a single endpoint is misconfigured differently from others because developers didn’t all follow the same practice.
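One lightweight way to externalize the list is to load it from shared configuration rather than hard-coding origins in each service. The environment-variable name and JSON format below are an assumption for illustration, not a standard:

```python
import json
import os

def load_allowed_origins(env=None):
    """Read the shared origin allowlist from configuration.

    Expects a JSON array, e.g. CORS_ALLOWED_ORIGINS='["https://app.trusteddomain.com"]'.
    Defaults to an empty set, i.e. deny all cross-origin access.
    """
    env = env if env is not None else os.environ
    raw = env.get("CORS_ALLOWED_ORIGINS", "[]")
    return set(json.loads(raw))
```

Because the default is an empty set, a service that is never configured simply allows no cross-origin access, which matches the secure-default principle discussed below.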
Another design guideline is to segregate sensitive functionality onto domains that are not exposed via CORS at all. If you have parts of your application that never need to be accessed by JavaScript from other origins (for example, administration endpoints or purely internal APIs), make sure those endpoints do not even include CORS headers. Essentially, design the deployment so that only the intended cross-domain integration points have CORS enabled, and everything else stays under same-origin policy protection. This can mean having separate subdomains or hostnames for public API vs private API. The private ones would simply not set any CORS headers, ensuring the browser will block any attempt to call them from external contexts. By design, this limits the exposure. It’s analogous to having different firewall rules for different services; here the browser’s SOP is the firewall and by not poking a hole (via CORS) for certain services, you keep them isolated.
One should also consider user experience and safety in the design. For example, if your application is going to allow third-party domains to embed certain content or make calls (perhaps via an open API for an ecosystem), think about how you will restrict and monitor that. You might design a system where third-party integrations must register and you explicitly add their origin to an allowlist (possibly dynamically). Such a registration process (even if informal) means you are consciously deciding “yes, we will allow partnerX.com to access resource Y.” Additionally, you might decide at design time to implement an authorization layer per origin – for instance, maybe partnerX’s requests include an origin-specific token. That way, even if another origin somehow sneaks into the CORS allow list, it wouldn’t have the token and still couldn’t get data. This is moving beyond pure CORS and more into the realm of API design, but it stems from designing with the principle that cross-origin requests are dangerous by default and need layered safeguards.
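The per-origin credential idea can be sketched as follows (the registry contents and token format are purely illustrative): appearing in the CORS allowlist is necessary but not sufficient; the caller must also present the token issued to it at registration time:

```python
import hmac

# Hypothetical registry populated when a partner registers their integration.
ORIGIN_TOKENS = {
    "https://partnerX.com": "tok-partnerX-9f2a",
}

def is_authorized_partner(origin, presented_token):
    expected = ORIGIN_TOKENS.get(origin)
    if expected is None:
        return False  # origin was never registered
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, presented_token)
```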
Designing for security also means anticipating misconfiguration and having failsafes. One secure-by-design principle is secure defaults: ensure that unless configured otherwise, your server code does not send any CORS headers (which means it defaults to secure, denying all cross-origin access). Then explicitly enable only what’s needed. Many frameworks already do this (they won’t send CORS headers unless you add configuration). The danger is when developers in a rush set something broad globally to fix an issue – as a design mitigation, code reviews and gating should catch if someone attempts to set Access-Control-Allow-Origin: * in a commit. If you integrate that into your design and development guidelines, developers will know not to do it without approval. In projects following OWASP ASVS, for example, the verification requirements for configuration might include “Verify that cross-origin resource sharing is only enabled for specific trusted domains” OWASP ASVS 4.0. A secure design would bake that requirement into the Definition of Done for a feature that introduces cross-domain communication.
Lastly, a secure design will consider future-proofing and maintainability. Document within your architecture which services or endpoints have CORS enabled and why. Provide this documentation to new team members and include it in security reviews. This way, if someone proposes to open up an endpoint to a new origin, it goes through a design review process. The design should also include how you will test and verify CORS (for instance, as part of integration testing, you might have a test origin make a request expecting failure to ensure disallowed origins are indeed disallowed). Including such tests is part of secure-by-design because it treats the CORS policy as an integral part of the system’s behavior, not just an ops setting. Given the complexity and the stakes, building security in at the design phase for CORS will save a lot of headaches down the line, and it will ensure that the resulting implementation is both functional and secure.
Code Examples
In this section, we illustrate how to configure CORS securely and how insecure configurations might look, across several programming languages. Each example highlights common implementation patterns, contrasting a flawed approach (that could introduce vulnerabilities) with a recommended secure approach. The examples include comments and explanations for clarity. These are simplified snippets meant to demonstrate concepts – in a real application, details might differ (for instance, you might use environment configuration for allowed origins, or more complex logic), but the core ideas of what to do or avoid remain the same.
Python (Flask example)
In Python web frameworks like Flask, developers often use the Flask-CORS extension or manual header setting to enable CORS. An insecure implementation might allow all origins and credentials indiscriminately, whereas a secure implementation restricts origins explicitly.
Insecure Flask example – allowing all origins with credentials (dangerous):
from flask import Flask, jsonify
from flask_cors import CORS

app = Flask(__name__)

# Insecure: This will allow any origin to make requests, and even allow cookies.
CORS(app, supports_credentials=True)  # Access-Control-Allow-Origin: * (by default), with credentials

@app.route("/user_data")
def user_data():
    # Example sensitive data
    return jsonify({"name": "Alice", "email": "[email protected]"})
In the above Flask code, using CORS(app, supports_credentials=True) without specifying allowed origins tells the extension to allow all origins. Because the CORS specification forbids the literal * when credentials are enabled, Flask-CORS satisfies such requests by reflecting whatever Origin the client sent (exact behavior can vary by version), which amounts to "any origin, with credentials". In effect, any website can issue a request to this Flask app and retrieve /user_data (if the user’s session cookie is present), compromising sensitive information. The developer’s intent might have been to solve a CORS error quickly, but this blanket approach opens a security hole.
Secure Flask example – allowing only a specific trusted origin:
from flask import Flask, request, jsonify, make_response

app = Flask(__name__)

# Secure: define an explicit allowlist of origins
ALLOWED_ORIGINS = {"https://app.trusteddomain.com"}

@app.route("/user_data")
def user_data():
    user_info = {"name": "Alice", "email": "[email protected]"}
    origin = request.headers.get("Origin")
    response = make_response(jsonify(user_info))
    if origin in ALLOWED_ORIGINS:
        # Only echo back the origin if it's in the allowlist
        response.headers["Access-Control-Allow-Origin"] = origin
        response.headers["Access-Control-Allow-Credentials"] = "true"
        response.headers["Access-Control-Allow-Methods"] = "GET"
    return response
In this secure example, we maintain an ALLOWED_ORIGINS set (with one trusted origin in this case). When serving the /user_data endpoint, the code checks the Origin header of the request. If the origin is one of the trusted ones, it sets the CORS headers to allow that origin and includes credentials (assuming we do want the user’s cookies to be included for authentication). We explicitly specify allowed methods as well (only GET here, since this endpoint only needs to handle GET). If the origin is not trusted, the code simply returns the response without any CORS headers, meaning the browser will not allow the calling script to access the data. By doing this, even though the endpoint is publicly reachable, only scripts from https://app.trusteddomain.com will be able to consume the response in a browser. This design prevents malicious sites from obtaining the data. Note that in a real Flask application, you could achieve the same result using Flask-CORS by passing a list of origins (e.g., CORS(app, supports_credentials=True, origins=["https://app.trusteddomain.com"])), which would automate this check internally. The key is that we are not allowing the universal wildcard for a sensitive endpoint.
JavaScript (Node.js with Express)
In a Node.js environment, the Express framework is commonly used alongside either custom middleware or the official cors package to configure CORS. A naive implementation might simply allow every origin for convenience, whereas a proper implementation restricts origins.
Insecure Express example – open to all origins and credentials:
const express = require('express');
const app = express();

// Insecure: Allow any origin and credentials (cookies) for all routes
app.use((req, res, next) => {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Credentials', 'true');
  res.setHeader('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE,OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
  // Note: '*' with credentials is not spec-compliant; browsers will block it.
  next();
});

app.get('/account/details', (req, res) => {
  // Sensitive account info
  res.json({ accountNumber: '123456', balance: 1000 });
});
This Express middleware demonstrates a dangerously permissive CORS policy. It unconditionally sets Access-Control-Allow-Origin to *, meaning it intends to allow any domain, and it sets Access-Control-Allow-Credentials: true, intending to allow cookies. This combination is fundamentally insecure, and as the inline comment notes, compliant browsers will refuse to honor it, so the developer may not even realize the configuration is broken. The real danger is the likely follow-up: seeing that * "doesn't work" with credentials, a developer often "fixes" it by reflecting the request's Origin header, which turns a broken-but-blocked policy into a fully exploitable one. Either way, the intent here was to allow everything, which is exactly what we want to avoid.
Secure Express example – using an allowlist of origins and proper middleware:
const express = require('express');
const app = express();

const allowedOrigins = ['https://app.trusteddomain.com', 'https://admin.trusteddomain.com'];

app.use((req, res, next) => {
  const origin = req.headers.origin;
  if (allowedOrigins.includes(origin)) {
    res.setHeader('Access-Control-Allow-Origin', origin);
    res.setHeader('Access-Control-Allow-Credentials', 'true');
    res.setHeader('Access-Control-Allow-Methods', 'GET,POST,OPTIONS');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
    // Optionally, set Access-Control-Max-Age for caching preflight
  }
  // If origin is not allowed, we set no CORS headers (browser will block it)
  next();
});

app.get('/account/details', (req, res) => {
  // Sensitive account info
  res.json({ accountNumber: '123456', balance: 1000 });
});
In the secure Express snippet, we maintain an array of allowed origins. The middleware checks if the incoming request’s Origin header is in this allowlist. If it is, we set the CORS response headers to explicitly allow that origin, permit credentials, and restrict methods to those needed (GET, POST in this case, plus OPTIONS which is used for preflight checks). We also specify allowed headers – here just Content-Type as an example (this would be expanded if the client needed to send custom headers). If the origin is not in the list, the code does nothing special, meaning no Access-Control-Allow-Origin is sent. The browser, upon not seeing the header, will enforce the default same-origin policy and block the calling script from accessing the response. This approach ensures only known, trusted domains (like your main app and maybe an admin portal in this example) can use the API. By not using *, we allow credentials safely. This kind of logic could also be achieved using the cors npm package by supplying an origin function or list that performs the same check – for instance:
app.use(require('cors')({
  origin: function(origin, callback) {
    if (!origin || allowedOrigins.indexOf(origin) !== -1) {
      callback(null, true);
    } else {
      callback(new Error('Not allowed by CORS'));
    }
  },
  credentials: true
}));
Using the official package can reduce errors, but under the hood the principle remains the same: check the origin against an allowlist and only allow it if permitted. Note that the !origin branch deliberately permits requests with no Origin header, such as same-origin requests or non-browser clients like curl; those are not cross-origin browser requests, so CORS does not apply to them.
Java (Spring Boot with Spring Web MVC)
Java applications often use Spring’s built-in facilities to handle CORS. Spring Boot (and the underlying Spring Web MVC) allows both annotation-based and global configuration. An insecure setup might inadvertently allow all origins, whereas a secure one will explicitly define allowed origins.
Insecure Spring Boot example – overly permissive @CrossOrigin usage:
import org.springframework.web.bind.annotation.*;

@CrossOrigin(origins = "*", allowCredentials = "true", allowedHeaders = "*",
             methods = {RequestMethod.GET, RequestMethod.POST})
@RestController
@RequestMapping("/api")
public class UserController {

    @GetMapping("/profile")
    public Profile getProfile() {
        // Returns sensitive user profile data
        return getAuthenticatedUserProfile();
    }
}
In this Java example, the @CrossOrigin annotation is applied at the class (or method) level. It explicitly sets origins = "*", intending to allow any domain, and allowCredentials = "true", allowing cookies. It also allows all headers and GET/POST methods for cross-origin requests. This is insecure for the same reasons discussed: any origin would be permitted to access the /api/profile data, with session cookies attached. In fact, recent versions of Spring reject this combination outright: configuring origins = "*" together with allowCredentials = "true" fails at startup with an IllegalArgumentException suggesting allowedOriginPatterns instead. But the configuration still documents an attempt to be completely open; if it were honored, it would constitute a critical vulnerability, as any website could fetch user profile data. This kind of annotation might sneak into code because a developer is testing something or doesn’t know the exact origin at development time and uses * for convenience. It’s a dangerous configuration for any endpoint returning sensitive information.
Secure Spring Boot example – restricted origins via configuration:
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class WebSecurityConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        // Apply CORS rules only to the /api/** endpoints
        registry.addMapping("/api/**")
                .allowedOrigins("https://app.trusteddomain.com")
                .allowedMethods("GET", "POST")
                .allowedHeaders("Content-Type")
                .allowCredentials(true);
                // .maxAge(3600) can be set to cache preflight responses for 1 hour
    }
}
This Spring Boot configuration class demonstrates a secure CORS setup. Instead of using @CrossOrigin with a wildcard directly in the controller, we define a global CORS policy. We restrict it to URL paths under /api/** (assuming those are the endpoints that need cross-origin access). The allowedOrigins is set to a specific trusted domain (app.trusteddomain.com). Only GET and POST methods are allowed cross-origin, which matches what our API should support. Only the Content-Type header is allowed from the client (if, for example, the client sends JSON, it will set Content-Type: application/json, which we allow; other headers like authentication tokens could be added here if needed). We enable credentials, meaning if the user is logged in on that domain, cookies will be accepted – but crucially, because we are not using *, this is safe and in line with the spec. Spring will ensure that the response includes Access-Control-Allow-Origin: https://app.trusteddomain.com (and also automatically add a Vary: Origin header when multiple origins are possible, which is helpful for caching). If a request comes from any other origin, it will simply not get these CORS headers, and the browser will block the response. This configuration is easy to manage and clearly outlines the expected integration domain. If we needed to add another allowed origin (say a new front-end at https://mobile.trusteddomain.com), we can just add it to the list in one place. By using Spring’s WebMvcConfigurer, we also centralize CORS logic separate from the business controllers, which reduces the likelihood of accidental inconsistency or forgetting to secure one endpoint.
.NET/C# (ASP.NET Core)
In ASP.NET Core (the modern .NET framework for web applications/Web APIs), CORS is typically configured in the Startup class or via the dependency injection container. An insecure setup might attempt to allow all origins freely, whereas a secure one will use a named policy with specific origins.
Insecure ASP.NET Core example – misconfigured AllowAnyOrigin with credentials:
public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
    {
        options.AddPolicy("OpenCORS", builder =>
        {
            builder.AllowAnyOrigin()
                   .AllowAnyHeader()
                   .AllowAnyMethod()
                   .AllowCredentials();
        });
    });
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseCors("OpenCORS");
    // ... other middleware and endpoint mappings
}
In this .NET Core snippet, we add a CORS policy named "OpenCORS" that calls AllowAnyOrigin() along with AllowAnyHeader() and AllowAnyMethod(), and then also calls AllowCredentials(). This code is insecure in intent – it’s trying to allow any origin to access the API with no restrictions, including allowing credentials. In fact, ASP.NET Core’s CORS middleware will reject this configuration at runtime: by design, AllowAnyOrigin and AllowCredentials cannot be used together in ASP.NET Core because it knows that violates the CORS specification. The developer would encounter an error or warning and might then “fix” it by specifying origins or removing one of the calls. However, the snippet is representative of an insecure approach (trying to completely open CORS). If this were forced to run (imagine a hypothetical scenario where the framework didn’t protect you), it would be a huge vulnerability: any website could send requests to your API endpoints and receive responses including cookies or auth tokens. The presence of AllowAnyHeader and AllowAnyMethod makes it even more permissive by not restricting what can be sent. The correct approach is not to do this, but rather to define specific allowed origins.
Secure ASP.NET Core example – using a restricted CORS policy:
public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
    {
        options.AddPolicy("RestrictedCORS", builder =>
        {
            builder.WithOrigins("https://app.trusteddomain.com")
                   .WithHeaders("Content-Type")
                   .WithMethods("GET", "POST")
                   .AllowCredentials();
        });
    });
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseCors("RestrictedCORS");
    // ... other middleware and endpoint mappings
}
In the secure ASP.NET Core setup, we define a policy named "RestrictedCORS". Instead of allowing any origin, we use WithOrigins to specify the exact origin that is trusted (app.trusteddomain.com). We limit the allowed request headers to those needed (Content-Type in this example) with WithHeaders, and the allowed methods to GET and POST via WithMethods. We do allow credentials so that cookies or Windows auth, etc., can be included from that origin – but because only one origin is specified, ASP.NET will handle that by echoing that origin in the Access-Control-Allow-Origin header and it’s fully in line with the spec. The UseCors("RestrictedCORS") call in Configure globally applies this policy to all requests (you could also apply it to specific endpoints or controllers if desired). With this configuration, any request coming from https://app.trusteddomain.com will get the appropriate CORS headers in the response, and the browser will allow the calling script to read the response. Requests from other origins won’t match the policy, so the middleware will simply not add any CORS headers, meaning those requests will not be allowed by the browser. This ensures a tight cross-origin policy. It’s also easy to extend: if you need to allow more origins in the future, you can add them to the WithOrigins list (e.g., .WithOrigins("https://app.trusteddomain.com", "https://admin.trusteddomain.com")). The clarity of this code makes it evident what the policy is, and the strong typing of the CORS configuration in ASP.NET Core helps prevent mistakes (as noted, it wouldn’t let you combine any-origin with credentials). The outcome is a secure default that only the specified site can invoke the API cross-origin.
Pseudocode (Generalized Pattern)
To reinforce the general approach in a language-agnostic way, consider the following pseudocode patterns for implementing CORS. The insecure version shows the wrong approach of reflecting origins without checks or allowing all, and the secure version demonstrates a proper allowlist check and response.
Insecure pseudocode – unconditional reflection of Origin:
# Pseudocode for an insecure CORS handling
allowed_origins = "*"

function handle_request(request):
    origin = request.headers.get("Origin")
    if origin is not null and allowed_origins == "*":
        # Reflect the origin without any validation
        response.headers["Access-Control-Allow-Origin"] = origin
        response.headers["Access-Control-Allow-Credentials"] = "true"
        response.headers["Access-Control-Allow-Methods"] = "GET, POST, PUT, DELETE"
        response.headers["Access-Control-Allow-Headers"] = "*"
    # ... process the request and generate response body ...
    return response
This pseudocode epitomizes what not to do. The allowed_origins is set to "*" meaning essentially “no restriction”. When any request comes in with an Origin header, this code reflects that origin straight into Access-Control-Allow-Origin. It also indiscriminately allows credentials and all methods/headers. There is no check against a list of trusted origins. If this pattern is used, any requesting origin gets carte blanche access. It’s basically the logic behind many common vulnerabilities: Access-Control-Allow-Origin becomes equal to whatever the client sent, without asking “should we allow this origin?”. The allowed_origins == "*" is a flag here that the developer intended universal access. In reality, if credentials are involved, sending * as a literal value might not work (so one might see code that instead sets allowed_origins = "*" but then does Allow-Origin = origin if credentials are needed, as above). In either case, the problem is the lack of an explicit check. This pseudocode would allow a malicious origin to steal data, as it will always satisfy the condition and insert the malicious origin in the response headers.
Secure pseudocode – explicit allowlist validation:
# Pseudocode for a secure CORS handling
allowed_origins = {"https://app.trusteddomain.com", "https://partner.example.com"}

function handle_request(request):
    origin = request.headers.get("Origin")
    if origin is not null and origin in allowed_origins:
        response.headers["Access-Control-Allow-Origin"] = origin
        response.headers["Access-Control-Allow-Credentials"] = "true"
        response.headers["Access-Control-Allow-Methods"] = "GET, POST"
        response.headers["Access-Control-Allow-Headers"] = "Content-Type"
    # Optionally, if origin is not in allowlist and an Origin header exists:
    # you might decide to return an error or simply omit CORS headers.
    # Here we do nothing (omit headers), causing browser to block disallowed origin.
    # ... process the request ...
    return response
The secure pseudocode maintains a set of allowed_origins. When handling a request, it retrieves the Origin header. Only if the origin is in the predefined set does it add the CORS headers to the response. Those headers include a specific Access-Control-Allow-Origin matching the request’s origin (no wildcards unless the allowlist explicitly contained a "*" which ours does not), and necessary flags like credentials (true in this case because we assume these endpoints require authentication and we want to allow cookies). The allowed methods and headers are explicitly listed as needed. If the origin is not allowlisted, the code path simply doesn’t attach any CORS headers. The effect is that the browser will not allow the client to read the response (the request might still technically hit the server and even get a 200 OK, but the browser will not make the response available to the JavaScript calling it). In some designs, one might choose to actively reject such requests (for example, return a 403 Forbidden if an untrusted origin attempts to access a protected resource). That can be an added layer: it informs an attacker immediately that their origin is not allowed. However, security-wise, it’s not strictly necessary because absence of CORS headers has the same effect on the browser side. Some implementations do a redirect or an error on disallowed origins to avoid wasting server resources on a request that the client won’t use. In any case, the essential part is the allowlist check. This pseudocode will ensure that only app.trusteddomain.com and partner.example.com (as examples) are honored. If in the future, say, partner.example.com should no longer be allowed, the developers would remove it from the allowed_origins set, and it would automatically be refused from then on. Conversely, adding a new origin is a conscious decision to insert into the allowlist. 
This design makes CORS policy explicit and easy to reason about, rather than buried in some implicit wildcard or reflection logic.
The secure pseudocode also delineates that only GET and POST are allowed (for cross-origin calls) and only the Content-Type header is allowed. If a malicious script attempted to use a different method (say DELETE) or include a custom header (say X-Admin: true), the preflight request from the browser would not get an Access-Control-Allow-Methods: DELETE or that custom header in Access-Control-Allow-Headers. Therefore, the browser would disallow the actual request. This shows how fine-grained policy can mitigate certain attacks: even if an attacker found a way to call the API, if they tried to exploit it by sending unusual headers or methods, the CORS policy could stop them. Of course, one should not solely rely on CORS to prevent, for instance, a DELETE— the server authentication/authorization should also forbid the attacker’s action — but having defense in depth with CORS not even letting the request through to the server script is beneficial.
Detection, Testing, and Tooling
Detecting misconfigured CORS in a web application can be achieved through a combination of manual testing and automated tools. A straightforward manual test is to simulate cross-origin requests to your application using a tool or script where you can control the Origin header. For example, testers often use curl or Postman to send requests with an Origin: attacker.com header to various endpoints of the target application and then inspect the Access-Control-Allow-Origin in the response. If the response echoes attacker.com (or returns *), that’s an immediate red flag that the server is allowing a potentially unauthorized origin. One common pattern to test is sending an Origin value that you suspect might bypass filters, such as https://trusted.com.attacker.com (as discussed earlier, to catch substring matches) or null. If any such request returns a permissive CORS header, you likely found a vulnerability. The OWASP Web Security Testing Guide suggests exactly this approach: enumerate key endpoints (especially those that require authentication or expose sensitive data) and try requests from untrusted origins, observing the behavior OWASP WSTG – Testing CORS. If the web application is properly configured, requests from unauthorized origins should receive either no Access-Control-Allow-Origin header at all or a fixed value that does not match the attacker’s origin, so the browser blocks the response. (Note that a server responding with Access-Control-Allow-Origin: null is itself a finding: attackers can generate Origin: null requests from sandboxed iframes, so allowing null is nearly as bad as reflecting arbitrary origins.)
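A small helper along these lines can classify what a probe came back with. The verdict labels are ad hoc, and response_headers is assumed to be a plain dict of header name to value:

```python
def cors_verdict(origin_sent, response_headers):
    """Classify a response to a request that carried a hostile Origin header."""
    acao = response_headers.get("Access-Control-Allow-Origin")
    creds = response_headers.get("Access-Control-Allow-Credentials") == "true"
    if acao is None:
        return "blocked"                  # no header: browser enforces SOP
    if acao == origin_sent:
        return "reflected-origin"         # arbitrary origin echoed back: red flag
    if acao == "null":
        return "null-allowed"             # forgeable via sandboxed iframes
    if acao == "*":
        return "wildcard-with-credentials" if creds else "wildcard"
    return "fixed-value"                  # some other (presumably trusted) origin
```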
Modern browsers provide useful information in their developer consoles for CORS issues. If you’re testing an application via a front-end, you might see error messages in the console like “No 'Access-Control-Allow-Origin' header present” or “The value of the 'Access-Control-Allow-Origin' header is wildcard '*' which is not allowed with credentials.” While these messages are mostly meant for developers debugging integration, they can also hint at misconfigurations. For example, the wildcard-with-credentials error indicates the server tried to allow everything while also allowing credentials, which is exactly the kind of misconfiguration seen in the insecure examples above. As a tester, if you see that error, try to determine what the server is actually sending and why. Maybe it’s sending * and the browser refused; that tells you the server is trying to be maximally permissive and may be one misguided “fix” away from exploitability, for example if a developer reacts to the error by reflecting origins instead. Bug bounty hunters frequently find exploitable policies by following exactly this trail of evidence that someone tried to open CORS entirely.
Automated scanning tools also exist for CORS. Many web vulnerability scanners (OWASP ZAP, Burp Suite, etc.) include passive or active checks for common CORS issues. They will often flag responses that include Access-Control-Allow-Origin: *, especially if they also include Access-Control-Allow-Credentials: true. They may also try sending their own origin and see if it’s mirrored back. For example, an automated scanner might send Origin: https://random-test-domain.com and then highlight if the response contains Access-Control-Allow-Origin: https://random-test-domain.com. Burp Suite’s scanner, for instance, has specific signatures for detecting “CORS misconfiguration” and will report it with details on what origin was used and how the server responded. There are also specialized tools/scripts like CORStest (a Python script available on GitHub) that enumerate a list of common exploit patterns (e.g., various forms of null, subdomain tricks, etc.) and test them against a target, systematically identifying misconfigurations. These tools are handy because they can try variations you might not think of manually, such as multiple Origin headers (which is not strictly allowed by spec but might reveal weird handling), or origins with unusual schemes (like Origin: http://localhost, to see if the site whitelists development origins).
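The probe variants these tools generate can be sketched as a pure function. This list is illustrative and similar in spirit to tools like CORStest, not its actual output; `attacker.example` is a placeholder domain.

```python
# Sketch: build a list of Origin-header variants to probe a server's
# origin-validation logic, given a trusted origin like "https://trusted.com".

def origin_variants(trusted: str, attacker: str = "https://attacker.example") -> list:
    scheme, host = trusted.split("://", 1)
    return [
        attacker,                                # arbitrary untrusted origin
        "null",                                  # sandboxed-iframe / file origin
        f"{scheme}://{host}.attacker.example",   # suffix trick: trusted.com.attacker.example
        f"{scheme}://attacker{host}",            # prefix trick: attackertrusted.com
        f"http://{host}",                        # scheme downgrade of the trusted origin
        f"{scheme}://evil-{host}",               # substring-match bypass
        "http://localhost:3000",                 # leftover dev-environment whitelist entry
    ]

probes = origin_variants("https://trusted.com")
```

Each probe would then be sent as the Origin header and the response classified as shown earlier.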
Another detection approach is to review the application’s code or configuration. Security code review can pinpoint insecure patterns: for instance, in a Java code review, seeing response.setHeader("Access-Control-Allow-Origin", "*") is an obvious problem. In a JavaScript Node review, noticing app.use(cors()) with no options (which defaults to allowing everything) would stand out. Similarly, configuration files might reveal allowed origins. For example, some .NET web.config or appsettings might list CORS rules, or Nginx/Apache configs could have add_header Access-Control-Allow-Origin "*";. Reviewing those can be part of a security audit process.
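A minimal grep-style pass over source and config files can surface these patterns for review. The regexes below are illustrative and will miss dynamically-built headers; treat hits as review prompts, not verdicts.

```python
# Sketch: flag risky CORS patterns in source or configuration text.
import re

RISKY_PATTERNS = [
    re.compile(r'Access-Control-Allow-Origin["\']?\s*[,:]\s*["\']\*'),  # literal wildcard
    re.compile(r'\bcors\(\s*\)'),        # e.g. Express app.use(cors()) with defaults
    re.compile(r'\bCORS\(\s*app\s*\)'),  # Flask-CORS initialized with defaults
    re.compile(r'AllowAnyOrigin\s*\('),  # ASP.NET Core
]

def scan_text(text: str) -> list:
    """Return the patterns that matched, for a reviewer to inspect."""
    return [p.pattern for p in RISKY_PATTERNS if p.search(text)]

hits = scan_text('response.setHeader("Access-Control-Allow-Origin", "*")')
```

A CI step could run this over changed files and fail the build (or warn) on any hit.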
During testing, it’s also important to check preflight responses. If an application is supposed to allow certain custom headers or methods, ensure that the OPTIONS request (the preflight) responds correctly with Access-Control-Allow-Methods and Access-Control-Allow-Headers. Misconfigured preflights often manifest as functionality bugs (the legitimate front-end can’t call the API because preflight fails), but they could also indicate an incomplete security configuration. For instance, if an endpoint is supposed to allow cross-origin GET but the preflight is not handled at all, the front-end might not function, and a developer might quickly “fix” it by adding a blanket handler that allows everything. As a security tester, catching that early (and guiding a correct fix) is valuable.
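A preflight response can be checked against the exact grants the front-end needs, flagging both gaps (functionality bugs) and wildcards (over-permission). The header names are standard; the policy values and function shape are assumptions for illustration.

```python
# Sketch: verify an OPTIONS (preflight) response grants exactly what is
# needed and nothing more.

def check_preflight(headers: dict, needed_methods: set, needed_headers: set):
    """Return (ok, problems) for a preflight response's grant headers."""
    problems = []
    allowed_m = {m.strip().upper() for m in
                 headers.get("Access-Control-Allow-Methods", "").split(",") if m.strip()}
    allowed_h = {h.strip().lower() for h in
                 headers.get("Access-Control-Allow-Headers", "").split(",") if h.strip()}
    missing_m = {m.upper() for m in needed_methods} - allowed_m
    missing_h = {h.lower() for h in needed_headers} - allowed_h
    if missing_m:
        problems.append(f"preflight missing methods: {sorted(missing_m)}")
    if missing_h:
        problems.append(f"preflight missing headers: {sorted(missing_h)}")
    if "*" in allowed_m or "*" in allowed_h:
        problems.append("wildcard in preflight grants -- broader than least privilege")
    return (not problems, problems)

ok, issues = check_preflight(
    {"Access-Control-Allow-Methods": "GET, PUT",
     "Access-Control-Allow-Headers": "Content-Type, X-Request-Id"},
    needed_methods={"GET", "PUT"},
    needed_headers={"content-type", "x-request-id"},
)
```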
An emerging set of tools focuses on the new Sec-Fetch-* headers (like Sec-Fetch-Site) that browsers send, which indicate the context of the request (same-site, cross-site, etc.). While not part of CORS proper, these can be used on the server side to log or even enforce policy. For example, you might log any request where Sec-Fetch-Site: cross-site for endpoints that shouldn’t be receiving cross-site traffic. During testing, examining these headers can confirm whether the browser considers a request cross-site (which it usually will if origins differ). They can also be used to tighten security: a server could refuse requests that are cross-site unless certain conditions are met. However, adoption of such checks is still not widespread, and they are defense-in-depth rather than primary controls.
In terms of tooling, developers can use browser extensions or online services to test CORS. There are Chrome/Firefox extensions that allow you to craft requests with specific origins or that highlight CORS responses. Additionally, web security labs (like the PortSwigger Web Security Academy) provide interactive examples where developers can practice exploiting and fixing CORS issues, which is a form of training rather than testing production systems, but it equips one to know what to look for. PortSwigger’s CORS labs often include scenarios like “CORS policy trusts arbitrary subdomains” or “CORS trusts null” and require the student to find a way to exfiltrate data — these same techniques can be applied in real testing engagements.
Finally, once a misconfiguration is detected, it’s important to verify its exploitability. A detected issue is typically confirmed by demonstrating that an attacker domain can indeed perform the cross-origin request. In a safe test environment, one might set up a dummy attacker page (even just using something like JSFiddle or a simple HTML file) that attempts to fetch the sensitive resource and prints out the result. If you see the data coming through, you have a proof of concept that the CORS misconfiguration is exploitable. This goes a step beyond just reading response headers in a tool like curl; it shows the browser actually handing over the data to an unauthorized script, which is the crux of the vulnerability.
Operational Considerations (Monitoring, Incident Response)
From an operational standpoint, monitoring for unusual cross-origin activity can help detect attacks or misconfigurations in real time. Web server logs typically include the Origin header if logging is configured to capture request headers. By reviewing logs, one can establish a baseline of expected Origin values. For example, if your service is only supposed to be used by your domain app.trusteddomain.com, then in normal operation almost all requests should either have Origin: https://app.trusteddomain.com or no origin (no origin is sent for same-origin requests or certain non-XHR requests). If you start seeing requests with Origin: http://evil.com or any origin you don’t recognize, that’s a potential indicator of someone probing or exploiting a CORS misconfiguration. Security teams can set up alerts for such anomalies. Some intrusion detection systems or WAFs (Web Application Firewalls) can be configured to flag requests where the origin header doesn’t match allowed patterns. This is somewhat tricky because not all legitimate requests have an Origin (for example, a normal browser navigation or form submission won’t have it), so it’s mainly about monitoring API endpoints that expect a specific origin.
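The baseline-and-alert idea can be sketched as a simple log filter. The log format (dicts with an `origin` key) and the expected-origin set are assumptions; real deployments would plug this into their log pipeline or SIEM.

```python
# Sketch: flag log entries whose Origin header falls outside the expected set.

EXPECTED_ORIGINS = {"https://app.trusteddomain.com", None}  # None = no Origin header sent

def unexpected_origins(log_entries):
    """Yield entries worth alerting on: an Origin we never expect to see."""
    for entry in log_entries:
        if entry.get("origin") not in EXPECTED_ORIGINS:
            yield entry

logs = [
    {"path": "/api/profile", "origin": "https://app.trusteddomain.com"},
    {"path": "/login", "origin": None},                 # same-origin navigation
    {"path": "/api/profile", "origin": "http://evil.com"},
]
suspicious = list(unexpected_origins(logs))
```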
Another operational concern is that, as part of incident response planning, you should account for the possibility of a CORS-related breach. If, say, it’s discovered that for a period of time your API was allowing * and an attacker took advantage, logs will be crucial to understanding the impact. Because the attacker’s requests are coming from the user’s browser, it may look like legitimate traffic (the IP addresses will be the users’ IPs, and user agent strings will be of real browsers). However, by looking at the Origin header in logs, you might distinguish malicious requests (they’ll carry the attacker’s domain as origin). If you have verbose application logs or monitoring, you might also catch odd usage patterns (e.g., a spike in requests to an endpoint that normally isn’t called often, coming from an unusual origin). During incident response, you would then identify which data was potentially accessed and which users were affected, based on those logs.
On the flip side, from an availability standpoint, misconfigured CORS can also cause legitimate functionality to fail, which becomes a support and ops issue. If a deployment accidentally tightens CORS too much (e.g., you forgot to include an origin that should be allowed), users might experience broken features (the front-end can’t retrieve data). Monitoring tools like application performance monitors or client-side error logging might catch the CORS errors users get in their browser consoles. If a spike in CORS errors is reported (perhaps through user feedback or error monitoring scripts), ops teams might need to quickly recognize that as a configuration issue and roll out a fix. Having configuration toggles or the ability to update allowed origins without a full redeploy can be a handy operational feature. Some systems allow dynamic reloading of CORS configuration or reading allowed origins from a database or environment variable so it can be adjusted quickly in response to incidents.
Another consideration is implementing honeypot origins or canaries. For instance, one could intentionally allow a fake origin in the CORS policy and monitor if it ever gets used. In theory, no legitimate client would use it, so any occurrence of that origin in logs would indicate someone trying to abuse or probe CORS. This approach is uncommon but can be part of an advanced monitoring strategy, akin to a honeytoken. More broadly, you might also monitor Referer headers in conjunction with CORS to see if content is being accessed cross-site (though Referer is less reliable and more privacy sensitive).
In terms of incident response, if a CORS vulnerability is discovered, the immediate step is to correct the configuration (lock down the allowed origins). This may involve an emergency patch or configuration change. Since CORS settings are often centralized, it can sometimes be fixed with a single configuration update or deploying a new version of a service. Incident responders should also consider revoking or invalidating credentials (like sessions, API keys, etc.) if they suspect they were compromised via the vulnerability. For example, if an attacker was stealing session tokens or account data, forcing a logout of users or rotating keys might mitigate ongoing illicit access. As part of the post-incident analysis, one should identify how the misconfiguration occurred: Was it a coding error? A miscommunication of requirements? A change that didn’t go through security review? Feed those findings back into the DevSecOps lifecycle to prevent recurrence.
Periodic audits are an operational task too. Over time, an origin that was once allowed might become unnecessary (maybe an integration is deprecated). It’s good practice to periodically audit the list of allowed origins in your CORS config and confirm each one is still needed and trustworthy. During such audits, security engineers might also verify that each of those domains still belongs to the expected party. It’s not unheard of for a company to allow a partner’s domain, and then the partner’s domain registration lapses or gets acquired by someone else — suddenly you’re trusting an unexpected party. Operationally, maintaining the allowlist is akin to maintaining firewall rules or access control lists: prune them when not needed.
Finally, consider disaster-recovery scenarios. If for some reason a critical functionality must be opened up temporarily (perhaps a failover scenario where traffic is routed differently), have a plan for how to do that without accidentally opening up beyond what’s safe. For instance, if you have to quickly allow another origin due to an emergency partnership or domain change, ensure that gets the same scrutiny and later follow-up to lock it back down if appropriate. It’s better to grant temporary exceptions in a controlled way than to permanently weaken the policy under pressure. In an emergency, one might be tempted to set Access-Control-Allow-Origin: * as a quick fix (to “make things work”). An operational playbook should advise against that and suggest safer alternatives (like explicitly adding the one needed origin and commenting it with an expiry or follow-up reminder).
In summary, operationalizing CORS security means watching for anomalies, being able to react swiftly to misconfigs or attacks, and continuously maintaining the integrity of the policy as the application evolves. It’s part of the ongoing security posture of the application – not a set-and-forget setting.
Checklists (Build-time, Runtime, Review)
Build-time (Development & Design phase): During the development of features that involve cross-origin requests, ensure that the need for CORS is captured as a requirement and designed securely. Developers should document which origins need access and why. Incorporate unit tests or integration tests for CORS behavior: for example, a test that calls an API endpoint with an allowed origin (expecting success) and with a disallowed origin (expecting the response to have no CORS headers or an error). By writing tests for these scenarios, you treat CORS policy as part of the application’s contract. Also, at build-time, use linters or static analysis if available: some linting rules can catch the use of dangerous patterns (like AllowAnyOrigin in .NET or literal * in code). If your team uses CI/CD pipelines, consider adding a security check that scans configuration files for wildcard CORS settings. The goal in the build phase is to prevent insecure configurations from ever making it to production. This includes having a clear checklist item in the design: “Are cross-origin interactions needed? If yes, have we specified a safe allowlist of origins and configured the framework accordingly?” Following frameworks’ security guides (Spring, ASP.NET, Flask, etc. all have documentation on how to configure CORS properly) is part of the checklist – use the provided mechanisms rather than inventing your own approach.
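Such tests can be written against the policy function itself. `is_allowed_origin` is a hypothetical function your application would expose; the allowlist values are placeholders.

```python
# Sketch: unit tests that treat the CORS allowlist as part of the contract.

ALLOWLIST = {"https://app.example.com", "https://partner.example.com"}

def is_allowed_origin(origin: str) -> bool:
    return origin in ALLOWLIST  # exact match: no substring or suffix tricks

def test_allowed_origin_passes():
    assert is_allowed_origin("https://app.example.com")

def test_disallowed_and_trick_origins_fail():
    for origin in ("https://evil.com",
                   "https://app.example.com.evil.com",  # suffix trick
                   "null", "*"):
        assert not is_allowed_origin(origin)

test_allowed_origin_passes()
test_disallowed_and_trick_origins_fail()
```

A fuller integration test would make real requests against a staging deployment and assert on the headers, but even this pure-function version catches regressions when someone edits the allowlist logic.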
Runtime (Deployment & Production phase): At runtime, the checklist focuses on verification and monitoring. First, verify that the correct CORS headers are indeed present (and only present) on the responses they should be. This can be done in a staging environment by inspecting responses or using automated integration tests that run after deployment. Ensure that no sensitive endpoints accidentally include Access-Control-Allow-Origin: * in production – sometimes web servers might be configured with defaults that you need to override. Also, include in your runtime checklist that the Vary: Origin header is enabled if dynamic origins are allowed, to avoid cache issues. Monitoring should be set up (as mentioned earlier) – ensure that logs capture origin headers. If using a monitoring dashboard, perhaps create a widget that shows the distribution of request Origin values hitting your service. If any unexpected origins show up, that should trigger investigation. Another runtime concern is configuration drift: if using cloud services or containers, ensure that any platform-level CORS settings (for example, some cloud API gateways have their own CORS config) match what you intend. It’s worth having a step in deployment that double-checks environment-specific CORS config (like checking that a staging environment isn’t accidentally wide-open). For instance, sometimes developers open CORS on a dev environment for ease of testing multiple front-ends; a runtime checklist item must be “Make sure we didn’t carry over a dev/testing CORS setting into production.” If possible, have a fail-safe: for example, some teams implement a middleware that, in production mode, will assert that Access-Control-Allow-Origin is never * on protected endpoints – essentially a sanity check that logs an error or even blocks startup if an insecure setting is detected.
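The fail-safe mentioned above can be sketched as a framework-agnostic response hook; the hook signature, path prefixes, and logging behavior are all assumptions.

```python
# Sketch: a production fail-safe that refuses to emit a wildcard CORS header
# on protected endpoints, logging when it intervenes.

PRODUCTION = True
PROTECTED_PREFIXES = ("/api/",)

def enforce_cors_sanity(path: str, headers: dict) -> dict:
    """Strip (and log) a wildcard ACAO that should never reach production."""
    if (PRODUCTION and path.startswith(PROTECTED_PREFIXES)
            and headers.get("Access-Control-Allow-Origin") == "*"):
        headers = dict(headers)  # avoid mutating the caller's dict
        del headers["Access-Control-Allow-Origin"]
        print(f"CORS sanity check: stripped wildcard ACAO on {path}")
    return headers

safe = enforce_cors_sanity("/api/user", {"Access-Control-Allow-Origin": "*"})
```

Some teams prefer a stricter version that raises at startup if the configuration itself contains a wildcard, rather than repairing responses at runtime.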
Review (Code review & Security assessment): When reviewing code changes, pay special attention to any modifications around header manipulation, CORS library usage, or server configuration files. The reviewer should ensure that any introduction of @CrossOrigin or AllowedOrigins includes specific domains and not a wildcard (unless it’s a known benign case). A review checklist might include: “If new endpoints or controllers are annotated with CORS settings, do they reference a defined allowlist or config constant rather than a hardcoded *?” Also verify that no logic is unintentionally allowing broad access – e.g., if there’s custom code to parse the Origin header, is it doing a strict comparison? Security reviewers should also consider whether the allowed origins themselves are safe (is each one absolutely necessary, and do we trust that domain not to be compromised?). During a broader security assessment or penetration test, CORS should be in scope: the tester should confirm that indeed only the intended cross-origin interactions are possible. A checklist for an assessor: try an allowed origin (should succeed), try a disallowed origin (should fail to get data), try some trick origins (null, similar names, etc. – all should fail). Additionally, review how errors are handled: if a disallowed origin makes a request, does the server leak any information or does everything cleanly fail? It might be beneficial to ensure the server doesn’t send confusing CORS headers (for instance, sending multiple Access-Control-Allow-Origin headers, which can happen if misconfigured, and could cause undefined behavior in some clients). Checking response consistency is part of a thorough review.
Another aspect in review, especially for third-party components or templates: sometimes enabling CORS is done via copy-pasting code from the internet (like a Stack Overflow solution). A reviewer should be cautious if they see such code. Often, those solutions might recommend res.setHeader("Access-Control-Allow-Origin", "*") just to solve a dev’s immediate problem. During review, catching that and recommending the proper solution is key. Similarly, if a library like Flask-CORS is used, review its initialization parameters. Ensure that something like CORS(app) is not left in with default settings in production code. The review should confirm that actual domain names or regex patterns (if used) are correct and tight.
Finally, include documentation in the review checklist: all allowed origins and reasoning should be documented (maybe in code comments or in a design doc). This makes future reviews easier. If a developer sees allowedOrigins = {"https://partner.example.com"} and a comment “# partner portal needs access for embedding our widget”, they have context. Without such notes, someone might remove or change it not realizing the impact, or conversely someone might be afraid to remove an origin that’s no longer needed because they aren’t sure why it was there. Thus, as part of the review, encourage maintaining clear documentation around CORS rules.
Common Pitfalls and Anti-Patterns
Implementing CORS securely can be tricky, and there are several common pitfalls and anti-patterns that developers and even some frameworks fall into.
One major anti-pattern is using wildcards or overly broad domains in production. As repeated throughout, setting Access-Control-Allow-Origin: * is almost never appropriate for an authenticated application. Yet, this remains a common “quick fix” when a developer encounters a CORS error in their front-end. Because the browser error message essentially says “No 'Access-Control-Allow-Origin' header present” or “not allowed”, a search leads to advice like “just add this header to allow all”. The pitfall here is treating CORS errors purely as a technical malfunction rather than a security control. So the anti-pattern is blindly enabling * without appreciating the security implications. Developers should instead understand why the error occurred (perhaps they were testing from a different port or domain) and then configure the specific origin needed. The desire to solve the problem quickly makes the wildcard approach tempting, so teams should internalize that “wildcard in CORS = risky” and avoid it unless absolutely certain the data is public.
A closely related pitfall is reflecting the Origin header without validation (the arbitrary reflection pattern). This often creeps in via snippets like response.setHeader("Access-Control-Allow-Origin", request.getHeader("Origin"));. On the face of it, it seems logical: “allow whichever site is asking”. But without a check against a whitelist, it means any site gets allowed. Sometimes developers assume that the browser will only send Origin for “good” sites or that they will only deploy the code in contexts where the only origin hitting it is their own. These assumptions fail in open web scenarios. The anti-pattern is doing reflection because it’s easy, instead of implementing a proper check. This can be exacerbated by frameworks or libraries that make reflection easy. Some libraries allow a configuration like origin: true meaning “reflect origin” – which can be fine if you combine with a whitelist, but dangerous if unchecked. The proper pattern is to use reflection only after verifying the origin is in an allowlist (some frameworks do this behind the scenes if you provide a list).
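The safe version of reflection checks the allowlist first and echoes the origin only on a match. This is a minimal sketch with placeholder domains, not a framework's actual implementation.

```python
# Sketch: reflect the Origin back only after an exact allowlist check,
# and always send Vary: Origin so caches keep per-origin responses apart.

ALLOWED = {"https://app.example.com", "https://admin.example.com"}

def cors_headers(request_origin):
    """Return the CORS headers to attach for this request, if any."""
    headers = {"Vary": "Origin"}
    if request_origin in ALLOWED:  # exact match against the allowlist
        headers["Access-Control-Allow-Origin"] = request_origin
        headers["Access-Control-Allow-Credentials"] = "true"
    return headers

h = cors_headers("https://app.example.com")
```

An unrecognized origin gets no Access-Control-Allow-Origin header at all, so the browser blocks the cross-origin read.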
Another subtle pitfall is improper wildcard usage in allowed headers or methods. While allowing all headers (Access-Control-Allow-Headers: *) or all methods might not immediately open a security hole (like allowing all origins does), it violates the principle of least privilege. It can also mask mistakes – for example, if your server doesn’t actually support a method, you shouldn’t declare it allowed. Or if you unintentionally allow a header like X-Admin-Auth because of a wildcard, you might introduce an unforeseen risk if that header controls some behavior. The best practice is to enumerate needed methods and headers. A pitfall is just writing allowedHeaders: "*" out of convenience. This is considered an anti-pattern because it’s a lazy approach that could allow requests that you never anticipated (for instance, a Flash or Silverlight legacy client might send weird headers – if you allowed *, those would go through; if you had restricted, the preflight would have stopped them).
An often overlooked pitfall is failing to send the Vary: Origin header when dynamically allowing multiple origins. This doesn’t directly enable attackers, but it’s a pitfall that can lead to caching issues. Without Vary: Origin, an intermediate cache (like a CDN or proxy) might capture a response for one origin and serve it to another, which could lead to either that second origin’s requests getting blocked by the browser (because they received the previous Access-Control-Allow-Origin and it doesn’t match) or, in worst case, leaking data across origins via cached responses. It’s an anti-pattern to not consider caching when implementing CORS. The correct approach is trivial (just including the Vary header), but it is commonly missed by custom implementations. Many secure-by-default frameworks handle this, but if you write your own CORS logic, you must remember to include it. Not doing so is a pitfall that might cause inconsistent behavior that’s hard to debug.
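The caching hazard can be illustrated with a toy model: a shared cache that ignores Vary keys responses by URL alone, so the Access-Control-Allow-Origin header issued for one origin gets replayed to another. Real CDNs honor Vary; this models one that does not.

```python
# Toy model: a shared cache that may or may not honor Vary: Origin.

def cache_key(url, origin, honor_vary):
    # Honoring Vary: Origin means the origin is part of the cache key.
    return (url, origin) if honor_vary else (url,)

def fetch(cache, url, origin, honor_vary):
    key = cache_key(url, origin, honor_vary)
    if key not in cache:
        # In this toy server every caller is allowlisted, so the origin
        # is echoed back in the cached response.
        cache[key] = {"Access-Control-Allow-Origin": origin, "Vary": "Origin"}
    return cache[key]

bad_cache = {}
fetch(bad_cache, "/data", "https://a.example.com", honor_vary=False)
stale = fetch(bad_cache, "/data", "https://b.example.com", honor_vary=False)
# stale still carries a.example.com's ACAO header, so b's browser blocks the read
```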
Another pitfall relates to developer testing in local or staging environments. Often in development, you might run a front-end on localhost:3000 and a backend on localhost:5000. To avoid CORS headaches while coding, developers sometimes disable CORS checks in the browser (using flags or plugins) or they configure the server to allow all origins during development. The danger is that this permissive setting might accidentally make it to production or become habit. It is an anti-pattern to have separate “dev mode, CORS off” and “prod mode, CORS on with restrictions” unless you are very disciplined, because there's a risk that something gets missed in the transition. Instead, incorporate the proper origins even in dev (like explicitly allow http://localhost:3000 in dev config) so that you’re exercising the mechanism properly. The pitfall scenario is a developer who has never tested with CORS enabled and when they do enable a policy in prod, they misconfigure it. Or they leave * because it “worked in dev”. The best practice is to treat dev environment with nearly the same CORS rules (just including the dev origins as needed) so that you catch issues early.
An anti-pattern emerging from misunderstanding is thinking that CORS is a security feature on the server that prevents inbound requests. Developers sometimes incorrectly assume that by not enabling CORS headers, they are safe from cross-site requests entirely. The nuance is that even if you don’t set CORS headers, the request still hits your server – it’s just that the browser won’t deliver the response to the calling script. But if your server performs a state-changing action (like a purchase or data modification) via a POST, an attacker’s site could still trigger that action (this is essentially the CSRF scenario). CORS doesn’t stop the request from reaching the server; it only stops the response from being read by the client. So a pitfall is relying on “not having CORS” as your only protection for something that should actually have CSRF protection or authentication. Conversely, some might think enabling CORS for a domain is a form of authentication – it is not. The Origin header is only enforced by browsers: a non-browser client can send any Origin value it likes, so an allowed origin is not proof of where a request actually came from. The takeaway is that devs should not use CORS as a substitute for authentication or authorization. It’s a pitfall to say “we allowed only our partner's origin, so we don't need auth on that API” – that would be false, because an attacker could still directly curl your API (bypassing the browser altogether) if there’s no auth. CORS is a client-side protection, not a server-side access control.
A notable common pitfall is with subdomain patterns. Developers assume they can do something like AllowedOrigins = ".example.com" to allow all subdomains of example.com. The CORS spec doesn’t support partial wildcards like that in the header. What some frameworks do (like if you configure .example.com in certain libraries) is treat it as a pattern and internally match origins, but they then have to echo the exact origin rather than literally sending “.example.com”. If a developer tries to manually configure it (for instance, they might put *.example.com in some config not expecting it to literally appear, but some proxy or incorrectly written code might literally send “*.example.com” as the header, which is invalid), it ends up broken. The anti-pattern here is magical thinking with wildcards. The safe pattern is to explicitly list subdomains or implement code to check the suffix properly. Pitfall examples: trusting any subdomain of your domain (like allowing *.example.com) without realizing that if an attacker can create a subdomain (not uncommon in some scenarios, like they have an account that lets them host content on something.example.com), then they are inadvertently trusted. If, for instance, your organization uses a cloud service and it gives each user a sub-subdomain (user1.example.com, user2.example.com), and you allow *.example.com, you might be trusting user content domains. It has happened that companies allow all subdomains thinking they control them, but some subdomains might point to third-party hosts or have less security. The design should consider that – if any subdomains might be untrusted, do not wildcard trust all. Instead, enumerate the specific ones.
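When subdomain trust is genuinely required, the matching must be done in server code with an explicit dot boundary (the header itself cannot contain partial wildcards), and the matched origin echoed back verbatim. A minimal sketch, assuming all subdomains of the parent domain really are under your control:

```python
# Sketch: suffix matching with a proper dot boundary, so that
# "example.com.evil.com" and "evilexample.com" are rejected.
from urllib.parse import urlsplit

def allowed_subdomain(origin: str, parent: str = "example.com") -> bool:
    parts = urlsplit(origin)
    if parts.scheme != "https" or not parts.hostname:
        return False  # reject non-HTTPS origins and malformed values
    host = parts.hostname
    # exact domain, or a true subdomain ending in ".example.com"
    return host == parent or host.endswith("." + parent)

ok = allowed_subdomain("https://shop.example.com")
# If ok, the server would echo "https://shop.example.com" back in
# Access-Control-Allow-Origin -- never the literal "*.example.com".
```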
Finally, a subtle anti-pattern: Ignoring the preflight error or failing to handle it properly. If a developer sees their requests failing due to a missing preflight response, they might be tempted to disable the requirement (like by switching to only simple requests) or open up the server’s preflight to all. For example, some might set Access-Control-Allow-Methods: * on the OPTIONS response. While not directly a vulnerability, it’s a sloppy practice that could bite back. If you allow all methods in preflight but not actually support them, it might confuse clients or open potential for unexpected methods going through (should the server not properly enforce at route level). It’s best to keep preflight responses tight. Another aspect: some devs forget to implement an OPTIONS route at all. This leads to CORS failing. To quickly fix, they might put a generic catch-all that returns everything (like a wildcard * allow all on OPTIONS). That reintroduces the wildcard problem. The pitfall is treating the preflight as a nuisance and blanket-allowing it rather than as part of your security policy.
In summary, most anti-patterns revolve around being too permissive or not understanding what the CORS mechanism is doing. The remedy is always to be explicit, precise, and to understand that a misstep here can have serious consequences. Avoid shortcuts that bypass fundamental security checks. CORS is one area where a one-size-fits-all solution (like just allow everything) is almost always the wrong answer. Each application’s needs are unique, and the CORS configuration should be tuned accordingly.
References and Further Reading
OWASP Application Security Verification Standard (ASVS) 4.0 – The ASVS provides a comprehensive set of security requirements for web applications. It includes recommendations for safe CORS configuration under its verification requirements, ensuring that only trusted origins are permitted in cross-domain resource sharing. OWASP ASVS 4.0
OWASP Web Security Testing Guide (WSTG) – Testing for CORS (WSTG-CLNT-07) – This guide outlines how to assess a web application’s CORS implementation. It covers testing techniques such as checking for wildcard origins, origin reflection, and improper validation logic, with examples of how attackers exploit CORS misconfigurations. The WSTG is a valuable resource for understanding the tester’s perspective on CORS security. OWASP WSTG – CORS Testing
Mozilla Developer Network (MDN) Web Docs – HTTP Access Control (CORS) – MDN provides an excellent overview of CORS, including the role of each CORS header, what “simple” requests are, how preflight works, and common pitfalls. It’s a great reference for developers to understand browser behavior and correct server configuration, complete with examples and diagrams. MDN Web Docs – CORS
W3C Cross-Origin Resource Sharing Specification – The formal specification (a W3C Recommendation) for CORS. It defines the protocol in detail, including normative rules about when browsers should send preflights, how headers should be interpreted, and the security considerations behind them. While dense, this is the authoritative source for how CORS is supposed to work in compliant user agents. W3C CORS Spec
OWASP REST Security Cheat Sheet – CORS Section – Part of the OWASP Cheat Sheet Series, this guide includes a section on CORS for RESTful APIs. It emphasizes disabling CORS if not needed and being as specific as possible when enabling it. This resource provides succinct best practices in the context of API security, reinforcing the principles of whitelisting and least privilege for cross-domain requests. OWASP REST Security Cheat Sheet – CORS
PortSwigger Research: “Exploiting CORS Misconfigurations for Bitcoins and Bounties” – A seminal research article by James Kettle that popularized CORS exploitation techniques in the security community. It covers various scenarios of CORS misconfigurations found in the wild, such as origin reflection vulnerabilities and tricky edge cases (like null origin). The article also provides insight into why developers make these mistakes and how attackers systematically discover and exploit them. PortSwigger Research – Exploiting CORS Misconfigurations
PortSwigger Web Security Academy – CORS Tutorial and Labs – An educational resource offering an in-depth tutorial on CORS, along with interactive labs to practice finding and exploiting CORS issues. The content walks through what CORS is, examples of common CORS-based attacks, and guidance on protecting against them. It’s useful for developers and security testers alike to solidify their understanding through hands-on examples. PortSwigger Academy – What is CORS?
MITRE CWE-942: Permissive Cross-Domain Policy with Untrusted Domains – This entry in the Common Weakness Enumeration catalog describes the weakness of overly permissive cross-domain policies, which includes CORS misconfigurations (as well as similar issues like permissive crossdomain.xml files for Flash, etc.). It explains the nature of the weakness, potential consequences, and references to observed instances. CWE-942 Details
MITRE CWE-346: Origin Validation Error – A related CWE entry focused on failures to properly validate the origin of a request. In the context of CORS, this can map to logic errors where the origin header isn’t correctly checked against an allowlist. It’s a broader category that also covers issues in other domains (like Cross-Site Request Forgery in some interpretations), but it underscores the importance of validating the source of requests. CWE-346 Details
This content is authored with assistance from OpenAI's advanced reasoning models (classified as AI-assisted content). Material is reviewed, validated, and refined by our team, but some issues may be missed and best practices evolve rapidly. Please use your best judgment when reviewing this material. We welcome corrections and improvements.
Send corrections to [email protected].
We cite sources directly where possible. Some elements may be derived from content linked to the OWASP Foundation, so this work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. You are free to share and adapt this material for any purpose, even commercially, under the terms of the license. When doing so, please reference the OWASP Foundation where relevant. JustAppSec Limited is not associated with the OWASP Foundation in any way.
