JustAppSec

Prototype Pollution

Overview

Prototype pollution is a software vulnerability that enables an attacker to manipulate the base prototypes of objects in prototype-based languages (most notably JavaScript). In JavaScript, almost all objects inherit properties from a prototype (by default, Object.prototype), which means that modifying a prototype affects every object that inherits from it (portswigger.net). Prototype pollution exploits this trait: if an attacker can inject properties into an object’s prototype chain, those properties may appear in objects across the application. This can subvert program logic and security controls, leading to serious issues such as unauthorized data access, privilege escalation, denial of service, or even remote code execution (cheatsheetseries.owasp.org) (portswigger.net). Although this vulnerability has been recognized only in recent years (with notable cases emerging around 2018), it represents a critical risk in modern applications and is no less dangerous than better-known bugs like SQL injection or XSS (portswigger.net). Prototype pollution primarily affects JavaScript (including Node.js and browser scripts), but the core concept – unsafely mixing untrusted data into object structures – has parallels in other languages’ object injection flaws.
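As a quick sketch of the inheritance behavior described above (runnable in any modern JavaScript engine), a single write to the shared prototype becomes visible through every plain object:

```javascript
// Every plain object delegates failed property lookups to Object.prototype.
const a = {};
const b = { name: "unrelated" };

console.log(a.polluted, b.polluted); // undefined undefined

// A single write to the shared prototype...
Object.prototype.polluted = "yes";

// ...is now visible through every object that inherits from it.
console.log(a.polluted, b.polluted); // yes yes

// Clean up so the rest of the program is unaffected.
delete Object.prototype.polluted;
```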

Threat Landscape and Models

The threat model for prototype pollution involves an attacker supplying crafted input that the application uses to build or extend objects at runtime. Commonly, this arises when JSON data, query parameters, or other user-controllable fields are converted into objects without proper validation. An attacker might, for example, append malicious query parameters or JSON keys like __proto__ or constructor that the application’s code then merges into an object (portswigger.net) (labs.withsecure.com). If the code iterates through object fields or performs a shallow merge, the special __proto__ property (or similar) in the input can trick the runtime into modifying the object’s prototype. In client-side scenarios, an attacker may lure a victim into visiting a URL with such a payload (e.g. ?__proto__[evil]=true) or inject a malicious script into a web page that modifies objects; in server-side scenarios (Node.js), the attacker may send JSON payloads in API requests to achieve the same effect.
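One detail that makes JSON payloads such an effective vector is worth sketching: JSON.parse creates "__proto__" as an ordinary own key that merge loops will later iterate over, whereas in an object literal the same key silently sets the new object's prototype and is never enumerated:

```javascript
// In an object literal, a "__proto__" key sets the new object's prototype...
const literal = { "__proto__": { evil: true } };
console.log(Object.keys(literal));                // [] (no own keys at all)
console.log(Object.getPrototypeOf(literal).evil); // true

// ...but JSON.parse treats it as a plain data key, which is exactly what a
// vulnerable merge loop will later iterate over and assign.
const parsed = JSON.parse('{ "__proto__": { "evil": true } }');
console.log(Object.keys(parsed));                             // [ '__proto__' ]
console.log(Object.getPrototypeOf(parsed) === Object.prototype); // true
```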

Prototype pollution attacks do not require the attacker to have privileged access — any interface that accepts structured input can be a vector. For instance, researchers have shown that even a public search endpoint or feedback form could be abused if it processes JSON or query parameters insecurely (portswigger.net). Both anonymous attackers and authenticated users might exploit this, depending on where the vulnerable code resides (e.g., in a public API vs. an internal admin tool). The vulnerability is often exacerbated by supply chain issues: many prototype pollution flaws originate in popular JavaScript libraries (such as jQuery, Lodash, or others), meaning a single library bug can indirectly expose thousands of applications (portswigger.net) (labs.withsecure.com). Attackers actively scan for these weaknesses, as evidenced by automated tools (like browser extensions and bots) designed to find vulnerable code patterns in high-traffic websites (portswigger.net). In threat modeling, prototype pollution is typically classified as a form of injection or object manipulation attack, often leading to secondary impacts like XSS or code injection. Because it is a relatively specialized issue, it was historically overlooked in many security guides, but its prevalence and severity have grown as modern web applications rely heavily on JavaScript object manipulation (portswigger.net).

Common Attack Vectors

Prototype pollution vulnerabilities generally arise from common coding patterns that dynamically assign object properties without adequate safeguards. One prevalent vector is object merging: combining two objects by iterating over the source object’s keys and assigning them to a target object. If the source is attacker-controlled and contains a key like __proto__ (or its equivalents), a naive merge function will assign target["__proto__"] = payload, thereby polluting the target’s prototype chain (www.netspi.com). A similar issue occurs with object cloning routines that copy properties from a source object into a new object; if the source includes malicious prototype keywords, the new cloned object’s prototype may get tainted. Another vector is direct property set operations: for example, code that does object[someKey] = value with a key coming from user input. If an attacker can influence someKey to be __proto__ or constructor.prototype, this single assignment can inject properties into the global Object prototype (www.netspi.com). Essentially, any coding pattern that takes untrusted input as object keys or property names, and writes them into an object, is a potential attack surface.
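The direct property-set vector can be sketched as follows; setPath is a hypothetical helper standing in for any code that writes a value at a caller-supplied key path (a pattern common in configuration and preferences APIs), with the key names coming from input such as ?__proto__[isAdmin]=true:

```javascript
// Hypothetical helper: writes a value at a caller-supplied two-level key path.
function setPath(obj, outerKey, innerKey, value) {
  if (typeof obj[outerKey] !== "object" || obj[outerKey] === null) {
    obj[outerKey] = {};
  }
  obj[outerKey][innerKey] = value; // no check on either key
}

const settings = {};

// Attacker controls both key names, e.g. parsed from ?__proto__[isAdmin]=true
setPath(settings, "__proto__", "isAdmin", true);

// settings["__proto__"] resolved to Object.prototype, so the write landed there:
console.log({}.isAdmin); // true

delete Object.prototype.isAdmin; // clean up
```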

In client-side JavaScript, attack vectors often involve malicious URLs or scripts that exploit how frameworks parse query strings or JSON. A known example was the jQuery extend() vulnerability (CVE-2019-11358), where an attacker could pass URL parameters like __proto__[foo]=bar; when a web application parsed the query string into an object and deep-merged it with $.extend(true, {}, params), the __proto__ key in the input would modify the resulting object’s prototype, affecting the entire page context (portswigger.net). In server-side contexts (Node.js), RESTful APIs that accept JSON bodies are a common vector. For instance, an API endpoint might merge a JSON payload into a configuration object or user object; if the JSON includes a nested {"constructor": {"prototype": { "admin": true }}} structure, it can set Object.prototype.admin = true by leveraging the constructor.prototype path (even if __proto__ is filtered) (labs.withsecure.com). Attackers have also leveraged prototype pollution in conjunction with deserialization or templating: for example, polluting an object that a template engine uses can lead to server-side template injection or arbitrary code execution. In summary, the main pathways for this attack are unsanitized merges, deep copies, or property assignments involving user-supplied object keys.
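The constructor.prototype bypass can be sketched with a hypothetical deep merge that filters only __proto__ (a simplified stand-in for the incomplete blocklists seen in real libraries):

```javascript
const isObj = (v) => v !== null && (typeof v === "object" || typeof v === "function");

// A merge that blocks "__proto__" but forgets the constructor route.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (key === "__proto__") continue; // incomplete blocklist
    if (isObj(source[key]) && isObj(target[key])) {
      merge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

const user = { name: "guest" };
const payload = JSON.parse('{ "constructor": { "prototype": { "admin": true } } }');

// user["constructor"] is the Object function, and Object["prototype"] is
// Object.prototype, so the nested merge walks straight into it.
merge(user, payload);
console.log({}.admin); // true: globally polluted despite the __proto__ filter

delete Object.prototype.admin; // clean up
```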

Impact and Risk Assessment

The impact of prototype pollution can be severe, often enabling an attacker to pivot to higher-impact exploits. At a minimum, polluting a prototype can cause data integrity issues and application instability. For example, an attacker might add a property that alters the behavior of application logic – such as forcing a boolean flag to true – thereby bypassing security checks. A classic illustration is an object representing a user: if the code checks if (user.isAdmin) { ... }, an attacker who polluted Object.prototype.isAdmin = true would make this condition true for all user objects (www.netspi.com). This amounts to a privilege escalation, granting unauthorized access or actions. Similarly, an attacker could set Object.prototype.authenticated = true or change default configuration values (like debugMode) to influence application flow.

Beyond logic manipulation, prototype pollution is often a stepping stone to other vulnerabilities. In client-side scenarios, it frequently leads to Cross-Site Scripting (XSS). If an attacker can inject a script via a polluted property – for instance, by manipulating something like an object of allowed HTML tags or overriding a DOM method – they can execute arbitrary JavaScript in the victim’s browser (portswigger.net). Many documented attacks show that a polluted prototype can be used as a “gadget” to bypass sanitization or insert malicious content into the DOM. On the server side, the consequences can escalate to Denial of Service (DoS) or Remote Code Execution (RCE). A polluted prototype might crash a Node.js process by causing unexpected behavior or infinite loops (DoS). Even more critically, researchers have demonstrated that prototype pollution can be chained with clever gadget exploitation to execute arbitrary code in Node.js applications (portswigger.net) (arxiv.org). For instance, a 2022 academic study found multiple gadget APIs in Node’s core that, when triggered via polluted prototypes, allowed execution of system commands, resulting in full RCE on the server (arxiv.org). They successfully exploited high-profile applications like the npm CLI and Rocket.Chat through this method, underlining that the risk is not just theoretical. The business impact of such exploits is high: it can compromise entire servers, leak or corrupt data, or deface web interfaces. CVSS ratings for prototype pollution findings are often critical, especially when combined with an XSS or RCE payload. Therefore, organizations should treat this vulnerability with the same seriousness as other injection flaws, given that it can undermine fundamental security controls. The risk is amplified by the stealthy nature of the attack—since it manipulates internal object behavior, detection can be non-trivial until something overt (like an XSS) occurs.

Defensive Controls and Mitigations

Preventing prototype pollution requires a combination of vigilant coding practices and defensive measures in the runtime environment. The primary strategy is input validation and sanitization for any untrusted data that will be used as object keys. In practice, this means disallowing or stripping any keys named __proto__, constructor (with .prototype), or other built-in object prototype references from incoming JSON or query parameters (labs.withsecure.com). A robust approach is to implement an allow-list of expected property names for objects – reject any key that isn’t explicitly permitted. This makes bypasses via encodings or slight variations (e.g., an attacker using a different capitalization or an alternate way to reference the prototype) much less likely (labs.withsecure.com). Notably, relying on simple block-lists (“forbid __proto__”) is dangerous because attackers can use synonyms like constructor.prototype or even obscure patterns to achieve the same effect (labs.withsecure.com). Many libraries have updated their code to automatically filter out prototype keywords; for example, recent versions of popular merge/extend utilities will ignore __proto__ by default. If using such libraries, ensure you have updated to a version where the issue is patched, and consider additional filtering as layers of defense.
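A minimal sketch of the layered approach described above (names like pickAllowed are hypothetical): an allow-list is the primary control, with a blocklist of known-dangerous keys and a null-prototype output object as defense in depth:

```javascript
const FORBIDDEN_KEYS = new Set(["__proto__", "constructor", "prototype"]);

// Hypothetical sanitizer: keeps only explicitly allowed keys
// and drops dangerous ones.
function pickAllowed(input, allowedKeys) {
  const out = Object.create(null); // no prototype to pollute
  for (const key of Object.keys(input)) {
    if (FORBIDDEN_KEYS.has(key)) continue;    // defense in depth
    if (!allowedKeys.includes(key)) continue; // allow-list is primary
    out[key] = input[key];
  }
  return out;
}

const payload = JSON.parse(
  '{ "name": "Eve", "__proto__": { "isAdmin": true }, "theme": "dark" }'
);
const clean = pickAllowed(payload, ["name", "theme"]);

console.log(clean.name);               // Eve
console.log(clean.theme);              // dark
console.log(Object.prototype.isAdmin); // undefined (payload key never copied)
```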

Another important control is to use safer object creation patterns. When an object is needed to store untrusted key-value pairs (such as a parsed JSON payload), prefer creating it with no prototype. For instance, in JavaScript one can use Object.create(null) to create a “dictionary” object that does not inherit from Object.prototype (cheatsheetseries.owasp.org). Since this object has no prototype, an assignment to __proto__ simply creates an ordinary own property named "__proto__" with no special behavior — there is no prototype chain to pollute. Likewise, developers can consider using Map or Set for collections of key-value data instead of plain objects (cheatsheetseries.owasp.org). Map entries are stored separately from object properties, so key lookups never traverse a prototype chain that could be polluted, providing built-in safety for such use cases. On the platform side, Node.js offers a runtime flag --disable-proto=delete which can be set to remove the __proto__ accessor from the environment (cheatsheetseries.owasp.org). This is a defense-in-depth measure that prevents code from using the __proto__ property at all (legacy code using it might break, but it significantly reduces the attack surface). Even with __proto__ disabled, keep in mind that constructor.prototype assignments could still achieve pollution, so input validation is still required (cheatsheetseries.owasp.org).
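Both of these safer constructs can be demonstrated in a few lines:

```javascript
// A null-prototype "dictionary" has no prototype chain to pollute.
const dict = Object.create(null);
dict["__proto__"] = { evil: true }; // just an ordinary own key here
console.log(Object.getPrototypeOf(dict)); // null (unchanged)
console.log(({}).evil);                   // undefined (nothing leaked)

// A Map stores keys separately from object properties entirely.
const map = new Map();
map.set("__proto__", { evil: true });
console.log(map.get("__proto__").evil); // true, but only inside the Map
console.log(({}).evil);                 // undefined (still nothing leaked)
```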

In addition to preventive filtering, consider freezing prototypes to lock them against modification. For example, calling Object.freeze(Object.prototype) at the start of an application will prevent new properties from being added to the base prototype at runtime (labs.withsecure.com). This can stop many pollution attacks cold, because even if an attacker injects __proto__ keys, the environment will not allow altering the frozen prototype. Be cautious with this approach: freezing is a global action and may interfere with libraries that legitimately (though arguably poorly) modify built-in prototypes (labs.withsecure.com). It’s best applied in applications that have been tested for compatibility with frozen objects. Similarly, Object.seal() could be used to prevent adding or removing properties (though still allowing changes to existing ones), but sealing the root prototype is less common. These approaches implement a secure-by-default posture at runtime. As a design principle, code should avoid modifying global prototypes entirely – in fact, a good guideline is to treat built-in prototypes as immutable (if something needs to be added globally, it should be done in a controlled initialization step, not during normal execution, and certainly not based on user input).
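A minimal sketch of the freezing approach (note that in sloppy mode the blocked write fails silently, while in strict mode it throws a TypeError, so both outcomes are handled):

```javascript
// Applied once at startup, before any untrusted input is handled.
Object.freeze(Object.prototype);

const obj = {};
try {
  // A later pollution attempt has no effect on the frozen prototype.
  Object.prototype.isAdmin = true;
} catch (e) {
  // TypeError under "use strict"; silent no-op otherwise.
}

console.log(obj.isAdmin); // undefined: the frozen prototype was not modified
```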

Finally, standard secure development practices help mitigate prototype pollution. Following the principle of least privilege for data, don’t give user-provided data more power than necessary – for instance, do not use eval-like dynamics to create object properties wholesale from input. Use schemas and type-checking: if using a schema validation library for JSON (such as JSON Schema or typed DTOs in frameworks), configure it to reject extra fields and enforce type constraints. This aligns with requirements in standards like the OWASP ASVS 4.0, which emphasize strict input validation and interpreting input only according to expected definitions. By limiting what input can do, you inherently prevent it from reaching into the language internals. Also, keep your dependency ecosystem up to date; many prototype pollution instances come from outdated libraries, so regular dependency scanning and patching (using tools like npm audit, OWASP Dependency-Check, or Snyk) is essential. In summary, defensive controls should eliminate known bad inputs (__proto__, etc.), use safer object constructs or language features to avoid the problem entirely, and configure the environment and libraries to be hostile to prototype tampering.

Secure-by-Design Guidelines

Designing applications to be inherently resistant to prototype pollution is a proactive way to avoid relying solely on reactive fixes. One key guideline is to avoid using object structures for unvalidated ad-hoc data storage. If your design calls for storing user preferences, configurations, or other data in key-value form, prefer structured classes or validated maps over generic objects. For example, instead of accepting an arbitrary JSON and merging it into an internal config object, define a configuration schema and parse the JSON into a strongly-typed configuration object (or use a class with specific setters). This ensures that any keys not defined in the schema or class are ignored or cause an error. Many frameworks support this pattern: in typed languages (Java, C#), binding JSON to a class will naturally ignore unknown fields (or throw an exception if configured to do so), and in JavaScript one can use schema validators or TypeScript interfaces to constrain input. The design principle here is to make it impossible for unknown properties to infiltrate – by design, the system knows exactly what fields are expected.

Another design consideration is minimizing prototype usage in favor of safer abstractions. In modern JavaScript development, it’s uncommon to manually manipulate Object.prototype, and it’s considered poor practice to extend native prototypes (monkey-patching) because it can lead to collisions and unpredictability. By adhering to that practice, any legitimate code in the system should not be touching prototypes at runtime. Therefore, any attempt to do so can be treated as suspicious or simply blocked. In frameworks or libraries where dynamic object extension is needed, see if they provide safer APIs. For instance, certain templating libraries or object mappers allow hooking into the object creation process – you can intercept and validate keys at a single choke point. Designing in this hook or validation layer from the start is easier than retrofitting it later.

When designing web APIs or web interfaces, incorporate prototype pollution into your threat modeling and abuse case scenarios. Ask questions like, “What happens if an attacker supplies an unexpected object property in this request?” and ensure the design has an answer (e.g., “The extra property will be ignored because we iterate only over a defined set of keys” or “The input is rejected by schema validation”). Likewise, if your design uses third-party components, evaluate their behaviors: Does the UI library parse URL fragments into objects? Does the server framework automatically bind request parameters to objects? Secure design might involve disabling or overriding such features. For example, some Node.js frameworks have introduced safe parsers that automatically strip __proto__ from JSON or query params — using these safe defaults (or plugins that enforce them) is a design-level decision.

A secure-by-design approach also means planning for defense in depth. Even if your code should never allow a prototype pollution, design your deployment so that a successful exploitation has limited impact. For instance, in a Node.js application, running the process with least privileges and employing container isolation can limit what RCE via prototype pollution can achieve. In the browser, deploying Content Security Policy (CSP) can minimize what an XSS via polluted prototypes can do (e.g., blocking external script loads). These measures go beyond the code logic, but they reflect an architectural mindset that anticipates that no input (even structural parts of objects) can be fully trusted. In summary, a secure design treats prototype pollution as an expected threat: it avoids dynamic object property creation when possible, strictly defines data contracts, and uses frameworks and runtime configurations that reduce or neutralize the effects of malicious prototype manipulation.

Code Examples

Below are code examples in multiple languages and pseudocode, illustrating insecure patterns vulnerable to prototype pollution or analogous issues, and their secure counterparts. Each example is accompanied by an explanation of why the code is unsafe or how it has been improved to prevent prototype abuse.

Python

Insecure Example (Python): In this example, a user-provided dictionary data is merged into an object’s attributes without any filtering. The User class is meant to have a fixed attribute name, but the naive update function blindly sets all keys from data onto the object. An attacker could include a key like is_admin in the data to escalate privileges. This is analogous to a mass-assignment vulnerability, where untrusted input directly modifies security-sensitive fields.

class User:
    def __init__(self, name):
        self.name = name
        self.is_admin = False

def update_object(obj, data):
    # Insecure: blindly update object attributes from untrusted data
    for key, value in data.items():
        setattr(obj, key, value)

# Simulate attacker-controlled input:
user = User("Alice")
untrusted_data = {"is_admin": True}
update_object(user, untrusted_data)

if user.is_admin:
    print("User has admin privileges!")  # This will execute, even though it shouldn't

In the insecure Python code above, update_object does not restrict which attributes can be set. An attacker exploited this by passing {"is_admin": True}, causing the code to grant admin privileges. The vulnerability lies in directly using user input to set attributes that should be internal (here, is_admin).

Secure Example (Python): The secure version uses an allow-list of permitted keys. The safe_update_object function only sets attributes if the key is in a predefined list of allowed fields. In this design, even if an attacker provides disallowed keys (like is_admin), they will be ignored and not affect the object’s state.

class User:
    def __init__(self, name):
        self.name = name
        self.is_admin = False

def safe_update_object(obj, data, allowed_keys):
    # Secure: update only allowed attributes, ignore others
    for key, value in data.items():
        if key in allowed_keys:
            setattr(obj, key, value)

user = User("Alice")
untrusted_data = {"is_admin": True, "name": "Eve"}
safe_update_object(user, untrusted_data, allowed_keys=["name"])

print(user.name)      # Output: Eve (name was updated)
print(user.is_admin)  # Output: False (unchanged, attempt to set ignored)

In the secure Python example, is_admin remains False despite the malicious input. By whitelisting acceptable keys (["name"] in this case), the code ensures that unauthorized properties like is_admin are never modified by untrusted data. This prevents the kind of privilege escalation seen in the insecure version.

JavaScript

Insecure Example (JavaScript): This example demonstrates a recursive object merge utility that is vulnerable. The mergeObjects function copies all properties from a source object to a target object, recursing into nested objects. If the source contains an own key named __proto__ (which is exactly what JSON.parse produces for such a payload), the recursive step reads target["__proto__"], which resolves to Object.prototype, and then writes the attacker’s properties directly onto it. Below, the malicious input is parsed from {"__proto__": { "isAdmin": true }}. After merging, the user object (and every other plain object) will unexpectedly have an isAdmin property via the prototype chain.

function mergeObjects(target, source) {
  // Insecure: recursively merges without filtering prototype keys
  for (let key in source) {
    if (typeof source[key] === "object" && source[key] !== null &&
        typeof target[key] === "object" && target[key] !== null) {
      // When key is "__proto__", target[key] resolves to Object.prototype
      mergeObjects(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Application code
let user = { name: "guest" };
console.log(user.isAdmin);            // undefined (as expected initially)

// Attacker-controlled payload; JSON.parse makes "__proto__" an ordinary own key
let maliciousInput = JSON.parse('{ "__proto__": { "isAdmin": true } }');
mergeObjects(user, maliciousInput);

console.log(user.isAdmin);            // true (inherited from polluted prototype)
if (user.isAdmin) {
  console.log("User is admin!");
}
console.log({}.isAdmin);             // true (all objects now inherit isAdmin)

In the insecure JavaScript snippet, mergeObjects doesn’t guard against prototype pollution. When the loop reaches the "__proto__" key, target["__proto__"] evaluates to Object.prototype, and the recursive call copies isAdmin onto it. As a result, user.isAdmin becomes true even though user had no such property initially. Furthermore, the pollution is global ({}.isAdmin is true) because every plain object inherits from the now-polluted Object.prototype. Note that the payload must arrive as parsed data (via JSON.parse, a query-string parser, or similar): in an object literal, a "__proto__" key would silently set the literal’s prototype instead of creating an own property, so the merge loop would never enumerate it. This highlights how dangerous an unchecked recursive merge can be.

Secure Example (JavaScript): The secure approach is to validate keys and avoid merging dangerous properties. Here we implement safeMerge which skips any keys that match __proto__ or constructor (or any other internal prototype-related keys). We also demonstrate creating a safe target object with no prototype using Object.create(null). This ensures that even if a malicious __proto__ slips through, it would only become a normal property on a null-prototype object, not affecting the global prototype chain.

function safeMerge(target, source) {
  for (let key in source) {
    // Secure: filter out prototype pollution keys
    if (key === "__proto__" || key === "constructor") {
      continue;
    }
    if (Object.prototype.hasOwnProperty.call(source, key)) {
      target[key] = source[key];
    }
  }
  return target;
}

// Using a null-prototype object as target for extra safety
let safeTarget = Object.create(null);
// Parsed from JSON so that "__proto__" is an own key, as in a real attack
let input = JSON.parse('{ "__proto__": { "role": "admin" }, "name": "Eve" }');
safeMerge(safeTarget, input);

console.log(safeTarget.name);        // "Eve" (merged normally)
console.log(safeTarget.role);        // undefined (role was not merged into safeTarget itself)
console.log(Object.prototype.role);  // undefined (global prototype not polluted)

In the secure JavaScript code, safeMerge explicitly ignores __proto__ and constructor keys. The use of Object.prototype.hasOwnProperty.call is an extra precaution to avoid acting on inherited properties from the source. After merging, we see that safeTarget.name is set as expected, but the malicious role under __proto__ did not pollute anything: it was skipped entirely. Moreover, by initializing safeTarget with Object.create(null), we eliminate any inherited __proto__ property on the target side as well. The global Object.prototype remains untouched, and Object.prototype.role is still undefined, indicating that the attack was successfully blocked.

Java

Insecure Example (Java): While Java isn’t a prototype-based language, analogous issues occur through improper object binding. In this example, we use a simple data-binding scenario with a user JSON. The User class has a field isAdmin that is supposed to be controlled by the system, not the user. However, the code naively populates a User object from JSON using a hypothetical JSON parser or manual setting. If an attacker crafts the JSON to include "isAdmin": true, the resulting User object will have isAdmin set to true, potentially granting admin rights in the application.

public class User {
    public String name;
    public boolean isAdmin = false;
}

public class UserService {
    // Insecure: directly mapping JSON to an object with sensitive fields
    public User createUserFromJson(String jsonInput) {
        // Assume a JSON parsing that maps fields to the User class
        User user = new User();
        try {
            JSONObject data = new JSONObject(jsonInput);
            user.name = data.getString("name");
            if (data.has("isAdmin")) {
                // Developer mistakenly allows admin flag to be set from input
                user.isAdmin = data.getBoolean("isAdmin");
            }
        } catch (Exception e) {
            // handle parsing error
        }
        return user;
    }
}

// Attacker-provided JSON
String maliciousJson = "{ \"name\": \"Eve\", \"isAdmin\": true }";
User newUser = userService.createUserFromJson(maliciousJson);
System.out.println(newUser.name);     // Eve
System.out.println(newUser.isAdmin);  // true (compromised: attacker set this)

In the insecure Java example, the createUserFromJson method fails to enforce any policy on the isAdmin field. The code even explicitly checks if "isAdmin" is present and then sets it, which is a serious design flaw. This simulates the situation where a developer trusts JSON data to fill an object (for instance, using an ORM or JSON-binding library that automatically sets fields). The result is that a malicious JSON has turned a normal user into an admin. This is comparable to prototype pollution in effect – the attacker injected a property (admin status) that alters program logic – even though Java’s type system is different.

Secure Example (Java): A secure approach is to separate the data that can be set by users from the data that should be fixed by the system. One way is to use a transfer object or DTO that only contains allowed fields (e.g., name), and exclude sensitive fields like isAdmin entirely from the parsing logic. In this example, we only use the provided name and ignore any isAdmin value in the JSON. The isAdmin field is left at its default (false), or it could be set based on business logic (not user input).

public class User {
    public String name;
    public boolean isAdmin = false;
}

public class UserService {
    // Secure: only map expected fields, ignore or reject disallowed fields
    public User createUserFromJsonSafe(String jsonInput) {
        User user = new User();
        try {
            JSONObject data = new JSONObject(jsonInput);
            user.name = data.getString("name");
            // Notice: we do NOT copy isAdmin from input at all
        } catch (Exception e) {
            // handle parsing error
        }
        // isAdmin remains false unless set by application logic elsewhere
        return user;
    }
}

// Attacker-provided JSON with extra fields
String maliciousJson = "{ \"name\": \"Eve\", \"isAdmin\": true }";
User newUser = userService.createUserFromJsonSafe(maliciousJson);
System.out.println(newUser.name);     // Eve
System.out.println(newUser.isAdmin);  // false (input 'isAdmin' was ignored)

The secure Java version ignores the isAdmin field completely. By not even attempting to set user.isAdmin from the JSON, the code ensures that admin privileges cannot be granted via input. In real applications, one might use libraries like Jackson with configurations to fail on unknown properties or to use explicit annotations (@JsonIgnore on sensitive fields). The key takeaway is that the binding process should strictly limit which fields are populated from external input. In effect, we treat any extraneous property in JSON as a potential attack and simply ignore it or reject the input, thus preventing manipulation of critical flags or attributes.

.NET/C#

Insecure Example (.NET/C#): In ASP.NET (and similar frameworks), model binding can automatically populate properties of an object from request data. This example shows an MVC controller method that takes a UserProfile model from an HTTP POST. The UserProfile contains an IsAdmin property meant only for internal use. However, the code as written will trust whatever is in the posted form or JSON. An attacker could craft a request that includes IsAdmin=true and thereby have the model binder set the IsAdmin property to true. The code then persists this change, effectively allowing privilege escalation.

public class UserProfile {
    public string Name { get; set; }
    public bool IsAdmin { get; set; }  // should not be set via user input
}

public class AccountController : Controller {
    [HttpPost]
    public IActionResult UpdateProfile(UserProfile input) {
        // Insecure: directly trusting bound input including IsAdmin
        UserProfile currentUser = database.GetUser(User.Identity.Name);
        currentUser.Name = input.Name;
        currentUser.IsAdmin = input.IsAdmin;  // Attackers can set this via input!
        database.Save(currentUser);
        return RedirectToAction("ProfileUpdated");
    }
}

In the insecure C# example, the UpdateProfile action does not differentiate between safe and unsafe fields coming from the user. The framework will bind any form field or JSON field named Name or IsAdmin to the input object. If a malicious user (or an intercepting proxy) adds IsAdmin=true in the request, the input.IsAdmin will be true. By copying it to currentUser.IsAdmin and saving, the application would unknowingly promote that user to an admin role. This is an instance of the classic “over-posting” or mass assignment vulnerability in web frameworks, analogous to prototype pollution in that arbitrary properties from the client are affecting server-side object state.

Secure Example (.NET/C#): To fix this, the server-side code must ignore or explicitly disallow binding of sensitive fields. One solution is to use a view-model or DTO that doesn’t include the IsAdmin property at all for the data coming from the client. Alternatively, one can use binding attributes to exclude it. In this secure example, we assume the UserProfileUpdateDto contains only the properties that a user is allowed to change (just Name). The controller only uses that DTO for binding. The IsAdmin property is never bound from the client and remains under server control.

public class UserProfileUpdateDto {
    public string Name { get; set; }
    // No IsAdmin field here, it won't be bound from client input
}

public class AccountController : Controller {
    [HttpPost]
    public IActionResult UpdateProfile(UserProfileUpdateDto input) {
        UserProfile currentUser = database.GetUser(User.Identity.Name);
        // Secure: only update legitimate fields
        currentUser.Name = input.Name;
        // currentUser.IsAdmin is not touched by client input
        database.Save(currentUser);
        return RedirectToAction("ProfileUpdated");
    }
}

In the secure C# code, the client can only submit a Name value. Even if an attacker manually adds IsAdmin to the request, the model binder will ignore it because UserProfileUpdateDto has no such property. The server logic never blindly trusts an incoming IsAdmin flag. If the application needs to set or change admin status, it would be done through a separate administrative pathway that normal users cannot invoke. This approach ensures that user input cannot directly alter security-critical fields. It demonstrates the principle of least authority in data binding: the input DTO grants no ability to set admin privileges, thus nullifying that attack vector.
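The same allow-list principle carries over directly to JavaScript back ends, where a parsed request body is often copied onto a model object. The sketch below (field names and helper are hypothetical, not from any framework) copies only explicitly permitted keys onto a prototype-less object, so neither a privileged field nor a __proto__ key can reach the model:

```javascript
// Hypothetical allow-list helper: copy only permitted keys from a parsed
// request body onto an object with no prototype to pollute.
const ALLOWED_PROFILE_FIELDS = ['name', 'email'];

function pickAllowed(body, allowedKeys) {
  const out = Object.create(null); // no prototype chain at all
  for (const key of allowedKeys) {
    if (Object.prototype.hasOwnProperty.call(body, key)) {
      out[key] = body[key];
    }
  }
  return out;
}

const body = JSON.parse('{"name": "alice", "isAdmin": true, "__proto__": {"x": 1}}');
const safe = pickAllowed(body, ALLOWED_PROFILE_FIELDS);
// safe carries only "name"; "isAdmin" and "__proto__" are simply dropped
```

As with the DTO approach, the allow list grants no way to set privileged fields, so an injected isAdmin or __proto__ key never reaches server-side state.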

Pseudocode

Insecure Example (Pseudocode): The following pseudocode illustrates a generic unsafe pattern. A function mergeConfigs performs a recursive (deep) merge of a user-supplied configuration object into a default configuration object: for each key in userConfig, nested objects are merged recursively and scalar values are copied onto defaultConfig. An attacker’s input includes a __proto__ key; because defaultConfig["__proto__"] resolves to the shared Object.prototype, the recursive merge writes the attacker’s nested values directly onto it. The effect is that after merging, the global default settings are altered in an unexpected way (in this pseudocode, the maximum login attempts inherited by all objects gets reduced to 1 via the polluted prototype).

function mergeConfigs(defaultConfig, userConfig):
    for each key in userConfig:
        if userConfig[key] is an object and defaultConfig[key] is an object:
            mergeConfigs(defaultConfig[key], userConfig[key])  // recursive deep merge
        else:
            defaultConfig[key] = userConfig[key]
    return defaultConfig

// Default application settings
defaultConfig = { maxLoginAttempts: 5, timeout: 300 }

// Attacker-provided config tries to pollute prototype
userConfig = { "__proto__": { "maxLoginAttempts": 1 } }

merged = mergeConfigs(defaultConfig, userConfig)
print(merged.maxLoginAttempts)        // 5 (own property still intact)
print(defaultConfig.maxLoginAttempts) // 5 (defaultConfig not directly changed)

// But the prototype of defaultConfig is now polluted:
newUserConfig = {}
print(newUserConfig.maxLoginAttempts) // 1 (inherited from polluted Object.prototype!)

In this insecure pseudocode example, the developer’s intention was to let users override some configuration values, but they did not anticipate the __proto__ key. The result is subtle: the defaultConfig object still shows its own maxLoginAttempts as 5 (since it had that property set explicitly), so on the surface it might look okay. However, because the merge treated __proto__ as an ordinary key, it recursed into Object.prototype, the prototype that defaultConfig (and nearly every other object) inherits from, and wrote the attacker’s value there. The new object newUserConfig, with no own properties, now inherits maxLoginAttempts: 1. Essentially, the application’s notion of the “default” maximum login attempts has been globally corrupted to 1 via the prototype. This could allow an attacker to bypass security controls, for example by causing accounts to lock out after a single failed attempt, or, if the polluted property were something like require2FA set to false, by disabling two-factor authentication checks in logic.

Secure Example (Pseudocode): The secure pseudocode demonstrates a defensive approach. We incorporate checks to prevent prototype pollution and use a safe object creation for merging. The function secureMergeConfigs filters out any keys that look suspicious (in a real implementation, this would probably be a list of blocked keys like __proto__, prototype, constructor). It also creates the target object as a new plain object (or one without a prototype). This way, even if a malicious key slipped through, it wouldn’t have a lasting effect on global state.

function secureMergeConfigs(defaultConfig, userConfig):
    secureTarget = createObjectWithoutPrototype()
    for each key in userConfig:
        if key == "__proto__" or key == "constructor" or key == "prototype":
            continue  // ignore prototype pollution attempt
        if key in defaultConfig: 
            secureTarget[key] = userConfig[key]
    for each key in defaultConfig:
        if key not in secureTarget:
            secureTarget[key] = defaultConfig[key]
    return secureTarget

defaultConfig = { maxLoginAttempts: 5, timeout: 300 }
userConfig = { "__proto__": { "maxLoginAttempts": 1 }, "timeout": 100 }

mergedSafe = secureMergeConfigs(defaultConfig, userConfig)
print(mergedSafe.maxLoginAttempts)  // 5 (unchanged default)
print(mergedSafe.timeout)          // 100 (user override applied safely)

// Prototype remains unpolluted globally:
obj = {}
print(obj.maxLoginAttempts)        // undefined (no global pollution)

In the secure pseudocode, secureMergeConfigs uses multiple layers of defense. It skips keys known to be used in prototype pollution. It also only allows overriding keys that exist in the defaultConfig (enforcing an allow-list of known config options). This means an attacker cannot introduce a completely new config field via input – if it's not in the default set, it’s ignored. Finally, by populating a fresh object (secureTarget) and then perhaps replacing the old config with it, we avoid modifying the original object’s prototype. After running this merge, the resulting mergedSafe has timeout set to 100 (as the user wanted to override that), but maxLoginAttempts remains the default 5. The malicious __proto__ input had no effect on the prototype chain. Any attempt to access maxLoginAttempts on a new empty object still yields undefined, not 1, confirming that we successfully thwarted the attack.
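Translated into JavaScript, the defensive merge above might look like the following sketch (names are illustrative, and this is a teaching example rather than a vetted library):

```javascript
// Defensive merge: blocklist dangerous keys, allow-list known options,
// and build the result on a prototype-less object.
const BLOCKED_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function secureMergeConfigs(defaults, userConfig) {
  const target = Object.create(null); // no prototype to pollute
  for (const key of Object.keys(defaults)) {
    target[key] = defaults[key];
  }
  for (const key of Object.keys(userConfig)) {
    if (BLOCKED_KEYS.has(key)) continue; // drop pollution attempts
    if (!(key in defaults)) continue;    // only known config options
    target[key] = userConfig[key];
  }
  return target;
}

const defaults = { maxLoginAttempts: 5, timeout: 300 };
const userConfig = JSON.parse('{"__proto__": {"maxLoginAttempts": 1}, "timeout": 100}');
const merged = secureMergeConfigs(defaults, userConfig);
// merged.timeout is 100, merged.maxLoginAttempts stays 5,
// and Object.prototype is untouched
```

Note that JSON.parse creates __proto__ as an ordinary own property, so Object.keys does return it here; the blocklist then discards it before it can be assigned anywhere dangerous.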

Detection, Testing, and Tooling

Detecting prototype pollution vulnerabilities can be challenging because they often do not produce obvious error logs or stack traces when they occur – the application simply behaves differently. However, there are both static and dynamic approaches to find and diagnose these issues.

Static Analysis: Security-focused static analysis tools and linters can be tuned to catch common patterns that lead to prototype pollution. For example, linters or code scanners can flag instances of object merging or property assignment that use dynamic keys. Custom rules can be written to detect code like for (key in obj) target[key] = obj[key] or use of Object.assign on untrusted data. GitHub’s CodeQL, for instance, has been used in academic research to successfully detect prototype pollution sources and even trace them to “gadget” usage that could lead to RCE (arxiv.org) (arxiv.org). Many SAST (Static Application Security Testing) tools now include checks for this vulnerability, especially for JavaScript code. When using static analysis, it’s important to have it analyze your dependencies as well – patterns in third-party libraries might be flagged if you include source, or you might rely on known vulnerability databases for minified bundles. Additionally, reviewing dependency code (or at least reading security bulletins for them) is advisable: for example, checking if your version of Lodash or jQuery is known to be vulnerable to prototype pollution (tools like npm audit or OWASP Dependency-Check will alert on known CVEs). As a simpler static check during code review, developers can search for suspicious tokens like “proto” in the codebase – unless there is a very good reason, no production code should contain references to the __proto__ property. Similarly, look for uses of constructor.prototype in contexts that take user input.

Dynamic Analysis and Fuzzing: On the dynamic side, security testing tools can attempt to inject prototype pollution payloads and observe application behavior. Web vulnerability scanners (like Burp Suite, OWASP ZAP) can be configured with fuzz strings that include __proto__ patterns in parameters. For instance, a scanner might try requests like ?__proto__[test]=123 or JSON bodies with {"__proto__": {"test": 123}} to see if the application responds unusually or if a subsequent request observes the injected value. One challenge is that the effects might not be immediately visible in the HTTP response. Some specialized techniques help here: for client-side apps, there are browser extensions (such as the open-source ppscan tool (portswigger.net)) that monitor the runtime after injecting payloads, trying to catch if a global object got polluted. These tools often hook into JavaScript in the browser to detect if the prototype of Object was modified (for example, by checking for new properties on Object.prototype after certain actions or using the browser’s developer APIs). For server-side, a dynamic tester might send a sequence: first a polluting payload, then another request to check for a side-effect. As an example, the tester could send a JSON with {"__proto__": {"polluted": "yes"}} to an endpoint (which if vulnerable might silently succeed), and then send a second benign request that, say, asks the server to echo some object; if the word “yes” appears where it shouldn’t, it’s an indicator of pollution. Security researchers have also created instrumentation that logs whenever prototypes are written to – for Node, one could monkey-patch the __proto__ setter to log a warning (in a test environment) whenever it’s invoked.
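The monkey-patching idea mentioned above can be sketched in a few lines for a Node.js test environment. This is diagnostic instrumentation only, not a production control, and it catches only writes that go through the inherited accessor (JSON.parse payloads and Object.defineProperty bypass it):

```javascript
// Wrap the inherited __proto__ accessor so any assignment through it is
// logged. Useful in a test environment to surface code paths (including
// in dependencies) that rewrite prototypes at runtime.
const protoDesc = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__');

Object.defineProperty(Object.prototype, '__proto__', {
  configurable: true,
  enumerable: false,
  get: protoDesc.get,
  set(value) {
    console.warn('[proto-monitor] __proto__ assignment detected');
    return protoDesc.set.call(this, value);
  },
});

({}).__proto__ = { demo: true }; // triggers the warning
```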

Fuzzing and Unit Tests: Incorporating malicious-case testing into development can catch prototype pollution early. Developers can write unit tests for any function that merges objects or handles structured input: for example, a test could supply an object with __proto__ key to a parsing function and then assert that Object.prototype was not modified. If it was, the test fails. This sort of test acts as a canary for the vulnerability. Fuzz testing frameworks can also be pointed at your JSON or parameter handling code to generate many combinations of nested __proto__ or constructor keys to ensure none slip through validation. Recent research has provided collections of payload variations (e.g., using different unicode encodings of __proto__ or alternate notations) which can be included in a fuzz dataset to test the robustness of filters (labs.withsecure.com).

Specialized Tooling: The security community has begun developing tools specifically aimed at identifying prototype pollution. Apart from browser extensions for client-side detection, there are repository-scanning tools and IDE plugins. For example, some GitHub bots or CI pipelines can utilize scripts to scan JavaScript files for vulnerable patterns automatically on pull requests (acting as a quality gate). In enterprise settings, one might integrate static code analysis rules from CERT or TSCS (TypeScript secure coding standards) that forbid usage of certain object operations on untrusted data. Monitoring tools can also play a role in detection: for instance, in Node.js, one could potentially instrument the runtime to throw an exception when a global prototype is modified at runtime (except during an initialization phase). Node’s experimental --disable-proto=throw flag (as opposed to delete) is intended to abort operations using __proto__. Although not widely used in production due to compatibility issues, such flags and instrumentation can be invaluable in a testing environment to make sure no code path (including in third-party libraries) is inadvertently using __proto__.

In summary, detection of prototype pollution requires a mix of scanning code for vulnerable patterns, testing application endpoints with malicious inputs, and potentially instrumenting the environment to catch any unexpected prototype modifications. Given that prototype pollution is an emerging concern, teams should keep an eye on new tools and research in this space. For example, the research community’s work on dynamic taint analysis (arxiv.org) and gadget detection provides insights that may eventually be integrated into mainstream security testing tools.

Operational Considerations (Monitoring and Incident Response)

From an operational security perspective, monitoring for prototype pollution exploitation and having an incident response plan are important, especially for server-side scenarios where the impact can be critical.

Monitoring and Logging: Applications should log unusual input keys and structure. For instance, server-side logging can be configured to record unexpected keys in JSON payloads or query parameters. If your application receives a key literally named "__proto__" or "constructor" in a place where it doesn’t belong, this should be treated as a suspicious event. Web application firewalls (WAFs) can be tuned or rules deployed to detect such patterns in requests. Many WAFs and intrusion detection systems have signatures for common prototype pollution payloads, like the presence of __proto__ or its URL-encoded bracketed form %5B__proto__%5D in HTTP parameters. While blocking by WAF is not foolproof (attackers can obfuscate the payload), it adds a layer of defense and gives security teams visibility. On the client side, it’s more difficult to monitor (as the attack happens in the user’s browser), but user reports or error telemetry might give clues – for example, if a certain page consistently throws JavaScript errors or behaves strangely when a certain URL parameter is present, that could hint at a prototype pollution attempt causing issues.
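Such screening can also be done at the application layer before a parsed body reaches business logic. A sketch of a recursive check over a parsed request body (the key list and function name are illustrative, and this is for logging and alerting rather than as the sole defense):

```javascript
// Walk a parsed body and report whether any level contains a key commonly
// used in prototype pollution payloads. "constructor" can appear in
// legitimate data, so treat hits as signals to log, not hard blocks.
const SUSPICIOUS_KEYS = ['__proto__', 'constructor', 'prototype'];

function containsSuspiciousKey(value) {
  if (value === null || typeof value !== 'object') return false;
  for (const key of Object.keys(value)) {
    if (SUSPICIOUS_KEYS.includes(key)) return true;
    if (containsSuspiciousKey(value[key])) return true;
  }
  return false;
}

containsSuspiciousKey(JSON.parse('{"user": {"__proto__": {"x": 1}}}')); // true
```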

For Node.js applications, consider implementing an application-level monitor: since adding to Object.prototype is rarely done legitimately at runtime, one could hook into that. For example, at application startup, you could snapshot the properties of Object.prototype. Periodically (or upon certain triggers), check if new properties have appeared on Object.prototype. If so, that’s a red flag that pollution has occurred. This check could be done in a low-overhead way (a simple property count or comparing against a known set) and then log an alert if something new is detected. While this might not catch the pollution before it does damage (it detects after the fact), it can help identify that an attack succeeded, prompting an incident response. Similarly, in a browser context, a web application could run a self-check in development or test modes to ensure no unexpected global properties. However, in production, such checks could be costly or interfere with normal function, so they are more suitable for a monitored debug mode or a security diagnostics endpoint.
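A minimal version of such a snapshot check might look like this (the interval and alerting mechanism are placeholders for whatever your operations stack uses):

```javascript
// Snapshot Object.prototype at startup, then periodically diff against it.
// This detects pollution after the fact; it is for alerting, not prevention.
const baseline = new Set(Object.getOwnPropertyNames(Object.prototype));

function checkPrototypeIntegrity() {
  const added = Object.getOwnPropertyNames(Object.prototype)
    .filter(name => !baseline.has(name));
  if (added.length > 0) {
    console.error('ALERT: new Object.prototype properties detected:', added);
  }
  return added;
}

// e.g. run every 60 seconds in a monitored service:
// setInterval(checkPrototypeIntegrity, 60000).unref();
```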

Incident Response: If a prototype pollution vulnerability is discovered in your application (either via an external report or internal finding), the response should be swift due to the high severity potential. First, assess the scope: determine which parts of the application are affected (client, server, or both). If it’s due to a library, identify all applications or services using that library version. The immediate mitigation (if a full fix or deploy is not instantaneous) might involve implementing an input filter at a higher level – for instance, deploying a hotfix in the request handling layer to drop malicious keys, or using a WAF rule to block them. This can buy time while a proper code fix is developed and tested.

If active exploitation is suspected or detected (e.g., you find Object.prototype polluted with strange properties in a running system), treat it as a security incident. In a server scenario with possible RCE, you should assume the system may be compromised. Initiate incident response procedures: capture forensic data (memory dumps, logs around the event), isolate the system (take potentially compromised servers out of rotation), and ultimately rebuild or restart them after cleaning up (since a polluted prototype in memory will persist until the process is restarted or the property is deleted manually). Investigate how far the attacker got – for instance, if logs show an isAdmin property being set, check subsequent logs for any actions taken by that admin-level account. Or if a payload was aiming for XSS (<script> injection via a prototype gadget), see if there were follow-up requests that indicate a successful XSS (like known malicious domain access or new user sessions created).

For client-side incidents (e.g., an XSS via prototype pollution on the web application), incident response might involve analyzing web logs to identify if a malicious link was broadly circulated or clicked by users. If the application is a single-page app and can be patched client-side, issue an urgent update to sanitize inputs or freeze prototypes. In parallel, communicate with users if necessary (for instance, advising them not to click on suspicious links or clearing any data if needed).

Operationally, after addressing an incident, feed the learnings back into development: update threat models, add regression tests for that scenario, and improve monitoring. If the prototype pollution came through a third-party library, it might trigger a review of dependency update policies or perhaps locking down certain dangerous capabilities even within libraries (for example, you might decide to always freeze prototypes in your app as a rule, to prevent any library from accidentally doing harm).

In summary, monitoring for signs of prototype pollution involves looking for the tell-tale markers of an attack in logs and memory, and responding involves both technical remediation (patching the code, flushing out polluted state by restarting services, etc.) and possibly broader security incident handling if data or systems were compromised. Because this vulnerability can escalate quickly to serious breaches, treating any confirmed prototype pollution as an incident is prudent.

Checklists

Ensuring protection against prototype pollution should be part of the software development lifecycle. The following considerations can serve as a mental checklist during development, build, and deployment phases, as well as during code reviews.

Build-Time Practices

During development and build time, developers and security engineers should integrate controls to prevent and detect prototype pollution. This involves incorporating security tooling and guidelines early in the process. For instance, define secure coding standards that explicitly forbid unsanitized use of dynamic object properties – this can be included in team guidelines or a secure coding checklist document. Adopt linters or static analysis in the build pipeline with rules targeting prototype pollution patterns (many modern static analysis tools allow custom rules or have built-in checks for this vulnerability). If using JavaScript/TypeScript, consider enabling ESLint rules or TypeScript compiler options that might catch unsafe typings or the usage of any in places where structured types are expected (since using any to merge objects can hide dangerous behavior). Additionally, dependency management is crucial: at build time, use tools like npm audit or OWASP Dependency-Check to fail the build if a dependency with a known prototype pollution CVE is present. This helps catch vulnerable libraries (for example, an outdated Lodash or jQuery version) before the code is shipped. When designing build pipelines, include security tests – for example, have a suite of unit tests (as mentioned earlier) that specifically test for prototype pollution in critical components, and run these as part of continuous integration. The OWASP Application Security Verification Standard (ASVS) can be used during design and build as a reference: ASVS sections on input validation and output encoding are particularly relevant; verifying compliance with those (e.g., ASVS 5.1.x for allow-list input validation) will inherently address prototype pollution concerns if followed.

Runtime Considerations

At runtime, the application environment can be configured to reduce the risk or impact of prototype pollution. In Node.js, deploy with the --disable-proto=delete flag if possible, especially for back-end services that don’t need to support legacy __proto__ usage (cheatsheetseries.owasp.org). This simple flag removes one major avenue of attack. Similarly, if your application logic permits, call Object.freeze(Object.prototype) at the very beginning of your program (for web front-ends, this can be done before app initialization, and for Node, right after your imports and before handling any requests) (labs.withsecure.com). Freezing the prototype is a drastic measure but highly effective at runtime – it ensures no library or code later (malicious or accidental) can modify the base object prototype. Monitor performance and compatibility in a staging environment when using freezing, because some frameworks might inadvertently try to extend prototypes and could break.
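Applied at process startup, the freezing approach looks like the sketch below. Which prototypes to freeze beyond Object.prototype is an application decision, and the whole measure should be validated in staging first:

```javascript
// Run once, before any request handling or untrusted input is processed.
Object.freeze(Object.prototype);
Object.freeze(Array.prototype);
Object.freeze(Function.prototype);

// After freezing, a pollution attempt has no effect: the write below is
// silently ignored in sloppy mode and throws a TypeError in strict mode.
try {
  Object.prototype.polluted = true;
} catch (e) {
  // strict mode: the frozen prototype rejects the write loudly
}
console.log({}.polluted); // undefined either way
```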

During deployment, ensure that any client-side code (if you deploy JavaScript to browsers) is built in production mode which often strips out development utilities that might inadvertently expose prototype manipulation functions. Also consider applying content security policies (CSP) in web apps; while CSP doesn’t directly stop prototype pollution, it can prevent an attacker who achieved an XSS via prototype pollution from easily loading external scripts or exfiltrating data. This limits the exploit’s impact at runtime. On the server, employ containerization and sandboxing to mitigate the damage of a potential RCE. For instance, running Node.js processes with Docker seccomp profiles or in a least-privileged container can ensure that even if an attacker gets code execution via a polluted prototype gadget, they cannot easily escalate to the host or other systems.

Finally, runtime should be accompanied by observability: ensure that your logging (as described earlier) is collecting enough information. Application performance monitoring (APM) tools might also catch anomalies – for example, if a prototype pollution triggers a memory leak or unusual CPU usage, APM alerts could hint at it (though they won’t directly say “prototype pollution”, any unexplained change in behavior is worth investigating). In summary, runtime defenses are about hardening the environment (using flags and freezing), constraining what exploits can do (via sandboxing and CSP), and keeping an eye on the application’s health and behavior for any irregularities.

Code Review and Testing

When reviewing code for security (or conducting a security-focused test), explicitly look for patterns that may lead to prototype pollution. This means examining any code that deals with object merging, cloning, extending, or copying. If you see something like a manual merge function or use of the spread syntax {...obj} on untrusted data, question whether it could introduce prototype keys. Ask developers to justify that either the input is sanitized or the object involved is not a normal prototype-inheriting object. If the code uses built-in or third-party merge utilities (like Object.assign, _.merge, etc.), check the versions and whether they handle prototype keys safely; if not, insist on upgrading or adding a wrapper to filter input.

During reviews, also inspect any data transfer objects or binding logic. In frameworks (Rails, ASP.NET, etc.), ensure that models used for binding exclude sensitive fields (as in the .NET example using DTOs). Look at JSON parsing: if using something like JSON.parse directly on request body and then using the result in an object, that’s a red flag – there should be a validation step in between. Good code reviews for prototype pollution often involve a bit of adversarial thinking: “If I were an attacker, where could I slip __proto__ into this flow?” Follow the data flow: from the entry point (HTTP request, form input, etc.) through the processing – find places where keys of an object are iterated or assigned.
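One lightweight validation step a reviewer can ask for is a JSON.parse reviver that drops dangerous keys before the parsed object reaches any merge logic. A sketch (the blocked-key list may need to grow with your threat model, and the function name is ours):

```javascript
// Reviver-based filtering: when a reviver returns undefined, JSON.parse
// deletes that property from the result, so __proto__/constructor/prototype
// keys never survive parsing at any nesting depth.
const BLOCKED = new Set(['__proto__', 'constructor', 'prototype']);

function safeJsonParse(text) {
  return JSON.parse(text, (key, value) =>
    BLOCKED.has(key) ? undefined : value
  );
}

const parsed = safeJsonParse('{"name": "a", "__proto__": {"isAdmin": true}}');
// parsed contains only "name"
```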

Penetration testers should include prototype pollution in their test plan. This might involve using a proxy to modify requests and inserting payloads in various parameters (including nested JSON structures). Test both typical user input vectors (like form fields, API JSON fields) and less obvious ones – for example, hidden fields, browser storage (could an attacker put a malicious object in window.localStorage if the app reads from it?), or inter-service messages. Additionally, test the aftermath: if a potential pollution vector is found, see if it actually has an effect by probing application behavior or data. The absence of an immediate error doesn’t mean the payload did nothing; sometimes the effect might only trigger under certain conditions (e.g., when a certain piece of code runs later). Therefore, tests may combine actions: one request to pollute, another to trigger the gadget.
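The pollute-then-trigger sequence can be scripted. The sketch below assumes hypothetical /api/config and /api/echo endpoints and the global fetch available in Node 18+; adapt the URLs and marker to the target:

```javascript
// Two-step probe: send a polluting payload, then look for its side effect
// in a later response. A hit strongly suggests server-side pollution.
const POLLUTE_BODY = '{"__proto__": {"pollutedProbe": "canary-0xPP"}}';

async function probePollution(baseUrl) {
  await fetch(`${baseUrl}/api/config`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: POLLUTE_BODY,
  });
  const res = await fetch(`${baseUrl}/api/echo`);
  const text = await res.text();
  return text.includes('canary-0xPP'); // true => injected value leaked back
}
```

The payload is built as a raw string on purpose: writing { __proto__: {...} } as a JavaScript object literal would set the literal’s prototype instead of creating an own key, and JSON.stringify would then emit an empty object.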

As part of the review checklist, ensure that all team members are aware of this vulnerability type. Often, education is the best preventative check – if developers know about prototype pollution, they are more likely to design and implement defenses from the start. Resources like the OWASP Cheat Sheet and the references in this article can be shared within the team for that purpose. In summary, code review and testing against prototype pollution involve a keen eye for unsafe patterns and an intentional attempt to simulate the attack during testing, ensuring that any weakness is identified and corrected before deployment.

Common Pitfalls and Anti-Patterns

Despite growing awareness, developers still fall into some common pitfalls regarding prototype pollution:

One frequent mistake is assuming that JSON or object input is harmless as long as it doesn’t contain executable code. Developers might diligently filter script tags or SQL keywords from input, but fail to consider that even property names can be vectors. Prototype pollution is essentially a form of code-less injection – it doesn’t directly inject script, but it tampers with the execution context. Trusting data structures too much is an anti-pattern; even keys and object shapes from the client need validation.

Another pitfall is partial filtering. For example, after learning about __proto__ issues, a developer might put in a quick fix to drop any property literally named "__proto__". This is a start but not sufficient. Attackers can defeat a naive single-pass filter by nesting the string (a key like "__pro__proto__to__" becomes "__proto__" again after one round of stripping), or, more reliably, reach the prototype via the constructor.prototype approach as mentioned. Also, if deep object merging is in play, an attacker might nest the dangerous key deeper in the object where a shallow filter might not catch it. An anti-pattern is to chase specific bad strings (blacklist approach) rather than designing a robust allow-list or structural validation.
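To see why filtering the literal string alone is insufficient, consider this sketch of a deep merge that blocks only "__proto__" (run it in a scratch environment; it deliberately pollutes the prototype and then cleans up):

```javascript
// A deep merge with an incomplete blocklist. Treating functions as
// mergeable (as some utility libraries have done) opens the
// constructor.prototype path even though "__proto__" itself is blocked.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (key === '__proto__') continue; // blocks only one spelling of the attack
    const s = source[key];
    const t = target[key];
    if (s !== null && typeof s === 'object' &&
        t !== null && (typeof t === 'object' || typeof t === 'function')) {
      naiveMerge(t, s); // walks into target.constructor (a function) and on
                        // into constructor.prototype === Object.prototype
    } else {
      target[key] = s;
    }
  }
  return target;
}

const payload = JSON.parse('{"constructor": {"prototype": {"polluted": "yes"}}}');
naiveMerge({}, payload);

console.log({}.polluted); // "yes": Object.prototype was polluted anyway
delete Object.prototype.polluted; // undo the demonstration
```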

Using outdated or insecure libraries is another issue. Many prototype pollution cases have originated from utility libraries (like deep merge functions, or query string parsers). An anti-pattern is neglecting to update these dependencies. In some teams, there is hesitance to upgrade for fear of breaking something; however, not upgrading can leave known holes open. Modern package management and monitoring can largely automate this, so not taking advantage of those tools is a mistake. It’s important to not only bump the library version but also verify that the vulnerability is indeed resolved and no new ones are introduced.

Monkey-patching prototypes in application code is a subtle anti-pattern relevant here. Sometimes developers extend JavaScript’s built-in prototypes (like adding methods to Object.prototype or Array.prototype) for convenience. This is widely considered a bad practice in general, but in security terms it can confuse matters. If your codebase is already modifying prototypes intentionally, it becomes harder to distinguish a malicious modification from an intentional one, and you might unintentionally create gadget-like conditions. For example, adding a new method to Object.prototype that is widely used could become a gadget if an attacker can influence an input to call that method in an unsafe way. It’s far safer to keep prototypes untouched; use utility functions or modern language features instead of patching global prototypes.

Another common pitfall is in framework configuration. Many frameworks have settings to protect against mass assignment or prototype pollution, but they might be turned off for backward compatibility. An example: some Node.js frameworks had to update their body parsers to drop __proto__, but if an application sticks to an old version or disables that feature, it’s at risk. Similarly, in something like Ruby on Rails, strong parameters must be used to avoid mass assignment – forgetting to use that is analogous to what we showed in other languages with admin fields being set. Essentially, neglecting the security features of frameworks (or not being aware of them) is an anti-pattern. Secure defaults are more common now, but developers porting older code or copying snippets from old tutorials might inadvertently reintroduce unsafe patterns.

Finally, a general anti-pattern is not considering property names as part of the input validation scope. Many security validations focus on values (e.g., length of a string, characters allowed in a field value). But in prototype pollution, the keys themselves are the payload. A thorough input validation routine should validate or at least vet keys if the input is a key-value structure. Not doing so is a conceptual gap. For instance, allowing arbitrary keys in a JSON API “because the client is just sending object data” can be dangerous. The safe pattern is to treat keys with the same level of skepticism as values.

In summary, avoiding these pitfalls means: treat all parts of input (including object keys) as potentially malicious, comprehensively filter or validate rather than patching one string at a time, keep your tools and libraries up-to-date, avoid practices that blur the lines between legitimate and illegitimate prototype changes, and leverage framework protections to the fullest. Awareness and secure coding discipline are the antidotes to these anti-patterns.

References and Further Reading

OWASP Prototype Pollution Prevention Cheat Sheet – Official OWASP guidance on prototype pollution, including an explanation of the risk and recommended defensive techniques (such as using Object.create(null), freezing prototypes, and filtering input).

PortSwigger Web Security Academy: Prototype Pollution – Educational resource with an overview of prototype pollution vulnerabilities. Includes interactive labs for both client-side and server-side prototype pollution, illustrating how the attacks work in practice and how to prevent them.

PortSwigger Daily Swig – “Prototype pollution: The dangerous and underrated vulnerability impacting JavaScript applications” (2020) – Article by Ben Dickson featuring insights from security researchers on what prototype pollution is, why it had been overlooked, and its potential impact (XSS, RCE, etc.), with real-world examples like jQuery and Kibana vulnerabilities.

PortSwigger Daily Swig – “Prototype pollution vulnerabilities rife among high-traffic websites, study finds” (2021) – News piece outlining the findings of researchers who scanned popular websites for prototype pollution flaws. Discusses common vectors (unsanitized query parameters), the relative lack of awareness, and tools developed for detection (including a browser extension for finding pollution sinks).

Silent Spring: Prototype Pollution Leads to Remote Code Execution in Node.js (Shcherbakov et al., 2022) – Academic research paper that systematically examines prototype pollution in Node.js. The authors developed static and dynamic analysis techniques to find pollution sources and “gadgets,” ultimately demonstrating end-to-end exploits (including RCE) in real applications like the npm CLI and Parse Server. This paper underscores the severity of prototype pollution and provides deeper technical context.

WithSecure Labs – “Prototype Pollution Primer for Pentesters and Programmers” (2022) – A detailed blog series exploring prototype pollution. It covers basics of prototypes, how pollution can be introduced, techniques for finding and exploiting gadgets, and mitigation strategies (such as prototype freezing and input validation). This resource is useful for both understanding the attacker perspective and defensive measures.

OWASP ASVS 4.0 – Application Security Verification Standard – While not focused on prototype pollution specifically, ASVS provides a comprehensive list of security requirements. Relevant controls include sections on input validation (which implicitly covers preventing unsanctioned data in object structures) and output encoding. Ensuring compliance with these requirements (e.g., only allowing defined object properties from external input) helps prevent prototype pollution vulnerabilities.
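An allowlist approach of the kind ASVS's input-validation requirements point toward, where only explicitly named fields from external input ever reach an application object, might look like the following sketch. The `pickAllowed` helper and the field names are illustrative, not part of ASVS:

```javascript
// Allowlist-based object construction: only fields named in the schema are
// copied from untrusted input, so keys like "__proto__" or "constructor"
// are dropped by construction rather than by blocklisting.
const ALLOWED_FIELDS = ["username", "email", "locale"]; // illustrative schema

function pickAllowed(untrusted) {
  const result = Object.create(null); // null-prototype object: nothing to pollute
  for (const field of ALLOWED_FIELDS) {
    if (Object.prototype.hasOwnProperty.call(untrusted, field)) {
      result[field] = untrusted[field];
    }
  }
  return result;
}

const body = JSON.parse('{"username":"alice","__proto__":{"isAdmin":true}}');
const user = pickAllowed(body);
console.log(Object.keys(user)); // ["username"]
console.log({}.isAdmin);        // undefined — the prototype was never touched
```

Note that `JSON.parse` alone does not pollute anything; the `__proto__` key sits inert on `body` until some merge or assignment copies it, which the allowlist prevents.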

Imperva – “What is Prototype Pollution? Risks & Mitigation” – An overview from Imperva’s security learning portal that explains prototype pollution in accessible terms. It describes how the vulnerability works in JavaScript, the potential impacts, and general mitigation approaches. A good introductory read for developers new to the topic, reinforcing the need for proper input handling and up-to-date libraries.

GitHub – BlackFan/client-side-prototype-pollution – Repository maintained by security researcher Sergey Bobrov (BlackFan) that lists known client-side prototype pollution gadgets and payloads. It’s a useful resource for understanding the various ways prototype pollution can manifest and how an attacker might leverage different properties or JavaScript APIs as exploitation gadgets once the prototype is polluted.
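The "gadget" idea the repository catalogues, i.e. existing code that reads a property the attacker can supply via the prototype chain instead of the object itself, can be shown with a toy example. The `buildTag` function and its fallback logic are invented for illustration; real gadgets live in libraries and frameworks:

```javascript
// A "gadget": code that falls back to a default when an option is absent.
// The lookup options.className walks the prototype chain, so after pollution
// the "missing" option is silently supplied by the attacker.
function buildTag(options) {
  const cls = options.className || "default";
  return `<div class="${cls}">`;
}

console.log(buildTag({})); // <div class="default">

// Simulated pollution (in a real attack this arrives via a merge or parser):
Object.prototype.className = '"><img src=x onerror=alert(1)>';
console.log(buildTag({})); // attacker-controlled markup now reaches the HTML sink

delete Object.prototype.className; // clean up
```

Once a pollution source exists, exploitation is a matter of finding such a gadget whose property read leads to a useful sink (HTML injection, command construction, option overrides), which is exactly what the repository's payload lists enumerate.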

Snyk Learn – “JavaScript Prototype Pollution” – An interactive learning module by Snyk that walks through a prototype pollution vulnerability step-by-step. It provides hands-on examples of exploiting a vulnerable snippet and guidance on how to fix the code. This can be a practical way for developers to solidify their understanding by actually seeing the attack and mitigation in action. (Free registration may be required to access Snyk Learn content.)


This content is authored with assistance from OpenAI's advanced reasoning models (classified as AI-assisted content). Material is reviewed, validated, and refined by our team, but some issues may be missed and best practices evolve rapidly. Please use your best judgment when reviewing this material. We welcome corrections and improvements.

Send corrections to [email protected].

We cite sources directly where possible. Some elements may be derived from content linked to the OWASP Foundation, so this work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. You are free to share and adapt this material for any purpose, even commercially, under the terms of the license. When doing so, please reference the OWASP Foundation where relevant. JustAppSec Limited is not associated with the OWASP Foundation in any way.