JustAppSec

Template Injection

Overview

Template injection is a critical class of software vulnerability that occurs when untrusted input is processed by a template engine as executable code. In this scenario, an attacker’s data is mistakenly interpreted as template directives or expressions, allowing the attacker to interfere with the template rendering process (www.paloaltonetworks.com). Template engines are widely used to generate dynamic web pages and emails by mixing static templates with runtime data. When developers embed user input directly into templates without proper safeguards, attackers can introduce malicious template syntax that the engine will execute. This violates a fundamental security principle – the separation of code and data – and leads to severe consequences. Injection flaws (including template injection) have long been among the most prevalent and dangerous web application vulnerabilities, historically ranking #1 in OWASP’s Top 10 and still remaining a top concern in recent years (www.synopsys.com).

Template injection vulnerabilities matter because they often enable a higher degree of compromise than more familiar issues like Cross-Site Scripting (XSS). In a typical XSS attack, malicious input executes in the victim’s browser, affecting that user’s session. In contrast, server-side template injection (SSTI) allows attackers to execute code on the server host itself, potentially leading to a full server compromise (portswigger.net). Even client-side template injection (which occurs in the user’s browser through frameworks like AngularJS or Vue) can lead to severe XSS and client takeover. In essence, template injection blurs the line between data and code, enabling attackers to run arbitrary instructions within the context of the application. Given the possibility of remote code execution (RCE) on the server and complete hijacking of application logic, template injection represents a high-impact threat that application security engineers and developers must treat with the utmost seriousness. This article provides a rigorous exploration of template injection – covering its threat landscape, attack vectors, impacts, defenses, coding patterns (good and bad), and best practices – to empower practitioners with the knowledge to prevent and detect this vulnerability.

Threat Landscape and Models

Template injection vulnerabilities arise under two broad scenarios: developer mistakes and intentional template functionality. In the first scenario, a developer inadvertently uses unvalidated user input in the context of a template. For example, an application may build an email or page template by concatenating strings and user-provided values, then pass the result to a template engine for rendering. If the user input contains template syntax (such as {{...}} in Jinja2 or <% ... %> in a JSP), the engine will treat it as code to execute. A simple oversight – such as assuming user input is purely textual – can thus introduce an eval-like behavior where attackers supply template expressions to be executed on the server. This threat model assumes a remote adversary who controls an HTTP request parameter, form field, or any input that gets embedded into a template. Attackers typically probe for this flaw by injecting template metacharacters (for example, {{7*7}}) into various inputs and observing the output or behavior. If the resulting page displays 49 (calculating 7*7) or leaks an error message referencing template syntax, it indicates the input was executed by a template engine, confirming the vulnerability.

The second scenario involves applications that intentionally expose templating capabilities to users as a feature. Content management systems, wiki platforms, blog software, and SaaS customization features often allow privileged users to design custom templates or content with placeholders. For instance, a marketing application might let users craft email templates using a limited template language for personalization (e.g., to insert a recipient’s name). In theory, such features are meant to use a restricted subset of the template language. In practice, if the sandbox or filtering is improperly implemented, attackers can exceed their intended privileges by injecting raw template code. The threat model here may involve an insider or a less-trusted user (perhaps a tenant in a multi-tenant system) who leverages the templating feature to break out of the sandbox and execute unauthorized actions. A real-world example is template functionality in forums or blogs where users can insert template tags – if the engine isn’t locked down, an attacker could inject system calls or read sensitive variables from the server environment.

Attack surfaces for template injection vary across technologies. Server-side template injection (SSTI) targets templating engines running on the server (common examples include Jinja2 in Python, FreeMarker or Thymeleaf in Java, Twig in PHP, Django templates, and Razor in .NET). Attackers deliver payloads via any input that ends up in a server-side template: URL query parameters, POST body fields, cookies, or even persisted data that gets incorporated into a template later. Client-side template injection (CSTI) occurs in the browser, often in single-page applications that use JavaScript frameworks (like AngularJS, Handlebars, or Vue) to render templates on the frontend (portswigger.net) (www.paloaltonetworks.com). An attacker might craft input containing AngularJS double-curly expressions ({{ }}) or other template tokens and cause the front-end to evaluate it. Notably, AngularJS implemented an expression sandbox to limit what template expressions can do (portswigger.net). However, researchers demonstrated so many sandbox escapes – payloads such as {{constructor.constructor('alert(1)')()}} achieved arbitrary JavaScript execution and full XSS – that the AngularJS team removed the sandbox entirely in version 1.6, acknowledging it had never been a real security boundary. Both SSTI and CSTI are part of the threat landscape, though SSTI is typically more devastating since it compromises the server’s trust boundary, while CSTI usually “only” impacts the user’s session or data.

From a threat modeling perspective, template injection transforms what should be inert data into active code. This undermines almost any security control, because the injected template code executes with the privileges and context of the application. In a server-side scenario, that means an attacker can potentially issue OS system calls, read or modify server-resident data, interact with back-end databases, or pivot to internal networks using the compromised server as a foothold. In a client-side scenario, the attacker’s injected code can perform any action that a script in the browser could: stealing cookies or tokens, modifying the UI for phishing, or invoking backend APIs on behalf of the victim user. Therefore, the threat model for template injection must assume a worst-case adversary who can escalate a simple text input field into a primitive for code execution. It is essential to map out any places in the application where templating is used and consider “What if an attacker controlled this input?” during design and threat modeling.

Common Attack Vectors

Unsafely Embedded Input: The most common vector for template injection is straightforward: user input is embedded directly into a template string or template file without sanitization. In practice, this often happens when developers construct templates on the fly. For example, consider a Python web application that greets users by name. A naive implementation might concatenate a name parameter into a template: Template("Hello " + name + "!").render(). If a user passes the name value as {{7*7}}, the template engine (Jinja2, in this case) will evaluate it, yielding “Hello 49!” instead of a harmless string. This vector can manifest in many languages: a Node.js app using EJS or Pug could take a query parameter and include it in a template, or a Java app using FreeMarker might read a template snippet from user input. In all cases, the attacker supplies template syntax in an input that the application developer assumed would be simple data. The moment the application feeds that data into the template processor, the attacker’s payload executes. This vector is especially dangerous because it may not be obvious during normal operation – the application might work fine with typical user values, and only reveal its vulnerable behavior when an attacker’s carefully crafted input triggers the template logic.

Template Expression Injection: Some templating systems allow mixing of code and data via expressions, even if the template structure is mostly fixed. Attackers seek out places where user data is inserted into templates in a way that they can break out of the intended data context. For instance, a Handlebars template might be written as {{title}}: {{content}}. If an attacker can manipulate the title or content such that they inject a Handlebars expression or helper, they could interfere with rendering. In the Handlebars example, an attacker controlling title might set it to "}} {{#with this}}alert(1){{/with}}", attempting to terminate the current context and start a new block (abusing the Handlebars syntax) – essentially injecting a new template directive. Similarly, in a Java Spring application using Thymeleaf, user input might end up in an attribute like th:text="${userInput}". If Thymeleaf is misconfigured such that it interprets ${userInput} from an untrusted source as an expression, an attacker could try ${T(java.lang.Runtime).getRuntime().exec('calc')} as input, leveraging Thymeleaf’s ability to call methods. In general, any situation where user-supplied text gets interpreted by an expression language or template evaluator is ripe for this kind of injection. Attackers catalog the syntax for various template engines (Jinja2, Twig, Velocity, etc.) and attempt to inject engine-specific payloads. For instance, ${7*7} in a FreeMarker context, #{7*7} in a Pug context, <%= 7*7 %> in a JSP/ERB context, and so on – observing whether math operations or other unexpected outputs appear. If they do, it confirms that the input was executed inside the template environment.
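These engine-specific probes can be organized into a simple detection aid. A minimal sketch follows; the payload table is illustrative, not exhaustive, and real testing tools cover far more engine variants:

```python
# Common arithmetic probes by engine family (illustrative subset).
PROBES = {
    "jinja2_twig": "{{7*7}}",   # Python Jinja2, PHP Twig
    "freemarker": "${7*7}",     # Java FreeMarker and many EL contexts
    "erb_jsp": "<%= 7*7 %>",    # Ruby ERB, JSP scriptlets
    "pug": "#{7*7}",            # Pug/Jade interpolation
}

def looks_evaluated(probe: str, rendered: str) -> bool:
    # If "49" appears where the raw probe does not, the input was executed.
    return "49" in rendered and probe not in rendered
```

A tester submits each probe in turn and compares the reflected output: an unexecuted probe is echoed back verbatim, while an executed one collapses to 49.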

Sandbox Escape in Client-Side Templates: In client-side template injection, the attacker typically needs to overcome built-in sandboxing. Modern front-end frameworks often restrict template expressions to a subset of JavaScript to prevent abuse. For example, AngularJS templates by default cannot call sensitive browser APIs or arbitrary JavaScript functions directly; they operate within a limited sandbox. Attackers therefore hunt for creative ways to escape these sandboxes. A notorious example in AngularJS was using the constructor of a built-in object to execute an arbitrary string as code ({{constructor.constructor('alert(1)')()}}). Similarly, in older versions of Angular, one could chain properties to eventually reach window or document objects and invoke methods. The attack vector here often involves injecting a seemingly benign value that the application then binds to the front-end. If the application takes data from the server (possibly originally provided by an attacker via a stored XSS-like vector) and inserts it into an Angular template without proper encoding, the attacker’s payload will run when that template renders in users’ browsers. The impact is an XSS (stealing user data or performing unauthorized actions as the user), but delivered through the template mechanism. Attackers exploiting CSTI often chain it with other techniques; for instance, they might bypass a Content Security Policy (CSP) by injecting Angular template payloads that ultimately load malicious scripts, since CSP might not anticipate the templating layer being abused in this way (portswigger.net).

Exposed Template Interfaces: Some applications provide direct interfaces to render user-provided templates – for example, a feature where administrators can upload or edit email templates in the web UI. If such interfaces do not strictly control what template code can be executed, attackers (or lower-privileged users finding a way to access the feature) can feed in harmful templates. A classic case is the use of template engines to allow customization: A cloud service might let users design a webpage using templates. The expectation is that users will create benign templates with allowed placeholders (like {{ user.name }}). However, an attacker might include a payload that the engine’s sandbox doesn’t catch – for instance, using a little-known gadget or function to break out of restrictions. In Java’s Velocity template engine, attackers have exploited the ability to instantiate arbitrary Java objects via the template to achieve code execution. In Python’s Jinja2, if developers enabled certain sandbox bypasses, attackers have used special sequences (like {{ self.__init__.__globals__['os'].popen('id').read() }}) to execute OS commands. Thus, the vector here is abusing legitimate template functionality that was exposed for flexibility. The challenge for attackers is identifying the engine and version in use (sometimes error messages or subtle clues in output can reveal this), then crafting a payload that the specific engine will execute. With a determined approach and knowledge of template engines, attackers can often pivot from “you may supply a template for formatting output” to “you can execute code on our system” – especially if the developers assumed the engine’s sandbox was unbreakable.

In summary, common vectors reduce to a simple principle: anywhere user-controlled input intersects with a template interpreter, there’s potential for injection. Attackers supply inputs containing template special characters and code snippets, leveraging any lack of input validation or output encoding. They take advantage of overly-powerful template languages – many template engines are essentially mini programming languages (with loops, conditionals, function calls, etc.), so injecting into them is akin to injecting into a scripting environment. Recognizing these vectors in your own application (during design, coding, and testing) is key to preemptively closing the holes or detecting exploit attempts.

Impact and Risk Assessment

The impact of template injection can be severe, often equating to a complete system compromise. On the server side, once an attacker can inject template expressions, they may achieve remote code execution (RCE) within the server’s context (portswigger.net). This means the attacker can execute arbitrary commands or code on the server hosting the application. The severity of RCE cannot be overstated: an attacker could steal sensitive data (database contents, credentials, file system data), modify or destroy data, install backdoors, or use the compromised server as a pivot to attack other internal systems. In many cases, a single SSTI vulnerability is enough for an attacker to escalate to full administrative control over the application and underlying server. For example, with a Python Jinja2 injection, an attacker might exploit Python’s dynamic features to open a reverse shell or read AWS API keys from environment variables. With a Java FreeMarker injection, the attacker could instantiate JVM classes to exfiltrate files or invoke system commands. Even if direct code execution is initially sandboxed by the template engine, many sandboxes have known escape techniques (www.acunetix.com) – thus, what begins as a limited template injection often ends in a full break of the sandbox and unrestricted code execution. Organizations must treat any confirmed SSTI as a critical vulnerability requiring immediate remediation or incident response.

It’s important to note that not all template injections immediately yield code execution; sometimes the impact is data exposure or integrity violation. For instance, an attacker might leverage the template engine’s capabilities to read sensitive data server-side (even if they can’t execute shell commands). Template engines often have access to a lot of context: database objects, configuration settings, or file includes. An attacker might inject a payload to dump all server environment variables or read application secrets. In one notable case, an SSTI was used to retrieve AWS secret keys from a Flask application’s config, which were then used to compromise cloud resources. Thus, even when RCE is not achieved, SSTI can lead to loss of confidentiality or integrity of data. Similarly, on the client side, a CSTI might be used “only” to deface a page or steal a user’s data via XSS, but if that user is an administrator, the attacker could leverage it to perform administrative actions via the compromised browser session. The risk should therefore be evaluated in the context of what level of access an injected template code provides the attacker – often, it’s a much higher level than the attacker’s original privileges.

Comparing server-side and client-side impacts: Server-side template injection is typically higher risk because it breaks the security perimeter of the server. The attacker’s code runs with the server’s privileges and can affect all users and data on the system. This often leads to a worst-case impact (complete system compromise). In fact, SSTI vulnerabilities are frequently assigned maximum CVSS scores due to the potential for RCE. As PortSwigger’s research notes, SSTI is easy to overlook but can turn a minor-looking issue into a pivot for deep lateral movement in an infrastructure (portswigger.net). Client-side template injection, while usually confined to the user’s browser, can still be very dangerous in context. A CSTI essentially equates to a DOM-based XSS: the attacker can perform any action that the user could from their browser (token theft, actions as the user, keylogging, etc.). If the affected user has elevated privileges (say, an admin user using an Angular-based admin panel), the attacker might achieve administrator-level actions through the injected script. However, the effects are generally limited to that user’s session or data, and do not directly compromise the server or other users (unless the attacker uses the foothold to escalate further, such as by planting a persistent script or leveraging the trust to pivot to a server weakness). The risk of CSTI is therefore similar to other client-side injection issues – significant for user security and possibly compliance (if personal data can be stolen), but typically not as catastrophic as SSTI. Nonetheless, CSTI can erode user trust and cause substantial damage (consider a banking app where CSTI enables theft of session cookies or manipulation of transactions in real-time).

When assessing risk, one should consider exploitability and prevalence as well. Template injection vulnerabilities are often highly exploitable with publicly known techniques and tools. The knowledge needed to exploit SSTI in common engines is readily available (there are open-source tools and extensive payload repositories), meaning a low-skilled attacker could potentially leverage an SSTI with minimal effort. Furthermore, modern web applications frequently use templating engines, so the prevalence of this issue maps to how often developers introduce the flaw. Unfortunately, it’s not a rare bug: many incidents and penetration tests have demonstrated template injection bugs in production systems (from minor websites to major platforms). Because of the high impact and moderate-to-high likelihood in poorly reviewed code, most standards (OWASP, CWE, etc.) classify template injection as a critical issue to mitigate. From a risk management standpoint, eliminating template injection is a high priority for any application that uses templates. Even in cases where exploitation seems complex (e.g., requiring breaking a sandbox), one must assume attackers will eventually figure it out – as history has shown with repeated sandbox escapes in template engines (portswigger.net). Therefore, both the impact and probability justify strong countermeasures and thorough remediation efforts.

Defensive Controls and Mitigations

Preventing template injection requires a disciplined approach to input handling and template usage. The fundamental strategy is never allow raw user input to be interpreted as template code. In practice, this means avoiding any dynamic construction of templates with user data. All user-supplied content should be treated as data to be rendered, not as part of the template’s logic or structure. The simplest and most effective control is to use static templates with placeholders and pass user input as variables to the template engine’s render function. By separating the template definition (which is constant and written by the developer) from the data (which comes from the user and is safely injected at runtime), you eliminate the chance for the user to inject new directives. For example, instead of Template("Hello " + name).render(), one should define Template("Hello {{ name }}").render(name=user_name). In this safe pattern, any special characters in user_name are handled as plain text. Many template engines by default HTML-escape variables when rendering, which further protects against XSS. Developers should leverage these auto-escaping features and not disable them without a very good reason.
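The contrast between the two patterns can be shown directly with Jinja2 (assuming the jinja2 package is available; the unsafe line is deliberately contrived):

```python
from jinja2 import Template

malicious = "{{7*7}}"

# Bad: user input becomes part of the template source and is executed.
unsafe = Template("Hello " + malicious + "!").render()
print(unsafe)  # Hello 49!

# Good: the template is a fixed string; user input is passed as data.
safe = Template("Hello {{ name }}!").render(name=malicious)
print(safe)    # Hello {{7*7}}!
```

In the safe pattern, the engine substitutes the value as inert text; it never re-parses variable contents as template code.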

Another critical control is input validation and sanitization on any content that might end up in a template. If users are allowed to input rich text or pseudo-code (for instance, in a CMS that accepts templates), the application must rigorously sanitize that input. This can involve stripping or encoding template metacharacters. As a basic safeguard, characters like {, }, <%, ${, etc., should be considered dangerous in contexts where a template engine might interpret them. However, relying on blacklisting specific sequences can be error-prone – attackers may find encodings or alternate syntax to bypass naive filters. Therefore, a more robust approach is whitelisting acceptable content. For instance, if you expect a user to only enter plain text (letters, numbers, basic punctuation), enforce an input pattern that excludes any characters with special meaning in the template engine. Use established libraries or framework features for sanitization where possible. For HTML content, OWASP’s HTML Sanitizer or similar libraries can strip out <script> tags, but for template injection, you might need a custom sanitizer to remove template directives. It’s worth noting that template injection often crosses into the territory of code injection, so generic input-hardening measures (like blocking common dangerous patterns) have value, but they should be tailored to the template syntax in question.
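An allow-list check for a plain-text field might look like the following sketch; the permitted character class and length limit are assumptions to adapt per field:

```python
import re

# Allow letters, digits, spaces, and basic punctuation;
# anything with template metacharacters ({, }, $, %, <) is rejected.
PLAIN_TEXT = re.compile(r"^[A-Za-z0-9 .,'\-!?]*$")

def validate_display_name(value: str, max_len: int = 64) -> str:
    if len(value) > max_len or not PLAIN_TEXT.match(value):
        raise ValueError("invalid characters in display name")
    return value

validate_display_name("Alice O'Neill")   # accepted
# validate_display_name("{{7*7}}")       # raises ValueError
```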

Many template engines offer configuration options to improve security – these should be used diligently. Sandboxing and restricted modes can significantly reduce the risk. For example, Jinja2 has a sandboxed environment where only a subset of Python is accessible to templates, and dangerous attributes or methods can be blocked. Apache FreeMarker, in recent versions, disables the ?api and ?new built-ins by default (which could otherwise be abused to call arbitrary Java code) and allows enabling a “secure mode” where certain operations are forbidden (www.cnblogs.com). When using these engines, always consult the “security considerations” section of their documentation (portswigger.net) and enable the recommended settings. If the engine supports an allow-list of safe functions or variables, configure it so that templates can only perform intended actions. Disable eval-like features: Many engines have some evaluate/execute functionality (for example, a {{ eval(...) }} or the ability to include dynamic code). Unless absolutely necessary, these features should be turned off. The OWASP Application Security Verification Standard (ASVS 4.0) explicitly advises against using eval or dynamic execution; if such features must be used, the input must be sanitized or sandboxed stringently (github.com). In the context of templates, this means you should avoid runtime compilation of templates from user input. Where dynamic templates are needed (e.g., user-customizable templates), run them in a contained environment. Some applications go as far as executing user-supplied templates in a separate process or container with limited permissions, so even if code execution occurs, it’s isolated. This level of containment may be appropriate for multi-tenant systems where you can’t fully trust even high-privileged users’ templates.
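Jinja2's sandboxed environment illustrates this kind of restricted mode (assuming the jinja2 package; note that sandboxes have had historical escapes, so this should be a layer of defense, not the only control):

```python
from jinja2.sandbox import SandboxedEnvironment
from jinja2.exceptions import SecurityError

env = SandboxedEnvironment()

# Ordinary expressions still render normally inside the sandbox.
ok = env.from_string("Hello {{ name }}, {{ 7 * 7 }}").render(name="Alice")

# A classic escape attempt via dunder attribute chains is rejected.
blocked = False
try:
    env.from_string("{{ ''.__class__.__mro__[1].__subclasses__() }}").render()
except SecurityError:
    blocked = True
```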

Another key mitigation is choosing the right template engine or framework for the job. If users require only simple text substitutions (for instance, customizing a welcome message with their name), prefer a logic-less template language like Mustache or a simple replacement scheme. These simpler templates do not support arbitrary code execution – they lack the complex expression capabilities that make injection possible. For example, Mustache templates do not have the ability to call functions or execute code; they can only insert provided data and loop over data structures. This dramatically reduces the injection risk (though it doesn’t eliminate XSS risk if output encoding isn’t handled). In contrast, full-featured engines like Jinja2, Thymeleaf, or Razor are powerful but can be dangerous if misused. Use the simplest template mechanism that meets your needs. If you don’t need users to supply logic, don’t give them a template language that has logic. Similarly, on the client side, if your front-end doesn’t need to interpret user-provided template syntax, consider disabling template interpolation in user-provided strings. For instance, in Angular, you can use ng-non-bindable or Angular’s strict contextual escaping to ensure certain user inputs are not processed as Angular expressions.
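In Python, the standard library's string.Template is an example of such a minimal substitution scheme: it supports only $name-style placeholders and has no expression language to abuse. Substitution is single-pass, so template syntax inside a value is never re-evaluated:

```python
from string import Template

tmpl = Template("Hello $name, your plan is $plan.")

# Values are inserted verbatim, even if they contain template-like syntax.
out = tmpl.safe_substitute(name="{{7*7}}", plan="${7*7}")
print(out)  # Hello {{7*7}}, your plan is ${7*7}.
```

This eliminates code-execution risk, though output encoding is still needed if the result is rendered as HTML.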

Output encoding is the last line of defense. Even if an attacker manages to inject something, proper output encoding can neutralize many payloads. For example, if a user-supplied value is placed into an HTML template and you ensure it’s HTML-encoded, any injected <script> tag or Angular double-curly brace will appear as harmless text to the end user rather than executing. Most modern template engines have auto-escaping on by default (e.g., Django’s template engine or Ruby on Rails ERB will escape variables, and .NET’s Razor encodes output by default). Do not disable these features. If your engine doesn’t auto-encode, explicitly apply encoding routines to variables (e.g., use library functions to HTML-encode special characters in strings). Keep in mind that encoding is context-specific – HTML encoding for HTML context, JavaScript string encoding for JavaScript context, etc. Template injection often leads to XSS on the client side, so proper output encoding can turn a potentially serious CSTI into just a weird-looking string on the page with no harmful effect. That said, encoding won’t stop server-side execution issues; it’s mainly a defense for the client-side aspect.
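In Python, HTML-context encoding is available in the standard library via html.escape (markupsafe.escape, which Jinja2's autoescaping uses, behaves similarly):

```python
import html

user_value = "<script>alert(1)</script>"

# Angle brackets become entities, so the payload displays as text
# in an HTML context instead of executing.
encoded = html.escape(user_value)
print(encoded)  # &lt;script&gt;alert(1)&lt;/script&gt;
```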

Defense in depth is crucial. Besides coding practices, consider using security safeguards like Web Application Firewalls (WAFs) or input filters that can detect template injection attempts. Some WAFs have signatures for common template payloads (for example, they might block inputs containing {{ followed by suspicious characters, or #{ which is not common in normal text). While determined attackers can sometimes evade naive WAF rules (via obfuscation or alternate payloads), these protections can reduce noise and catch blatant attacks. Runtime protective measures, such as Runtime Application Self-Protection (RASP), can potentially detect if a template engine is being misused (for instance, detecting the Jinja2 render method being invoked with unexpected content). Finally, keeping the template engine and platform up to date is important. Security patches in these engines often address known escape vectors or provide new config options to harden the execution. Using a modern version with all security features enabled gives you a better security baseline.
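A deliberately naive illustration of such a signature check follows; real WAF rules are far more sophisticated, and determined attackers can often evade pattern matching, so this is a supplementary control, never the primary defense:

```python
import re

# Flags common template metacharacter sequences in incoming values:
# {{, {%, ${, <%, #{
TEMPLATE_METACHARS = re.compile(r"\{\{|\{%|\$\{|<%|#\{")

def is_suspicious(value: str) -> bool:
    return bool(TEMPLATE_METACHARS.search(value))
```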

Secure-by-Design Guidelines

Achieving security against template injection is easiest when it’s baked into the design from the beginning. A secure design starts with the principle: do not mix code and data. In the context of templates, this means designing the system such that templates are treated as code (authored or vetted by the development team) and user inputs are always treated as data. For example, if an application needs to allow users to customize email content, a secure design might offer a set of predefined placeholder tokens (like %FIRST_NAME%, %LAST_NAME%) that the user can place in their template, rather than letting them write raw template logic. The system would then post-process the saved template, replacing those tokens with actual user data when generating emails. This way, the user’s influence is limited to content arrangement, not execution of logic. By contrast, an insecure design would directly let users write in the template engine’s native syntax, opening the door to injection. Thus, designing a domain-specific markup or restricted template language for user content is a common secure design strategy. Many large applications do this: for instance, forum software might allow a limited set of BBCode or Markdown for styling user posts (which can be safely rendered), instead of raw HTML or script.
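The token-replacement approach described above can be sketched as follows; the token names and allow-list are illustrative:

```python
import re

# Only these predefined tokens may appear in user-authored templates.
ALLOWED_TOKENS = {"FIRST_NAME", "LAST_NAME"}
TOKEN = re.compile(r"%([A-Z_]+)%")

def render_user_template(template: str, data: dict) -> str:
    def replace(match):
        token = match.group(1)
        if token not in ALLOWED_TOKENS:
            return match.group(0)  # unknown tokens are left inert
        return str(data.get(token, ""))
    return TOKEN.sub(replace, template)

out = render_user_template(
    "Hi %FIRST_NAME% %LAST_NAME%!",
    {"FIRST_NAME": "Ada", "LAST_NAME": "Lovelace"},
)
print(out)  # Hi Ada Lovelace!
```

Because the user's template never reaches a real template engine, there is no expression evaluator to inject into.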

Threat modeling and security architecture should explicitly consider template engines. Ask questions early like: “What happens if an attacker controlled this template input?” or “Do we really need user-supplied logic here, or just user-supplied data?” By identifying components that involve templating, architects can decide to isolate those components. In multi-tenant applications where users can create templates (like a theming system or custom report generator), consider designing a dedicated sandbox service. For example, run a template rendering service in a locked-down Docker container or separate microservice that has limited permissions. If a user’s template triggers an exploit, the damage may be confined to that container (which can be quickly reset) and won’t touch the main application or database. This is an architectural mitigation akin to how browser makers sandbox JavaScript in webpages – assume it can be malicious and contain it. Secure design might also mean implementing safe defaults: configure the template engine in secure mode by default. Only if a feature absolutely requires a non-default insecure option should you consider toggling it, and even then, it should go through a security review. An example is enabling template extension or file includes – if your design doesn’t need the ability to {% include file %} from user input, disable that in the engine configuration.
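The process-isolation idea can be sketched with the standard library alone. A real deployment would add a container or seccomp profile, dropped privileges, and resource limits; the child process here renders with a logic-less string.Template purely for illustration:

```python
import subprocess
import sys

# Child program: renders one template argument and prints the result.
CHILD = (
    "import sys, string; "
    "print(string.Template(sys.argv[1]).safe_substitute(name='Alice'))"
)

def render_isolated(user_template: str, timeout_s: float = 2.0) -> str:
    # A hung or crashing template only affects the throwaway child process,
    # and the timeout bounds denial-of-service attempts.
    proc = subprocess.run(
        [sys.executable, "-c", CHILD, user_template],
        capture_output=True, text=True, timeout=timeout_s,
    )
    return proc.stdout.strip()

print(render_isolated("Hello $name"))  # Hello Alice
```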

Another design guideline is aligning with established security standards. The OWASP ASVS (Application Security Verification Standard) includes specific requirements for injection defense and template safety. For instance, ASVS 4.0 requirement 5.2.5 states that applications must protect against template injection by sanitizing or sandboxing any user input used in templates (github.com). Incorporating such standards into the design phase – essentially treating them as design constraints – can ensure that certain risky patterns are avoided from the get-go. For example, as a design rule, a team might decide: “We will not use any template rendering of user-provided strings at runtime. All templates will be static files.” This could be documented in the architecture or coding guidelines, and any deviation (like adding a new feature to allow user-defined templates) would require an exception process with security sign-off. Secure-by-design also means choosing frameworks that make safe patterns easy. If a particular web framework encourages mixing user data in template code (or doesn’t clearly separate template context), that framework may be riskier. Many modern frameworks, however, have moved towards safer paradigms – for instance, React’s approach of a virtual DOM and auto-escaped JSX expressions inherently avoids a whole class of injection (you cannot arbitrarily run server-side code via a React component’s props). Aligning design with such frameworks can reduce the chances of template injection by construction.

Lastly, consider usability and security together in the design. Often, template injection vulnerabilities come from developers or power-users trying to achieve a needed functionality (dynamic content) in the most straightforward way. If the secure way is too cumbersome, they might take shortcuts. A good design will provide safe mechanisms that are also convenient. For instance, provide a rich text editor for email templates that only allows certain safe dynamic fields, as opposed to expecting users to write raw template syntax. This not only improves user experience but also guides them towards safe usage. If only non-technical staff use the feature, they won’t even need or miss the raw template capabilities, and technical attackers will find no direct entry to supply malicious code. In summary, secure design for templates is about offering the needed flexibility in a controlled manner, foreseeing abuse vectors and eliminating them at the design level, and making the secure way the easy way.

Code Examples

To solidify our understanding, we present code examples in multiple languages illustrating insecure vs. secure patterns for template usage. Each example demonstrates a bad practice that could lead to template injection, followed by a mitigated version using safer techniques. These examples assume typical frameworks and libraries for each language, and they include brief annotations explaining the security implications.

Python

Insecure Implementation (Bad)

Imagine a Flask web application that uses Jinja2 for templating. In this bad example, the developer takes user input and builds a template string dynamically:

from flask import Flask, request
from jinja2 import Template

app = Flask(__name__)

@app.route("/greet")
def greet():
    # Insecure: directly embedding untrusted input into a template string
    name = request.args.get('name', '')
    template_str = "Hello " + name + "!"             # User input concatenated
    output = Template(template_str).render()         # Template is rendered
    return output

In the above Python code, a user-provided name is concatenated into the template_str. The Template constructor compiles the string as a Jinja2 template and then renders it. If an attacker supplies name="{{7*7}}", the template engine will evaluate the expression 7*7 instead of treating it as text. The response would be Hello 49! – clearly indicating that the input was executed as code, not printed literally. Worse, an attacker could supply a more malicious payload (Jinja2 payloads can call OS commands or read files via certain object references) and achieve remote code execution on the server. The flaw is that untrusted data is introduced into the template without any sanitization or delimitation to mark it as data.

Secure Implementation (Good)

A secure approach in Python is to keep the template static and pass user input as data:

from flask import Flask, request
from jinja2 import Template
from markupsafe import escape

app = Flask(__name__)

@app.route("/greet")
def greet():
    # Secure: use a static template and inject user input as a variable
    name = request.args.get('name', '')
    safe_name = escape(name)                         # Escapes HTML special characters in name
    template = Template("Hello {{ user_name }}!")    # Template placeholder for a name
    output = template.render(user_name=safe_name)    # Render with data context
    return output

In this fixed version, the template string is a constant "Hello {{ user_name }}!". The user’s name is provided to the template via the render context, not by modifying the template string itself. This means any input will be inserted only where the {{ user_name }} placeholder is, and it will be treated as data. We explicitly call escape(name) (provided by MarkupSafe, the escaping library that Jinja2 and Flask both build on) to HTML-encode any special characters in the name; this ensures that if the output is embedded in an HTML page, characters like < or " are safely escaped to prevent XSS. With this approach, an attacker’s input of {{7*7}} would be rendered in the page literally, as Hello {{7*7}}!, rather than being evaluated, because the template engine no longer sees {{7*7}} as part of its syntax – it sees only the static {{ user_name }} placeholder. This pattern – static templates + data binding with output encoding – is the recommended way to use Jinja2 and other Python templating systems securely.

JavaScript (Node.js)

Insecure Implementation (Bad)

Consider a Node.js Express application using the Pug template engine (formerly Jade). In this bad example, the server accepts a template snippet from a query parameter and renders it:

const express = require('express');
const pug = require('pug');
const app = express();

app.get('/preview', (req, res) => {
    // Insecure: rendering user-supplied template code
    const userTemplate = req.query.template;         // User provides template code as a query param
    const data = { userName: req.query.name };       // Another param for data
    try {
        const html = pug.render(userTemplate, data); // Compiles and renders the template string
        res.send(html);
    } catch (e) {
        res.status(500).send("Template rendering error");
    }
});

Here, the attacker controls the template source outright via the template query parameter. Pug templates allow embedded JavaScript, and Pug (like many templating engines in Node) executes that JavaScript in the context of the server’s runtime. Classic Pug SSTI payloads reach Node’s APIs through interpolation – for example, something like #{global.process.mainModule.require('child_process').execSync('id')} walks from the global object to require and attempts to run a system command (exact payloads vary by Pug and Node version). The code above naively trusts both query parameters – template is used as template code, and name is just data (which could itself be abused if the template uses it unsafely). The try/catch will catch rendering errors, but it won’t stop an attacker’s code from running; it only handles the case where the template fails to render properly. This is a classic template injection: the application effectively does an eval on attacker-provided code via the template engine.

Secure Implementation (Good)

A more secure pattern is to use only developer-defined templates and bind untrusted data into them. For example, assume we have a Pug template file welcome.pug with the following content, which safely inserts the userName variable into the HTML:

p Welcome, #[strong #{userName}]!

const compiledTemplate = pug.compileFile('./views/welcome.pug');

app.get('/preview', (req, res) => {
    // Secure: use a pre-compiled template and only inject sanitized data
    let userName = req.query.name || '';
    userName = userName.replace(/[^\w\s]/g, '');   // Simple sanitization: keep only word characters and whitespace
    const html = compiledTemplate({ userName: userName });
    res.send(html);
});

In this secure version, we ignore any template parameter from the user – the template is fixed as welcome.pug (which the developer controls). We compile it once (which could also be done at app startup for efficiency). When handling a request, we take the name parameter and subject it to a sanitization step. Here, for illustration, we remove any character that is not a word character or whitespace, ensuring the name is a plain string (no curly braces or dangerous symbols). Then we render the compiled template with the sanitized userName. In Pug, using the #{} syntax in the template will by default HTML-escape the inserted text, so even if the user’s name contained something like <script>, it would be rendered as harmless text. The key security improvements are: (1) the template code is no longer influenced by user input, and (2) the user input is sanitized/validated before being inserted. Now an attacker sending ?name=#{process.exit(0)} or any similar payload will just see those characters on the resulting page, not code execution. The application no longer evaluates raw input as code, closing the template injection vector.

Java

Insecure Implementation (Bad)

Java applications often use templating engines for web views or email generation. In this example, suppose an application uses the FreeMarker engine for email templates. The bad implementation reads a template from a user-supplied string (perhaps stored in a database or received via a request) and processes it:

import freemarker.template.*;
import java.io.*;
import java.util.Map;

public String generateEmail(String userTemplate, Map<String, Object> dataModel) throws IOException, TemplateException {
    // Insecure: using user-provided template content
    Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
    Template template = new Template("userTemplate", new StringReader(userTemplate), cfg);
    StringWriter out = new StringWriter();
    template.process(dataModel, out);    // Processes the template with the data
    return out.toString();
}

If userTemplate is derived from untrusted input, this code is dangerous. FreeMarker templates can contain powerful directives (loops, conditions) and even call Java methods or constructors (especially in older versions or if certain flags are enabled). For instance, an attacker could supply a template string like:

Hello ${user}! <#-- normal usage -->
<#assign ex = "freemarker.template.utility.Execute"?new()>${ ex("calc.exe") }

If the application passes this string into generateEmail, the FreeMarker engine will interpret <#assign ex = "...Execute"?new()> as an instruction to instantiate an Execute utility (a known FreeMarker class) and then call it with "calc.exe". This could spawn a process on the server (note: newer FreeMarker versions disable this particular feature by default, but many other tricks exist). Even without Execute, FreeMarker’s ${...} expressions could allow reading of internal data or calling getter methods on objects in the dataModel. The root problem is clear: userTemplate is compiled and executed as a template without any filtering. The Configuration is default (no special security settings). This function basically gives an attacker a sandboxed Java execution environment – and history shows that determined attackers often find ways out of such sandboxes.

Secure Implementation (Good)

To make this secure, we should remove the ability for the user to supply arbitrary template code. Instead, the user can perhaps choose from a set of pre-defined template names or provide simple content that gets embedded safely. Here’s a safer approach using FreeMarker:

import freemarker.template.*;
import org.apache.commons.text.StringEscapeUtils;
import java.io.*;
import java.util.*;

public String generateEmailSafe(String templateName, String userContent) throws IOException, TemplateException {
    // Secure: load a predefined template and inject sanitized content
    Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
    cfg.setClassForTemplateLoading(this.getClass(), "/templates"); 
    cfg.setAPIBuiltinEnabled(false);                         // Disable dangerous built-ins
    cfg.setLogTemplateExceptions(false);
    Template template = cfg.getTemplate(templateName + ".ftl");
    // Prepare data model with only safe, escaped data
    Map<String,Object> dataModel = new HashMap<>();
    dataModel.put("content", StringEscapeUtils.escapeHtml4(userContent));
    StringWriter out = new StringWriter();
    template.process(dataModel, out);
    return out.toString();
}

In this secure version, templateName is not user-controlled beyond maybe choosing a file (and we append a safe extension and load it from a trusted directory, preventing arbitrary file includes). The template files (e.g., welcome.ftl, password_reset.ftl) are written and vetted by the development team. We also configure FreeMarker with some safety options: setAPIBuiltinEnabled(false) turns off the ?api directive that could expose internal Java classes, and we avoid enabling any feature that isn’t needed. We then take userContent (which might be something like a message body or a comment that the user can provide, depending on context) and apply StringEscapeUtils.escapeHtml4 to it. This ensures that any HTML special characters in the content are escaped (to prevent breaking HTML context or injecting scripts in the output). The data model passed to the template only contains this safely escaped content. The template file templateName.ftl presumably has a placeholder like ${content} in it, and since we’ve escaped the content, FreeMarker will by default just insert it (there’s no risk of FreeMarker interpreting it as further template code because it’s just a string value). By loading a template from a trusted location and injecting only sanitized data, we’ve mitigated the injection. Even if an attacker tries to manipulate templateName through user input, we should validate that it matches an allowed pattern (for example, only certain values or filenames). The key aspect is that at no point do we execute an arbitrary template provided by an attacker. This aligns with secure design: only trusted code (our template files) is executed, and untrusted input is confined to data that gets encoded appropriately.

.NET / C#

Insecure Implementation (Bad)

In ASP.NET and C#, Razor is a common templating language (used in MVC views and also in Razor Pages, etc.). There are libraries like RazorEngine that allow compiling and running Razor templates from strings at runtime. An insecure example is as follows:

using RazorEngine;
using RazorEngine.Templating;

public string RenderTemplate(string userTemplate, object model) {
    // Insecure: compiling and running a user-supplied Razor template
    string templateKey = "userTemplate";
    // No sanitization here – userTemplate may contain C# code
    string result = Engine.Razor.RunCompile(userTemplate, templateKey, model.GetType(), model);
    return result;
}

If userTemplate is not trusted (say it comes from a user’s input or a database entry that a user can modify), this is extremely dangerous. Razor templates can include inline C# code within @{ ... } or as part of the markup. For example, an attacker could provide:

<p>Hello @Model.Name!</p>
@{ System.Diagnostics.Process.Start("calc.exe"); }

When RunCompile executes this, the output would include the “Hello [Name]” paragraph, and it would also execute the hidden Process.Start call on the server, launching Calculator (or any process – in a real attack, it could be a reverse shell or some script). Essentially, RazorEngine in this context is acting like an eval of C# code with very few restrictions (the code runs with the same privileges as the application). This is a template injection in the .NET world – treating user input as a Razor template to compile is equivalent to handing the keys to your server to the attacker. Beyond obvious code execution, an attacker could also use such a template to read environment variables, access the database through the model object (if one is supplied), or call static methods. There’s no built-in sandbox in this scenario; the code runs with the full trust of the application’s process.

Secure Implementation (Good)

The secure approach in .NET is again to avoid runtime compilation of untrusted templates. Instead, pre-define the templates and use the templating engine as intended, or use simpler string formatting if templating isn’t needed. For demonstration, let’s assume we want a greeting email template. We’ll define the template as a constant and only inject data:

using System.Net;
using RazorEngine;
using RazorEngine.Templating;

public string RenderGreeting(string userName) {
    // Secure: use a fixed template and encode the user input
    string safeName = WebUtility.HtmlEncode(userName);
    string template = "<p>Hello @Model.Name, welcome!</p>";
    var model = new { Name = safeName };
    string result = Engine.Razor.RunCompile(template, "greetingTemplate", model.GetType(), model);
    return result;
}

In this secure version, the template string is a constant defined by the developer ("<p>Hello @Model.Name, welcome!</p>"). We compile this template with a key "greetingTemplate" and provide a model object that contains the data. The user-provided data is userName, which we immediately HTML-encode using WebUtility.HtmlEncode. (Note: Razor also HTML-encodes @Model.Name output by default, so the pre-encoding here is defense in depth – the cost is that special characters may appear double-escaped in the rendered text, a cosmetic issue rather than a security one.) Since the template is fixed, there’s no place for an attacker to inject new code; the worst they could do is provide a name that includes some script, but because it is encoded, something like "<script>alert(1)</script>" in the name ends up as inert text (&lt;script&gt;alert(1)&lt;/script&gt;) in the HTML – not executed. This example might appear trivial, but it demonstrates the core idea: the template is static and only data changes. In a real app, instead of RunCompile on a string, you would typically have a Razor view file precompiled, and you just pass the model to it. The use of RazorEngine here is for illustration; one should be cautious even with RazorEngine – if you ever find yourself needing to compile a template from an untrusted source at runtime, that’s a red flag. If dynamic templating is truly needed in .NET, prefer a logic-less templating library (such as a Mustache implementation for .NET) that cannot execute arbitrary C# code. .NET also has the concept of Code Access Security and AppDomains, which historically could sandbox code, but those are largely obsolete in .NET Core. Therefore, the emphasis remains on not letting untrusted code run in the first place. By sticking to fixed templates and encoding user input, we uphold that principle.

Pseudocode Illustration

Insecure Pseudocode

To generalize the concept, here’s a pseudocode example of what not to do:

function renderPage(userInput):
    template_code = "<html><p> Hello " + userInput + "</p></html>"
    return TemplateEngine.render(template_code)

In this insecure pseudocode, a function renderPage constructs an HTML template by concatenating userInput into a template string. Then it calls TemplateEngine.render on the assembled template. The pseudocode stands in for any templating system – the key point is that userInput can contain something like {{malicious code}} or other template directives. Because we’ve directly inserted it, the TemplateEngine will execute whatever code is embedded in userInput. This pattern is essentially the same as dynamically calling eval on user data, and it will lead to template injection vulnerabilities.

Secure Pseudocode

And here’s pseudocode following secure principles:

function renderPageSafe(userInput):
    template_code = "<html><p>Hello {{ name }}</p></html>"   # static template with placeholder
    safe_input = escape(userInput)                           # encode or sanitize user input
    return TemplateEngine.render(template_code, { name: safe_input })

In the secure version, the template is static and contains a placeholder (e.g., {{ name }} or whatever token the engine uses). We then escape or sanitize userInput – for instance, replacing < with &lt;, removing dangerous characters, etc., depending on context. Finally, we call the template engine’s render method with the template and a data map ({ name: safe_input }). The template engine will insert safe_input into the placeholder. Now, if userInput contained something resembling template code, it doesn’t matter – by the time it’s inserted, it’s just data. The template syntax ({{ name }}) was defined by us, and the engine won’t re-interpret the inserted value as code. This pseudocode mirrors the secure patterns we saw in real languages: never concatenate untrusted input into template definitions; always treat it as data. By following this approach, we preserve the intended functionality (the user’s input appears in the output) without introducing execution of unintended commands.

Detection, Testing, and Tooling

Detecting template injection vulnerabilities requires both automated scanning and manual probing, as these flaws can sometimes be subtle. Static Application Security Testing (SAST) tools can help by flagging suspicious patterns in code. Many SAST tools (like Fortify, Checkmarx, or CodeQL queries) have rules to detect user input flowing into template engine APIs. For example, a SAST tool might alert on usage of Template.fromString(userInput) in Python or Engine.Razor.RunCompile(untrustedString, ...) in .NET. These are clear code smells of potential template injection. SAST analysis can be integrated into the build pipeline so that such issues are caught early. However, not all instances are straightforward; sometimes the template engine is used through reflection or complex frameworks, so manual code review is important. Security-focused reviewers should pay close attention to code that handles templating. They should enumerate all places where templates are rendered and verify that in each case, the templates are either constant or constructed only from trusted sources. If any use of templates involves concatenation or dynamic evaluation, that deserves scrutiny.

Dynamic testing (DAST) and penetration testing are extremely effective for identifying template injection in running applications. A common technique is fuzzing inputs with engine-specific payloads. For instance, a tester might input {{7*7}} or #{7*7} into various form fields or endpoints and look for arithmetic results (like 49 or other telltale outputs). Another trick is to input something that will cause a deliberate error in template parsing, such as an unmatched brace ({{ without }}). If the response is an error message containing words like “Jinja2” or “org.thymeleaf.TemplateEngine” or stack traces, it’s a strong indicator that a template engine attempted (and failed) to parse the input (portswigger.net). Testers also use differential inputs: for example, input name=abc vs name={{abc}} and see if the output differs in a non-escaped way. If {{abc}} disappears or triggers a change, the engine likely processed it. Modern security testing tools incorporate these techniques. Burp Suite, a popular web security testing tool, has an extension known as the Backslash Powered Scanner which systematically probes inputs with special characters (including those used in template engines) (owasp.org). This can automate the detection of template injection by observing responses to a battery of test payloads. There are also specialized tools like Tplmap (owasp.org) – an open-source tool specifically designed to find and exploit server-side template injection. Tplmap works analogously to SQLMap (for SQL injection), automating the injection of payloads for various template engines (Smarty, Mako, Jinja2, Velocity, etc.) and even attempting to exploit them to give a shell. Security engineers often run such tools against endpoints (especially those that take rich input) to see if any template injection vectors are present.
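The arithmetic-probe technique described above is easy to script. Below is a minimal sketch of the detection logic only; the send_probe callable is a stand-in for whatever delivers the payload to the target (for example, an HTTP request against a single parameter), and the probe strings are illustrative:

```python
# Engine-specific probes: if the marker arithmetic is evaluated,
# the input reached a template engine as code, not as data.
PROBES = {
    "jinja2/twig": "{{7*191}}",
    "freemarker":  "${7*191}",
    "erb/slim":    "<%= 7*191 %>",
    "velocity":    "#set($x=7*191)$x",
}
MARKER = str(7 * 191)   # "1337" - unlikely to appear in a page by accident

def looks_injectable(send_probe):
    """send_probe(payload) -> response body; returns suspect engine families."""
    hits = []
    for engine, payload in PROBES.items():
        body = send_probe(payload)
        # Evaluated result present and raw payload gone => engine executed it.
        if MARKER in body and payload not in body:
            hits.append(engine)
    return hits
```

Against a vulnerable Jinja2 endpoint, the "jinja2/twig" probe comes back as 1337 with the raw payload gone, flagging that parameter for manual follow-up; a safe endpoint reflects the payload verbatim and produces no hits.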

For client-side template injection, testing might involve delivering payloads that only manifest when rendered in a browser. This is akin to XSS testing – you might insert AngularJS curly braces into fields of a single-page application and then observe the DOM or behavior in the browser. If you find that typing {{alert(1)}} into a field triggers an alert (or any JavaScript execution) when the interface updates, that’s a positive find for CSTI. Automated tools for XSS (like DOM XSS scanners) can catch some of these, but often CSTI requires a manual approach: identifying front-end frameworks in use (Angular, Handlebars, etc.), and then crafting payloads specific to them. For example, if you know an application uses AngularJS, you’d try known sandbox escape sequences. Security research blogs (such as PortSwigger’s research articles) often publish lists of probe payloads for different engines (portswigger.net) (portswigger.net). Incorporating those into your testing methodology is useful. Additionally, the OWASP Web Security Testing Guide (WSTG) has a section on testing for SSTI which outlines an approach to systematically find and verify these issues (like sending sequences and analyzing output) – testers should refer to such guides for structured techniques.

Besides finding the vulnerability, organizations should invest in monitoring and detection in production. This crosses into the operational domain, but it’s worth mentioning in a testing context: if you have application logging, consider adding warnings whenever a template engine is invoked with suspicious inputs. For instance, instrument the template rendering function to log a message if it encounters template syntax in a user-supplied value (some custom code or wrapper could do this). That way if an attacker is probing, you might catch it early from logs. Similarly, using an Intrusion Detection System (IDS) or WAF in front of the application can produce alerts when certain patterns (like {{ or <% appearing in requests unexpectedly) are observed. Of course, such patterns might sometimes appear in normal traffic (for example, a blog platform might legitimately handle double curly braces from posts about code), so tuning is required to avoid false positives.
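The logging instrumentation suggested above can be a thin wrapper around the render call. A sketch, assuming Jinja2-style templates; the regex and logger name are illustrative and would need tuning to your traffic to keep false positives down:

```python
import logging
import re

log = logging.getLogger("appsec.template")

# Opening metacharacter sequences used by common template engines.
SUSPICIOUS = re.compile(r"\{\{|\{%|<%|\$\{|#\{")

def render_with_audit(template, **context):
    """Render a trusted, static template, logging probe-like inputs."""
    for key, value in context.items():
        if isinstance(value, str) and SUSPICIOUS.search(value):
            # Someone is feeding template syntax into a plain data field -
            # possibly an SSTI probe. Log a truncated copy for the SIEM.
            log.warning("possible SSTI probe in %r: %r", key, value[:200])
    return template.render(**context)
```

Because the template itself is static, the suspicious value still renders harmlessly as data; the wrapper only adds visibility into who is probing.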

Tooling for prevention should also be mentioned. Some frameworks provide safer templating alternatives or linters. For example, ESLint (for JavaScript) can be configured with rules to disallow certain dangerous constructions in front-end code (like disallowing new Function() or AngularJS ng-bind-html usage with untrusted sources, which often relate to injection). Similarly, in languages like Python, linters or type checkers might detect usage of eval or template rendering of input. While these are not foolproof, they are part of a secure development toolchain. Security unit tests are another angle: one can write unit tests that ensure certain functions (that should escape output) are indeed escaping them. In summary, detection and testing of template injection span static analysis, automated dynamic scans, and manual pentesting. A combination of these approaches yields the best coverage: static analysis to find obvious coding flaws early, and dynamic testing to catch any runtime issues or complex injection scenarios that static might miss.

Operational Considerations (Monitoring and Incident Response)

Even with robust preventative controls, organizations should be prepared to detect and respond to template injection incidents at runtime. Monitoring is the first line of defense in operations. Applications should be instrumented to log unusual events related to template rendering. For instance, if a template engine throws an exception or renders an unexpected value, these events should be logged with enough detail (though careful not to log sensitive data) to aid in diagnosis. Many template injection attempts will cause errors (especially during reconnaissance), such as syntax errors in templates. Monitoring your application logs for messages like “TemplateSyntaxError” (Jinja2) or “Invalid template” or stack traces referencing template engine classes can reveal active attempts. These logs should ideally feed into a central SIEM (Security Information and Event Management) system where alerts can be set. If an attacker is repeatedly triggering template errors (probing for an injection vector), the security team can be alerted to investigate that user or source IP. Additionally, monitor for patterns like high multiplication results in output (e.g., seeing “Hello 49” frequently could be a quirky sign of 7*7 attempts) – though this is more anecdotal, it underscores that being familiar with how an attack might manifest in logs or output can help operations staff catch it.

Web Application Firewalls (WAFs) in production can also mitigate template injection. Many WAFs have rulesets for common injection attacks, including server-side template injection. For example, the ModSecurity OWASP Core Rule Set contains rules to detect sequences that look like template injections (it might flag things like #{ or ${ with suspicious characters following, or the presence of known template function names in inputs). If tuned properly, a WAF could outright block some malicious requests before they hit the application. However, operational teams must be cautious with WAFs: rules should be tested to avoid false positives that could block legitimate traffic (especially if users might legitimately use curly braces or similar in input, say in a discussion about code or such). Virtual patching via WAF is a valuable approach if you discover a template injection vulnerability and need an immediate fix – you can create a custom WAF rule to filter out the malicious payloads while working on the code fix.

Incident response for a suspected template injection should be swift and thorough due to the high impact potential. If you detect that an injection has likely occurred (through an alert, or perhaps an anomaly like a sudden execution of an unexpected process on the server), treat it similarly to a code execution incident. The first step is containment: for example, disable the vulnerable functionality (if known) or take the application offline to prevent further exploitation. If the injection came through a specific feature (like a “custom report template” feature), turn that feature off if possible. Next, assess indicators of compromise: since SSTI often yields server access, check for any new processes, new files, or unusual network connections originating from the server. Attackers might have installed webshells or backdoor accounts. If your application runs in a container or sandbox, evaluate whether the attacker could have broken out of it (though many don’t isolate template engine execution by default). Gather logs around the timeframe of the attack – what payload was used? For example, you might see in logs that an attacker managed to run os.system('whoami') via a template payload. That information is valuable to scope what they did.

Forensic analysis is important post-incident. Template injection-based breaches may allow data access – you should assume that sensitive data (customer info, credentials, etc.) could have been accessed or exfiltrated if the attacker had RCE. Thus, part of incident response is analyzing databases and storage for any signs of tampering or unauthorized access (e.g., did the attacker create a dump of the user table via the template?). Also, check integrity of code and files – an attacker might leave a malicious template or modify an existing one to maintain persistence (for instance, if they got access to an admin panel, they might implant a hidden template that does evil things whenever rendered). Eradication involves removing any such backdoors, patching the code vulnerability, and possibly rotating secrets that might have been compromised (passwords, API keys from environment variables, etc.). The response team should also consider if the vulnerability was exploited broadly – was it a targeted attack on your system, or could it have been a bot scanning many hosts for a known vulnerability? If the latter, coordinate with any broader community if needed (for example, if it was a 0-day exploit in a common framework, inform the vendor or community).

Operationally, after an incident, improvements should be made. Deploy new monitoring if something was missed. Perhaps add an application health check that ensures templates don’t contain certain content. Some teams implement “canary” values – e.g., place a fake object in the template context such that if someone enumerates it, an alert triggers (like a dummy variable that if accessed indicates template manipulation). While that’s advanced and not common, it shows creative ways to catch attackers early. Another operational aspect is education and process: feed back the incident learnings to the development process so that similar bugs are caught earlier (this might mean adding new SAST rules, more code review on templating features, etc.). In summary, from an ops standpoint, treat template injection attempts/attacks as you would a serious intrusion. Monitor actively, respond decisively, and then harden the system to prevent future occurrences.
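The canary idea mentioned above can be implemented with a sentinel object whose attribute access or rendering fires an alert. A sketch in Python, where the logging call stands in for a real alerting hook and the context key is a deliberately tempting decoy name:

```python
import logging

log = logging.getLogger("appsec.canary")

class TemplateCanary:
    """Decoy placed in the template context. Legitimate templates never
    reference it, so any access indicates someone enumerating the context."""
    def __getattr__(self, name):
        log.critical("template canary accessed: attribute %r", name)
        return ""          # stay silent toward the attacker
    def __str__(self):
        log.critical("template canary rendered")
        return ""

# Drop it into every render context under a tempting name:
context = {"user": "alice", "internal_config": TemplateCanary()}
```

A payload like {{ internal_config.database_password }} then renders as an empty string while tripping a critical alert, giving operations an early, high-signal indicator of template manipulation.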

Checklists

Build-Time Security Checks: During development and build, teams should systematically guard against template injection. First, ensure that all template usage in the code follows safe patterns – use peer code reviews with a checklist item for “no user-controlled template compilation.” If using frameworks like Django, Rails, or ASP.NET MVC where templating is mostly static, verify that any uses of dynamic evaluation (like Django’s safe filter or custom template tags) are necessary and safe. A secure build pipeline will also include linting or static analysis configured to detect risky code. For example, add checks that flag functions known to invoke templates with strings (e.g., Jinja2’s Template() with a non-constant argument). If your project has coding guidelines documentation, include clear guidance: e.g., “Do not use TemplateEngine.parse/compile on any request parameter or user input. Instead, use static templates and context data.” Automated tests can be written to enforce some of this – for instance, a unit test that tries to render a template with an evil payload in a suspected area should result in a safe output or a handled error, not execution. As part of build-time threat modeling, consider each new feature: if it introduces or changes templating logic, model the threats (the team might use OWASP ASVS as a baseline: ASVS 5.2.5 specifically would be checked – “Are we sanitizing or sandboxing any user input to templates?” (github.com)). Finally, dependency management is part of build-time security: use updated versions of template engines. If a certain version of a template library had a known sandbox escape or vulnerability, ensure the project uses a patched version. Tools like npm audit or Retire.js (for Node) and OWASP Dependency Check (for Java, .NET) can catch known vulnerable library versions that might make template injection easier for an attacker.
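For instance, such a regression test might look like the following sketch, assuming a Jinja2-based rendering helper (render_greeting and the payload list are illustrative, not taken from any specific project):

```python
import unittest

from jinja2 import Environment, select_autoescape  # assumes Jinja2

env = Environment(autoescape=select_autoescape())
GREETING = env.from_string("Hello, {{ name }}!")


def render_greeting(name: str) -> str:
    # The pattern under test: user input enters only as context data,
    # never as part of the template source itself.
    return GREETING.render(name=name)


class TemplateInjectionRegressionTest(unittest.TestCase):
    # Probe strings covering several engines' expression syntaxes.
    PAYLOADS = ["{{ 7 * 7 }}", "${7*7}", "#{7*7}", "<%= 7*7 %>"]

    def test_expressions_are_not_evaluated(self):
        for payload in self.PAYLOADS:
            # If any probe were evaluated, "49" would appear in the output.
            self.assertNotIn("49", render_greeting(payload))

    def test_payload_is_echoed_as_plain_text(self):
        self.assertIn("{{ 7 * 7 }}", render_greeting("{{ 7 * 7 }}"))


if __name__ == "__main__":
    unittest.main(exit=False)
```

A test like this in CI makes the safe pattern a contract: anyone who later refactors render_greeting to build the template from its argument breaks the build immediately.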

Runtime Security Measures: At runtime (in production), enforce strict configurations and environment settings that reduce the impact of a potential injection. For example, run the application with the least privileges necessary – if an SSTI is exploited, the damage can be curtailed if the process doesn’t have admin rights or access to all system files. Containerization or AppArmor/SELinux profiles can further isolate the template engine’s actions (e.g., prevent it from writing to certain directories or making outbound network connections). Ensure that any sandbox modes in the template engines are active in production (double-check configuration files or environment toggles, as developers sometimes disable a sandbox for debugging and forget to re-enable it). Monitoring processes and memory usage can also reveal suspicious activity at runtime – if a template injection is being abused to, say, run a crypto miner, you might see unusual CPU spikes or processes. An application performance monitoring (APM) tool that hooks into the runtime could detect functions being invoked that normally aren’t (like system calls); for instance, if Runtime.exec (Java) or Process.Start (.NET) is suddenly being called when it never was during normal operation, that’s a red flag. On the network side, egress filtering is a good runtime protection: if the server doesn’t need to initiate outbound connections to arbitrary hosts, block them with firewall rules. That way, even if an attacker achieves RCE and tries to download a payload or exfiltrate data, they are limited. Similarly, database accounts used by the app should follow the principle of least privilege – if template injection runs some DB queries, at least a read-only account can’t drop tables.
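To illustrate the sandbox point with Jinja2 (assuming that engine), the sandboxed environment rejects the attribute walks that typical SSTI payloads rely on. This is defense in depth only – it limits what injected expressions can do, it does not make rendering attacker-supplied templates safe by design:

```python
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment  # assumes Jinja2

# SandboxedEnvironment intercepts attribute access and calls at render
# time, refusing anything its policy considers unsafe (e.g. dunders).
env = SandboxedEnvironment(autoescape=True)


def try_payload(source: str) -> str:
    """Render an attacker-style template and report what happened."""
    try:
        return env.from_string(source).render()
    except SecurityError as exc:
        return f"blocked: {exc}"


# Typical SSTI probes that walk dunder attributes toward os/system.
# Depending on the Jinja2 version these either raise SecurityError or
# collapse to an empty/undefined value -- but they never leak objects.
print(try_payload("{{ ''.__class__ }}"))
print(try_payload("{{ cycler.__init__.__globals__ }}"))
```

Running the same payloads against a plain Environment would print the str class and a globals dict – a useful before/after demonstration for development teams.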

Security Review and Testing Checklist: Periodic security reviews (or a pre-release review) should have a checklist item dedicated to injection flaws. This means reviewing all new code for injection vectors like template injection, SQL injection, etc. For template injection, the reviewer should verify: (1) All template rendering calls are identified; (2) None of them take direct user input as the template logic; (3) Any user input included in templates is properly encoded or sanitized. If dynamic templating is a feature (by design), then the checklist should ensure that sandboxing is in place and thoroughly tested. For example, if the application claims to restrict user templates to a safe subset, the review should include attempting to break out of that subset. Checklists can also include verifying compliance with standards like OWASP ASVS – e.g., ASVS item 5.2.4 “no use of eval() or dynamic code exec without sandbox” (github.com) is directly relevant; a reviewer would search the codebase for any eval or runtime compile usage. Another checklist point: ensure logging is in place for template errors or unusual events (so if an issue arises later, there is visibility). It’s also wise to have a checklist item for scanning using the latest tools – run the app (maybe in a staging environment) with something like Burp or Tplmap to see if any obvious injection got through. Essentially, a thorough security review before deployment acts as a final checkpoint to catch anything the developers or earlier tests missed. After deployment, regular audits should repeat these checks, especially if the application has modules that let administrators or users update templates – those should be tested anew whenever changes occur, as they are high-risk areas.
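One way to automate part of that checklist is a lightweight AST scan that flags template-rendering calls whose source argument is not a string literal. The sketch below uses Python's ast module; the set of flagged function names is illustrative and should be tuned to the engines your codebase actually uses:

```python
import ast

# Function names whose first argument is treated as template *source*.
# Illustrative set: extend with your project's wrappers as needed.
RISKY_CALLS = {"Template", "from_string", "render_template_string"}


def find_dynamic_template_calls(source: str, filename: str = "<code>"):
    """Flag calls like Template(expr) / from_string(expr) where the
    template source is not a constant -- candidates for manual review."""
    findings = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if not isinstance(node, ast.Call) or not node.args:
            continue
        func = node.func
        name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
        if name in RISKY_CALLS and not isinstance(node.args[0], ast.Constant):
            findings.append((filename, node.lineno, name))
    return findings


code = '''
from flask import render_template_string

def greet(user_input):
    # Template source built from user input -> should be flagged.
    return render_template_string("Hello " + user_input)

def safe():
    return render_template_string("Hello {{ name }}")  # literal: OK
'''
print(find_dynamic_template_calls(code))  # [('<code>', 6, 'render_template_string')]
```

A scan like this is crude compared to a real SAST tool, but it is cheap to run in CI and gives reviewers a concrete worklist instead of a blanket "check all templates" instruction.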

Common Pitfalls and Anti-Patterns

Despite the known risks, certain bad practices with template engines continue to appear. Recognizing these anti-patterns can help developers avoid them. One common pitfall is using template engines as if they were simple string replace utilities, without realizing the execution context they introduce. For example, a developer might think, “I’ll just use Handlebars to replace some placeholders in a string provided by the user,” not realizing that if the user’s string contains {{}}, Handlebars will try to interpret it. The anti-pattern here is failing to distinguish between allowing user input in content vs. in code. To avoid this, treat any use of a template engine as a potentially dangerous operation unless you control the template.
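The content-vs-code distinction can be made concrete with Jinja2 (assuming that engine; the payload string is purely illustrative):

```python
from jinja2 import Environment, select_autoescape  # assumes Jinja2

env = Environment(autoescape=select_autoescape())
user_input = "{{ self.__dict__ }}"  # attacker-controlled text

# BAD: user input is concatenated into the template *source*, so the
# engine parses and evaluates it as code:
#   env.from_string("Hi " + user_input).render()

# GOOD: the template is a fixed string under developer control; user
# input enters only as context *data*.
safe = env.from_string("Hi {{ name }}").render(name=user_input)
print(safe)  # Hi {{ self.__dict__ }}  -- echoed literally, never evaluated
```

The safe version treats the payload exactly like the word "Alice": an opaque value to display, not directives to interpret.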

Another anti-pattern is blacklisting a few bad characters or sequences instead of proper validation/escaping. Developers under time pressure might attempt a quick fix like: userInput.replace("{{", "").replace("}}", "") to prevent Jinja injection, or remove <% to prevent JSP injection. This approach is brittle and usually bypassable. Attackers can often encode or use alternate sequences ({% in Jinja for statements, or different whitespace to fool simple replacements). A partial blacklist gives a false sense of security. The correct approach is whitelisting allowed input patterns (if expecting alphabetic text, enforce that) or using proven libraries for sandboxing. In short, improper or incomplete sanitization is a pitfall – it’s better to use well-vetted libraries or frameworks rather than custom regexes to cleanse input, because the latter are often incomplete.
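A short Jinja2 sketch (assuming that engine) shows why such a blacklist fails: stripping `{{`/`}}` does nothing about statement syntax. The naive_sanitize helper and the string concatenation into the template are the anti-pattern itself, reproduced here only to demonstrate the bypass:

```python
from jinja2 import Environment  # assumes Jinja2

env = Environment(autoescape=True)


def naive_sanitize(value: str) -> str:
    # Anti-pattern: strip only the expression delimiters the developer
    # happens to know about.
    return value.replace("{{", "").replace("}}", "")


# Statement syntax slips straight through the blacklist...
payload = "{% for _ in range(3) %}pwn{% endfor %}"
assert naive_sanitize(payload) == payload  # nothing was removed

# ...and the engine still executes it if the (deliberately vulnerable)
# code concatenates the "sanitized" string into a template.
out = env.from_string("Comment: " + naive_sanitize(payload)).render()
print(out)  # Comment: pwnpwnpwn
```

The loop executing three times is harmless here, but it proves the attacker controls template logic; real payloads would use the same `{% %}` channel for far worse.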

A subtle pitfall occurs in assuming that template sandboxes are flawless. Developers might rely on a template engine’s documentation that says “we sandbox expressions” and therefore be comfortable allowing a bit of dynamic content. However, history has shown that sandboxes can have escape vulnerabilities. For instance, older AngularJS had sandbox bypass bugs; early versions of Jinja2’s sandbox had weaknesses when certain Python objects were allowed. If an application is relying solely on the template engine’s sandbox for security, that’s a risk. The safer stance is defense in depth: even with a sandbox, still limit what input is allowed, and run with least privilege. Essentially, trust but verify; do not assume the sandbox will withstand all attacker ingenuity.

An anti-pattern in some enterprise systems is exposing too much of the application internals to the template context. For instance, giving the template engine a model or context that includes rich objects (like an entire User object with lots of properties, or a database session) can amplify the impact if injection is possible. In some Java templating scenarios, if an attacker can call methods on objects in the context, having a huge object graph gives them many possibilities (maybe one of those objects has a method to send emails or execute commands). It’s better to pass only the data needed for rendering (perhaps just simple DTOs or maps of basic values). Over-privileged template context is a pitfall that can turn a moderate issue into a critical one; keep the context minimal.
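As a sketch (Jinja2 assumed; the User dataclass is hypothetical), the difference between an over-privileged and a minimal context looks like this:

```python
from dataclasses import dataclass

from jinja2 import Environment  # assumes Jinja2


@dataclass
class User:
    name: str
    email: str
    password_hash: str  # must never be reachable from a template


env = Environment(autoescape=True)
user = User("alice", "alice@example.com", "$2b$12$...")

# Over-privileged context (anti-pattern): the whole object -- every
# attribute, including password_hash -- becomes reachable from any
# template expression:
#   env.from_string("Hi {{ user.name }}").render(user=user)

# Minimal context: pass only the primitive values the template needs.
tmpl = env.from_string("Welcome back, {{ name }}!")
print(tmpl.render(name=user.name))  # Welcome back, alice!
```

If an injection does occur, the minimal version leaves the attacker one harmless string to play with instead of a whole object graph.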

A related anti-pattern is not updating template engines or frameworks due to compatibility concerns, thereby missing out on security patches. Organizations sometimes lock to older versions of frameworks where known template injection exploits exist (for example, an old version of a CMS that allowed injecting template tags in posts). Attackers often target such known CVEs. The solution is to integrate regular dependency checks and updates, even if it means refactoring templates that break with new versions. Skipping these updates “because it works right now” can leave an application open to a well-known exploit.

Finally, a classic pitfall is using eval() or equivalent on user input, sometimes as a quick templating hack. For example, using JavaScript’s eval to build HTML from a user-supplied string, or Python’s eval on a user-controlled format string – these are extreme cases of injection and should be outright avoided. If you ever find yourself writing eval(userProvidedString) with the intent of achieving some sort of templating outcome, stop and refactor to a safe method. This is not only an anti-pattern but practically a guaranteed vulnerability.
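A minimal contrast (the attacker payload is illustrative): the eval variant, shown only as a comment, executes attacker code, while the stdlib string.Template substitutes data and nothing else:

```python
from string import Template

user_input = '__import__("os").system("id")'  # attacker-controlled

# Anti-pattern (never do this): evaluating user input as code.
#   greeting = eval(f'f"Hello, {user_input}"')

# Safe alternative: string.Template performs pure substitution --
# placeholders carry data only, never expressions.
greeting = Template("Hello, $name").safe_substitute(name=user_input)
print(greeting)  # Hello, __import__("os").system("id")
```

string.Template is deliberately dumb: it has no expression language to inject into, which is exactly the property you want when the "template" only needs placeholder replacement.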

In summary, common mistakes revolve around underestimating the complexity and capability of template engines. Treat them with the same caution as you would raw SQL queries. Always ask: Are we inadvertently running code supplied by an untrusted source? If the answer could be yes, it’s time to redesign that part. Avoid shortcuts that handle security superficially, and instead use holistic input validation, context restriction, and up-to-date secure libraries. By learning from these pitfalls, developers can steer clear of the traps that have compromised many applications in the past.

References and Further Reading

OWASP Application Security Verification Standard 4.0 (ASVS 4.0) – Particularly sections 5.2.4 and 5.2.5, which define requirements for avoiding eval and protecting against template injection by sanitizing or sandboxing inputs. This standard provides a checklist for secure development practices, emphasizing the need to handle template engines safely. (OWASP ASVS 4.0).

OWASP Web Security Testing Guide (WSTG) – Testing for Template Injection – A comprehensive guide for security testers on how to identify server-side template injection (WSTG-ID: WSTG-INPV-18). It includes example scenarios (Flask/Jinja2, Twig, etc.), payloads to try, and references to tools like Tplmap. This resource is valuable for understanding how template injection manifests and is detected in real applications. (OWASP WSTG SSTI).

“Server-Side Template Injection: RCE for the modern web app” – James Kettle (PortSwigger Research) – This groundbreaking 2015 research (updated in 2025) by James Kettle introduced the world to SSTI as a critical vulnerability. The whitepaper and blog post detail how various template engines (across different languages) can be exploited to achieve remote code execution. It provides deep insight into identifying template injection (sometimes mistaken for XSS) and techniques for exploitation, which in turn sheds light on how to defend against them. (PortSwigger Research: Server-Side Template Injection; PortSwigger Blog).

PortSwigger Web Security Academy – Server-side Template Injection Topic – The Web Security Academy offers interactive labs and explanations for SSTI. The materials here complement the research paper by providing step-by-step examples in a learning format. It’s useful for developers to “play” with a safe vulnerable environment and truly grasp the impact of SSTI, as well as experiment with mitigations. (PortSwigger Academy: Server-Side Template Injection labs – available on PortSwigger’s website).

Acunetix Blog – “Exploiting SSTI in Thymeleaf” (2020) – An article that focuses on a specific Java template engine (Thymeleaf) and how SSTI can be achieved. It walks through the process of discovering a template injection in Thymeleaf and escalating it to execute code, including sandbox escape. This is a great case study for Java developers, and it also references the original research by Kettle and others, emphasizing that even lesser-known engines can be vulnerable. (Acunetix – SSTI in Thymeleaf).

Palo Alto Networks Blog – “Understanding Template Injection Vulnerabilities” by Artur Avetisyan (2022) – A comprehensive overview of both server-side and client-side template injection. It starts from first principles (what template engines are, why template injection happens) and moves into examples and mitigation strategies. Importantly, it also discusses how their WAAS (Web App & API Security) solution can virtually patch such issues, giving a perspective on runtime protection. This blog is useful for both learning the basics and seeing how industry tools address the problem. (Palo Alto – Understanding Template Injection).

Fortify Vulnerability Catalog – Template Injection – Fortify’s vulnerability knowledge base (VulnCat) provides an explanation and an example in JavaScript of template injection, mapping it to CWE-94/95. It’s a concise reference that reinforces why using user input as a template is dangerous, and it shows how Fortify’s static analysis might catch such an issue (with a Handlebars example). Developers can refer to this to understand how a leading SAST tool conceptualizes the flaw. (Fortify VulnCat: Template Injection – available via Fortify’s online VulnCat).

Veracode Blog – “Intro to Secure Coding with Template Engines” (2022) – A blog series that delves into secure usage of template engines across languages. The introductory post discusses the challenges developers face in using templates securely and sets up guidelines that will be detailed per-language in subsequent posts. It emerged in the context of real-world template vulnerabilities (like those in Magento). This is a recommended read for developers to get language-specific advice (Java, .NET, JS, Python) from a security standpoint. (Veracode Blog).

PayloadsAllTheThings – Template Injection – PayloadsAllTheThings is a community-driven repository of attack payloads. The section on Server Side Template Injection contains a wealth of payload samples for different engines, detection techniques, and even one-liner exploits. While this is attacker-oriented information, defenders can use it to understand what patterns to look for and test against. By trying some of these payloads on your application (in a safe testing environment), you can verify whether an engine might be interpretable by user input. (GitHub: PayloadsAllTheThings – Server Side Template Injection).

Tplmap Tool (GitHub) – Tplmap is an open-source penetration testing tool specifically for template injection. It automates detection of the template engine and exploitation to the extent of providing a shell or executing OS commands via the template. Security engineers can use Tplmap in a controlled test to evaluate the risk of an SSTI vulnerability (it supports many engines like Jinja2, Tornado, Twig, Velocity, etc.). Using Tplmap on a test system can illustrate the importance of the issue to development teams (e.g., showing how a seemingly harmless input can lead to a shell). (Tplmap on GitHub).


This content is authored with assistance from OpenAI's advanced reasoning models (classified as AI-assisted content). Material is reviewed, validated, and refined by our team, but some issues may be missed and best practices evolve rapidly. Please use your best judgment when reviewing this material. We welcome corrections and improvements.

Send corrections to [email protected].

We cite sources directly where possible. Some elements may be derived from content linked to the OWASP Foundation, so this work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. You are free to share and adapt this material for any purpose, even commercially, under the terms of the license. When doing so, please reference the OWASP Foundation where relevant. JustAppSec Limited is not associated with the OWASP Foundation in any way.