Password Storage
Overview
Password storage is a fundamental security concern in application design, directly affecting the confidentiality and integrity of user accounts. Storing passwords in plaintext or reversibly encrypted form has led to numerous high-profile breaches where millions of credentials were exposed. In response, modern security standards mandate that passwords be stored using one-way cryptographic hashes with appropriate protections such as unique salts and computational work factors. The goal is to ensure that even if attackers compromise a system and obtain the stored password data, they cannot easily recover the original passwords. By transforming passwords with secure hash functions and careful design, applications significantly reduce the risk of credential exposure and its cascading fallout (such as unauthorized account access and credential stuffing attacks on other services).
A robust password storage scheme acknowledges that determined attackers may obtain stored credentials through database breaches, insider threats, or backups. Given this possibility, password verification data must be cryptographically hardened to resist offline cracking attempts (pages.nist.gov). Rather than relying on obfuscation or hoping a breach never occurs, secure design assumes attackers will get the stored password representations and focuses on making it computationally infeasible for them to derive the plaintext. In practice, this means eliminating any form of plaintext or reversible password storage and instead using proven one-way hashing techniques with modern algorithms and parameters. This approach aligns with industry standards such as the OWASP Application Security Verification Standard (ASVS), which requires that passwords be salted and hashed with an approved one-way function to mitigate offline attacks (cornucopia.owasp.org). It also reflects the guidance of NIST Digital Identity Guidelines, which state that verifiers “SHALL store memorized secrets in a form that is resistant to offline attacks” by salting and hashing them with a suitable key derivation function (pages.nist.gov).
Threat Landscape and Models
The threat model for password storage centers on adversaries who obtain unauthorized access to stored credentials and attempt to recover the original passwords. Attackers may range from opportunistic hackers exploiting a SQL injection vulnerability to state-sponsored actors or malicious insiders with direct database access. In a typical scenario, an attacker who compromises the database can retrieve the password verifier data (whether hashes or encrypted passwords) and then conduct offline cracking. Offline attacks are particularly dangerous because once the attacker has the data in hand, they can use unlimited time and computing resources—bypassing any online protections like account lockouts or rate limiting. This threat is exacerbated by the widespread availability of powerful hardware and tools that automate password cracking. Modern GPUs and cloud computing allow billions of hash computations per second for fast hashing algorithms (arstechnica.com) (cwe.mitre.org), meaning that any password storage scheme using fast or unsalted hashes is extremely vulnerable. Attackers also leverage large dictionaries of leaked passwords and automated mangling rules (e.g., hashcat rule sets) to crack hashes more efficiently by targeting likely passwords, not just random brute force (cheatsheetseries.owasp.org) (arstechnica.com).
A robust password storage design must consider both mass compromise scenarios and targeted attacks. In a mass compromise, the attacker’s goal is to crack as many passwords as possible from a dump of hashed credentials. Weak storage (e.g., unsalted SHA-1 hashes) enables the use of precomputed rainbow tables or parallelized guessing to crack a large fraction of passwords quickly. Proper salting defeats precomputation attacks by requiring each password to be attacked separately (cheatsheetseries.owasp.org). Targeted attacks, on the other hand, focus on a specific high-value account. Even with salting, an attacker can devote considerable computing power to a single hash to attempt to recover a particularly important password. Therefore, the password hashing function itself must impose significant computational cost per guess (via CPU and memory usage) to slow down these attacks (cwe.mitre.org) (cwe.mitre.org). The threat model also includes insider threats: for example, a rogue administrator could copy a credentials database. In such cases, the same principles apply—if the passwords are irreversibly hashed with strong algorithms, the insider gains no immediate advantage without investing in cracking efforts. By contrast, if passwords are stored in plaintext or with reversible encryption and the insider obtains the decryption key, the compromise is instant and total. Modern guidelines explicitly discourage any form of reversible password storage. OWASP emphasizes that passwords should almost never be encrypted: hashing (one-way) is the appropriate approach, since encryption by definition allows recovery of the plaintext given the key (cheatsheetseries.owasp.org). Only in rare legacy integration cases—for example, an application that must supply the user’s password to a legacy system that cannot accept tokens or modern single sign-on—could encrypted storage be contemplated, and even then it is a last resort to be avoided wherever possible (cheatsheetseries.owasp.org).
Common Attack Vectors
Attackers employ a variety of vectors to steal or abuse stored credentials, making password storage a crucial defensive frontier. One common attack vector is database compromise, often achieved through injection vulnerabilities (SQL, NoSQL) or misconfigured database services. If an application is breached and the user table is dumped, any password data stored there becomes the next target of attack. Similarly, backups or cloud storage buckets containing databases can be inadvertently exposed, as has happened in many publicized incidents. If those databases contain poorly protected passwords, attackers can immediately exploit them. Another vector is insider access: an employee or contractor with sufficient privileges might exfiltrate credential data. This threat underscores that even internal access should not equate to seeing plaintext passwords—strong hashing ensures that not even administrators can retrieve user passwords, limiting insider abuse.
Beyond direct data theft, application-layer leaks also pose a risk. Logging and error handling mechanisms can accidentally expose passwords if not carefully designed. For instance, an overly verbose authentication error might print a password to a log file, or debug logging could record user credentials in plaintext. Attackers who gain read access to log files or monitoring systems could harvest these secrets. Therefore, secure password storage involves not just the database column type, but also policies on handling password data in transit and in memory. During user registration or login, the plaintext password should exist only briefly in memory for hashing, and never be written to disk or logs. Even seemingly benign features like "Remember Me" or client-side storage (browser local storage) can become vectors if they store credentials insecurely. Modern applications use secure authentication tokens (or re-prompt for passwords) rather than storing raw passwords on the client, precisely to avoid creating new attack surfaces for credential theft. In summary, every pathway by which password data flows through a system—APIs, inter-service communication, logging, backups—must be scrutinized to ensure the password remains protected or ephemeral at all times.
Once an attacker obtains hashed passwords, offline cracking becomes the predominant attack technique. The attacker will attempt to guess passwords, hash each guess in the same way, and compare to the stolen hashes (cheatsheetseries.owasp.org). Common strategies include dictionary attacks using lists of known passwords (including the billions of credentials leaked from other sites), brute-force attempts (especially for short passwords), and hybrid attacks that mutate dictionary words (adding numbers, symbols, etc.). If the hashing scheme is weak (e.g., unsalted or using a fast hash like MD5/SHA-1), attackers can leverage rainbow tables or parallel computing to test huge numbers of candidate passwords quickly (cwe.mitre.org). This was vividly demonstrated in breaches like LinkedIn’s 2012 incident, where millions of unsalted SHA-1 hashes were cracked with relative ease once leaked. Conversely, if the scheme uses a strong, slow hash function with unique salts, the attacker’s job becomes much harder – they must perform a resource-intensive computation for each guess for each account. The difference is dramatic: an efficient GPU can test on the order of 10^8–10^9 SHA-1 or MD5 hashes per second (arstechnica.com), but only a tiny fraction of that rate for a memory-hard algorithm like Argon2id with high memory settings. Attackers will still succeed against weak or common passwords given enough time, but robust storage can limit the damage by ensuring only the weakest passwords are crackable and that the process is painfully slow. Essentially, proper password storage shifts most of the security burden onto the strength of the hashing scheme and not solely onto users’ password choices (though strong password choices are still important).
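The guess-hash-compare loop described above can be illustrated with a toy dictionary attack. The sketch below is a deliberately simplified demonstration against unsalted MD5 (the weak scheme under discussion); the word list, the "stolen" hash table, and the crack function are hypothetical stand-ins, not tooling from any real cracker:

```python
import hashlib

# Hypothetical "stolen" table of unsalted MD5 hashes, and a tiny word list.
wordlist = ["letmein", "dragon", "password", "qwerty"]
stolen_hashes = {
    "5f4dcc3b5aa765d61d8327deb882cf99": "alice",  # md5("password")
}

def crack(stolen, words):
    """Hash each candidate once and look it up: with no salt, one
    computation tests that guess against every stolen account at once."""
    recovered = {}
    for word in words:
        digest = hashlib.md5(word.encode("utf-8")).hexdigest()
        if digest in stolen:
            recovered[stolen[digest]] = word
    return recovered

recovered = crack(stolen_hashes, wordlist)  # {'alice': 'password'}
```

With unique salts, the lookup-table shortcut disappears: the attacker must redo the hashing work per account per guess, which is exactly the cost multiplication that salting plus a slow hash is designed to impose.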
Impact and Risk Assessment
The impact of insecure password storage is far-reaching. If an application stores passwords in plaintext (or reversibly encrypted with an accessible key), any breach of the credential store results in immediate compromise of all user accounts. The attacker can simply read or decrypt the passwords and use them to impersonate users at will. The damage extends beyond the breached application: because password reuse is rampant, a breach exposing actual passwords often leads to credential stuffing attacks on other websites, where attackers try the stolen email/password combinations to take over accounts elsewhere (arstechnica.com). Thus, a single insecure storage breach can facilitate a chain reaction, undermining user accounts across multiple services. The reputational damage to the breached organization is also severe. Users and regulators have little tolerance for poor password practices—incidents where companies admitted to storing plaintext passwords (as happened in the infamous RockYou and Facebook internal storage incidents) are met with harsh criticism and regulatory scrutiny. From a compliance perspective, storing passwords improperly may violate data protection regulations and industry standards, potentially leading to fines or legal liability.
Even when passwords are hashed but using inadequate methods, the risk remains high. Hashes that are unsalted or computed with fast general-purpose algorithms (MD5, SHA-1, or plain SHA-256) offer insufficient resistance to attackers. Once obtained, such hashes can be cracked in bulk using precomputed tables or accelerated brute force, meaning the breach impact approaches that of plaintext storage. For example, unsalted SHA-1 hashes were cracked by the millions in real breaches, revealing users’ original passwords in cleartext. The risk here is that organizations may have a false sense of security (“at least we weren’t storing plaintext”) when in reality the weak hashing did little to protect their users. The window of exposure is critical in risk assessment: with fast hashes, attackers can expose a large percentage of passwords within hours or days of a breach. With a strong, slow hash (e.g., Argon2id, bcrypt with high cost, or PBKDF2 with sufficient iterations), the cracking timeline for each password stretches to months or years, and many may effectively never be cracked if they are strong or if the cost is continually increased. This delay can make the difference between a minor incident and a catastrophic breach. It gives organizations time to detect the breach, inform users, and have them change passwords before an attacker can exploit them. It also markedly reduces the likelihood that an attacker will bother cracking everything—if it takes, say, 10 seconds per guess per account, attacking even a million accounts might be impractical.
Risk assessment for password storage should also consider password strength distribution. Even with perfect hashing, users who choose very weak passwords (like "password123") will be at risk if an attacker is willing to brute force or guess those specific values. Proper storage raises the cost of getting even those weak passwords, but does not guarantee they remain secret. Therefore, organizations often enforce minimum password requirements and check new passwords against known-breached password lists (per NIST’s guidance to disallow commonly used passwords). In effect, secure storage is one layer in a defense-in-depth strategy: it assumes that some passwords will be weak, but by not storing them in decipherable form, it significantly mitigates the risk. On the other hand, if storage is done correctly, even if many users have weak passwords, an attacker with stolen hashes might crack the weakest fraction but be unable to crack those beyond a certain complexity. This gradation of impact (only easiest passwords cracked) is far preferable to a total compromise scenario. Finally, consider the operational impact: if a breach occurs but passwords were well-hashed, the required incident response (forcing password resets, etc.) is more orderly and confidence-inspiring than if passwords were plaintext (in which case immediate emergency resets and possibly personal data compromises are in play). Well-hashed passwords mean the organization can honestly communicate that passwords were protected in storage, which can reduce harm to user trust and limit liability.
Defensive Controls and Mitigations
To secure password storage, modern applications employ a combination of cryptographic controls and architectural strategies. The cornerstone is one-way hashing with a per-user salt. Each user’s password, at creation or change time, is processed by a password hashing function that produces a fixed-size hash value. A cryptographically secure salt (random value) is generated uniquely for each password and combined with the password in the hashing process (cheatsheetseries.owasp.org). The salt is then stored alongside the hash (either concatenated in the hash record or in separate database fields). Salting ensures that no two users will have the same hash value unless they coincidentally chose the same password and had the same salt, which is computationally improbable if salts are large and random (cheatsheetseries.owasp.org). This uniqueness defeats attacks like rainbow tables: an attacker cannot precompute hashes for common passwords in advance without knowing each user’s random salt, forcing them to recompute from scratch for each hash they attempt to crack. OWASP ASVS requires at least a 32-bit (4 byte) salt (cornucopia.owasp.org) (NIST likewise mandates ≥32 bits (pages.nist.gov), though in practice salts are usually 16+ bytes for good measure). In summary, salting is a basic but essential control that isolates each password’s security from the others.
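As a minimal illustration of per-user salting, the sketch below uses only Python's standard library (`os.urandom` for the salt, `hashlib.pbkdf2_hmac` for the slow hash). The `pbkdf2_sha256$...` storage format and the function names are hypothetical choices for this example; production code would normally lean on a vetted library rather than hand-rolling even this much:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> str:
    """Hash with a unique random salt using PBKDF2-HMAC-SHA256 (stdlib only)."""
    salt = os.urandom(16)  # 16 random bytes, comfortably above the 32-bit minimum
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    # Store the parameters alongside the hash so verification can reproduce them
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    _, iterations, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), bytes.fromhex(salt_hex), int(iterations)
    )
    # Constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(digest.hex(), digest_hex)
```

Because each call generates a fresh salt, hashing the same password twice yields two different stored records, which is precisely the property that defeats rainbow tables and cross-account hash comparison.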
Beyond salting, the choice of hashing algorithm and configuration is critical. Not all hash functions are suitable for password storage. General-purpose cryptographic hashes (like SHA-256 or SHA-3) are designed to be fast and are optimized for throughput, which is the opposite of what we want for password protection (cwe.mitre.org). Instead, password storage should use a password hashing function or key derivation function (KDF) specifically designed to be slow (computationally expensive) and ideally memory-intensive. This deliberate slowness (often configurable via a work factor) dramatically reduces an attacker’s ability to test password guesses in bulk (cwe.mitre.org). Modern options include Argon2id, scrypt, bcrypt, and PBKDF2, each of which has parameters to tune the computation cost. For example, Argon2 won the 2015 Password Hashing Competition, and its Argon2id variant (the recommended mode) is designed to resist GPU cracking by using both CPU and memory hardness (cheatsheetseries.owasp.org). It takes parameters for memory size, iterations (time cost), and degree of parallelism. Scrypt likewise uses memory-hard techniques to make large-scale cracking difficult. Bcrypt (dating to 1999 but still relevant) uses a CPU-intensive key setup and iterative expansion, and PBKDF2 (standardized in PKCS#5) applies many HMAC operations iteratively. The work factor (also called cost factor) in these algorithms should be set as high as feasible for the deployment environment (cheatsheetseries.owasp.org) (cwe.mitre.org). This means calibrating how many iterations or how much memory can be used such that verifying a password is not too slow for legitimate users but is extremely taxing for an attacker attempting millions of trials. OWASP guidelines suggest, for instance, using Argon2id with on the order of 2 iterations and ~19 MiB of memory allocated, or scrypt with N=2^17 (~131072) and r=8, p=1, which provide strong resistance (cheatsheetseries.owasp.org).
For bcrypt, a minimum cost (work factor) of 10 is recommended (cheatsheetseries.owasp.org) – higher on systems that can handle it – and for PBKDF2 (HMAC-SHA-256), at least on the order of 600,000 iterations in modern contexts (cheatsheetseries.owasp.org). These numbers are periodically adjusted as hardware improves. A general rule is that verifying one password should take perhaps 100 milliseconds to 1 second of server time; anything significantly faster is a missed opportunity to slow down attackers, whereas anything much slower might impact user experience or invite denial-of-service on the authentication endpoint (cheatsheetseries.owasp.org).
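The calibration described above can be done empirically: benchmark one verification on the actual deployment hardware and raise the work factor until it lands in the target window. The sketch below is one possible approach using PBKDF2 from the standard library; the function names and the doubling strategy are illustrative, not a prescribed procedure:

```python
import hashlib
import os
import time

def time_pbkdf2(iterations: int, samples: int = 3) -> float:
    """Average wall-clock seconds for one PBKDF2-HMAC-SHA256 verification."""
    salt = os.urandom(16)
    start = time.perf_counter()
    for _ in range(samples):
        hashlib.pbkdf2_hmac("sha256", b"benchmark-password", salt, iterations)
    return (time.perf_counter() - start) / samples

def calibrate(target_seconds: float = 0.1, iterations: int = 100_000) -> int:
    """Double the iteration count until one verification costs ~target_seconds."""
    while time_pbkdf2(iterations) < target_seconds:
        iterations *= 2
    return iterations
```

Because hardware improves over time, this benchmark is worth rerunning periodically (or on each deployment) so the stored work factor keeps pace, rather than being set once and forgotten.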
Another defense-in-depth control is the use of a pepper, which is a secret value (similar to a salt but not stored with the hash) added into the hashing process. A pepper is essentially a site-wide secret key or token that is combined with each password (for example, as an additional salt or as a key in an HMAC) before hashing (cheatsheetseries.owasp.org). Because the pepper is not stored in the database, an attacker who steals the hash and user-specific salt still cannot attempt guesses unless they also obtain the pepper value. Proper pepper implementation requires careful handling: the secret pepper must be stored securely, typically in an external secrets vault or hardware security module (HSM) separate from the application database (cheatsheetseries.owasp.org). If the pepper is ever compromised, it should be rotated (changed), which unfortunately means all passwords would need rehashing – essentially forcing a password reset for all users (cheatsheetseries.owasp.org). Thus, peppers add significant protection against certain scenarios (like SQL injection leading only to DB dump, without code execution on the app server), but they come with operational overhead. NIST’s guidance echoes this concept, recommending an additional keyed hash iteration with a secret “salt” stored separately, to render stolen hashes impractical to crack (pages.nist.gov). It’s important to note that peppering is an optional defense; it is not a substitute for per-user salts or strong hash algorithms, but rather an additional layer for high-security environments or defense against high-impact breaches.
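One common way to apply a pepper is as an HMAC key over the password before the main password hash runs, as described above. The sketch below assumes a hypothetical PASSWORD_PEPPER secret (in production it would come from a vault or HSM, never the credentials database) and uses stdlib PBKDF2 as a stand-in for whatever slow hash the site actually deploys:

```python
import hashlib
import hmac
import os

# Hypothetical pepper: in production, fetch this from a secrets vault or HSM,
# never from the same database that stores the hashes.
PEPPER = os.environ.get("PASSWORD_PEPPER", "demo-pepper-do-not-use").encode()

def prehash_with_pepper(password: str) -> bytes:
    """Keyed pre-hash: the pepper acts as the HMAC key over the password."""
    return hmac.new(PEPPER, password.encode("utf-8"), hashlib.sha256).digest()

def hash_password(password: str, iterations: int = 600_000) -> str:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", prehash_with_pepper(password), salt, iterations)
    return f"{iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    iterations, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", prehash_with_pepper(password), bytes.fromhex(salt_hex), int(iterations)
    )
    return hmac.compare_digest(digest.hex(), digest_hex)
```

An attacker who dumps only the database gets salts and hashes but cannot even begin guessing without the pepper, which is the scenario (SQL injection without code execution) this layer is meant to cover.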
Holistic mitigation strategies also include password policies and user guidance. While not part of storage itself, encouraging stronger passwords (longer passphrases, blocking common passwords) will improve the overall security when combined with strong hashing. Even a top-notch hashing scheme can be undermined if many users choose “123456” or “password1!” – those will likely be cracked even from Argon2id hashes, simply because they appear in every attacker’s dictionary. NIST guidelines (SP 800-63B) recommend against arbitrary complexity rules in favor of checking passwords against known compromised lists and allowing length and ease of use to improve naturally. In practice, many organizations implement a haveibeenpwned API check or similar to bar users from selecting extremely common passwords. This complements the storage defenses by tackling the problem at both ends: storage makes guessing hard, and password policies reduce the chance that the attacker’s guesses succeed quickly.
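The breached-password check mentioned above is typically done via the Pwned Passwords range API, which uses k-anonymity so the service never sees the full password hash. The sketch below shows only the client-side hash splitting; the network call is indicated in a comment rather than performed:

```python
import hashlib

def hibp_prefix_and_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 of a password for a k-anonymity range query."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return sha1[:5], sha1[5:]

prefix, suffix = hibp_prefix_and_suffix("password")
# Only the 5-character prefix leaves the client, e.g.
#   GET https://api.pwnedpasswords.com/range/5BAA6
# The response lists hash suffixes with breach counts; the client searches it
# for `suffix` locally, so the full hash is never revealed to the service.
```

If the suffix appears in the response, the password is known-breached and should be rejected at registration or change time, complementing the storage-side defenses.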
Finally, architectural isolation can be a mitigation: the component that handles password hashing (e.g., an authentication service) should be isolated and hardened. This limits the exposure of password handling code and keys. For example, if a microservice architecture is used, the authentication service could be the only one that knows the pepper and performs hashing, returning only success/failure to other services. This way, even if another part of the application is compromised, the attacker cannot directly retrieve or verify hashed passwords without calling the auth service (where rate limits and monitoring might detect abuse). In summary, defensive controls for password storage boil down to: never store plaintext or reversible passwords; always hash with a strong, slow algorithm; use unique salts; consider a pepper for added security; and complement these with user password policies and robust overall system design.
Secure-by-Design Guidelines
Secure password storage should be built into the design of authentication systems from the outset, rather than patched on later. A guiding principle is never needing to retrieve a user’s password in plaintext. The system should be designed such that the only operation needed is verifying that a user-provided password matches the stored hash. This means features like “email me my password” are fundamentally insecure by design – instead, offer password reset flows, which do not require knowing the existing password, only confirming identity via email or two-factor channels. At the design phase, architects should adopt the stance that passwords will be treated as write-only secrets: the application accepts them, processes them into a secure hash, and thereafter deals only with the hash for verification. Any design that calls for reversible encryption of passwords should trigger immediate re-evaluation of requirements. For instance, if an external system needs to authenticate a user, consider delegating via OAuth/OIDC or exchanging tokens rather than storing the user’s password to replay it. Modern authentication protocols (OAuth2, SAML, etc.) exist precisely to avoid sharing actual passwords across systems. In rare cases where an external integration absolutely cannot avoid using the password, it might be preferable to ask the user for it at the moment needed (and immediately discard after use) rather than store it persistently.
Secure design also involves planning for algorithm agility and upgrades. A system launched in 2025 with Argon2id and certain parameters might need an update in the future if weaknesses are discovered or if hardware improvements render the chosen work factor inadequate. The storage format should ideally encode what algorithm and parameters were used for each password (many hash libraries include this in the hash string, or it can be stored in separate columns). This way, if you later switch to a new algorithm or higher cost, you can still verify old hashes and transparently upgrade them. A common pattern is to store hashes in a format like $algorithm$parameters$salt$hash. For example, bcrypt hashes typically start with $2b$10$... where 10 is the cost. Argon2 has a standardized string format as well (PHC string format). Designing with this in mind means when a user logs in, you can detect that their password is hashed with an old method, verify it, and then re-hash with the new method and update the stored value. This incremental upgrade approach (sometimes called “opportunistic rehashing”) ensures you are not stuck with outdated hashes indefinitely (cheatsheetseries.owasp.org) (cheatsheetseries.owasp.org). In some cases, a bulk rehash might be done by forcing password resets, but that is less user-friendly. It is better design to allow multiple algorithms in transition and update gradually. The design should also consider a secure way to handle users who never log in (thus never triggering a rehash); policies might be set to expire hashes older than a certain age to compel an update for long-dormant accounts if needed.
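The opportunistic-rehash pattern above can be sketched briefly. The storage format and function names here are hypothetical, and stdlib PBKDF2 stands in for whatever algorithm the system uses; the point is the shape of the login flow, which verifies first and then upgrades stale parameters while the plaintext is still in hand:

```python
import hashlib
import hmac
import os

CURRENT_ITERATIONS = 600_000  # today's policy; raise as hardware improves

def hash_password(password: str, iterations: int = CURRENT_ITERATIONS) -> str:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"

def login(password: str, stored: str) -> tuple[bool, str]:
    """Verify, then opportunistically rehash if stored parameters are stale.

    Returns (ok, possibly-updated stored value); the caller persists the
    second element whenever it changes.
    """
    _, iterations, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
    )
    if not hmac.compare_digest(digest.hex(), digest_hex):
        return False, stored
    if int(iterations) < CURRENT_ITERATIONS:
        # The plaintext is available right now, so upgrade transparently
        return True, hash_password(password)
    return True, stored
```

A real system would also branch on the algorithm tag (the `pbkdf2_sha256` prefix here) so that, say, legacy bcrypt hashes can be verified and migrated to Argon2id through the same login path.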
Another design guideline is minimal exposure of password data throughout the system. Only the authentication module should handle raw passwords, and even there, the plaintext should live only for the brief moment of hashing. For example, in a web application following MVC architecture, the controller handling login should immediately pass the password to the hashing routine and then discard it. Avoid passing the plaintext around in an insecure manner – e.g., do not store it in an HTTP session object or send it to other internal APIs. In memory-managed languages, one cannot easily clear memory, but developers can at least avoid creating unnecessary copies of the password in memory. In languages like Java and .NET, using char arrays for passwords (and clearing after use) instead of immutable strings is a design choice to reduce lingering remnants in memory. At the system design level, ensure that any inter-service communication that might involve credentials is encrypted in transit. NIST requires that password input be sent over an authenticated protected channel (e.g., TLS) to prevent eavesdropping (pages.nist.gov) – this is part of secure design, as storing passwords safely is moot if they can be stolen in transit. Thus, from the moment a user enters a password to the moment it is hashed and stored, every step should be designed to limit exposure: TLS in transit, minimal plaintext handling, immediate hashing, and secure storage of the hash.
Finally, incorporate secure defaults in the design. If using a framework or library, leverage its password storage features instead of writing your own. Many frameworks have built-in user authentication systems (e.g., Django, Ruby on Rails, ASP.NET Identity) that implement current best practices for hashing and salting. Using those means that improvements (like increasing iteration counts or moving to a new algorithm) might come as part of framework updates. If you design a custom solution, ensure the default configuration is secure (for example, default to Argon2id with strong settings) and that configuration allows future increases in cost factors. Also, design the system to handle error cases safely – e.g., if the hashing function fails or an out-of-memory error occurs (perhaps due to large Argon2 memory usage), ensure the application doesn’t fall back to an insecure path or store a plaintext by accident. Secure design anticipates failure modes and ensures there is no scenario where password storage reverts to an unsafe mechanism.
Code Examples
In this section, we explore concrete code examples demonstrating insecure and secure password storage practices across several programming languages. Each example pair illustrates common pitfalls (“bad” approach) and recommended approaches (“good” approach) with commentary.
Python (Good vs Bad)
Insecure Example (Python): Consider a Python web application that hashes passwords using a fast, unsalted hash like MD5, or worse, stores the password as plain text. In the snippet below, the developer uses Python’s built-in hashlib to hash a password with MD5 and stores it, without any salt:
import hashlib

def store_password_insecure(user_id, password):
    # Insecure: using MD5 (fast hash) with no salt
    hash_value = hashlib.md5(password.encode('utf-8')).hexdigest()
    # Storing hash_value directly (e.g., in a database)
    save_password_hash(user_id, hash_value)
This approach is insecure because MD5 is a very fast algorithm and can be cracked with brute force or lookup tables. The lack of a salt means that identical passwords will produce identical hashes, enabling rainbow table attacks and making it trivial to crack common passwords. An attacker with a list of hash values from this scheme can rapidly compute or lookup the original passwords. In fact, MD5 has been considered broken for password hashing for many years – it’s too quick and has known collision weaknesses.
Secure Example (Python): Python supports robust password hashing through libraries such as bcrypt or argon2-cffi, or via the built-in hashlib.pbkdf2_hmac for PBKDF2. In the following example, we use bcrypt to hash a password with a work factor of 12, which automatically handles salt generation and a slow hash computation:
import bcrypt

def store_password_secure(user_id, password):
    # Secure: using bcrypt with automatic salt and configurable work factor
    # Generate a salt and hash combined with a cost factor of 12
    hashed = bcrypt.hashpw(password.encode('utf-8'), bcrypt.gensalt(rounds=12))
    # hashed is a byte string containing salt, cost, and hash in one
    store_user_hash(user_id, hashed)

def verify_password(user_id, input_password):
    stored_hash = get_user_hash(user_id)  # retrieve stored bcrypt hash (bytes)
    # bcrypt.checkpw will hash input_password with salt & cost from stored_hash and compare
    if bcrypt.checkpw(input_password.encode('utf-8'), stored_hash):
        return True  # password is correct
    else:
        return False
In this secure example, bcrypt.gensalt(rounds=12) creates a random salt and encodes the work factor (2^12 iterations internally) into the salt. The bcrypt.hashpw function then produces a hash that encapsulates the salt, cost, and hashed output. The resulting hashed value (often rendered as a string like $2b$12$...) is what gets stored in the database. This approach is secure because bcrypt is a slow, adaptive hash: the cost factor of 12 will significantly slow down any brute force attempts (compared to a single MD5 which is lightning-fast). Additionally, each password gets a unique salt, preventing attacks that exploit hash comparisons across users. The use of bcrypt.checkpw for verification ensures a constant-time comparison and reuses the correct salt and cost parameters embedded in the stored hash. Python’s bcrypt library handles all low-level details, reducing the chance of developer error. An even newer alternative would be argon2-cffi, which can implement Argon2id hashing with memory hardness, but it similarly provides high-level functions (argon2.PasswordHasher().hash(password)) to generate a secure hash. The key is that the developer is using a purpose-built password hashing function rather than a generic hash. By doing so, the storage is resilient against offline attacks: even if an attacker steals the bcrypt hashes, cracking them would require an immense amount of time due to the work factor.
JavaScript (Good vs Bad)
Insecure Example (JavaScript/Node.js): An insecure approach in a Node.js application would be using a quick hash like SHA-1 or SHA-256 without a salt. For example, a developer might use Node’s crypto module to hash passwords directly because it’s easy, not realizing this results in insufficient security:
const crypto = require('crypto');

function hashPasswordInsecure(password) {
    // Insecure: using SHA-256 with no salt (fast and unsalted)
    const hash = crypto.createHash('sha256').update(password).digest('hex');
    // This hash is stored for the user (e.g., in a database)
    return hash;
}

// Example usage:
let storedHash = hashPasswordInsecure('password');
// storedHash is "5e884898da28047151d0e56f8dc62927..." — every user who picks
// "password" gets this exact value, because there is no salt
The above code is problematic because SHA-256, while cryptographically strong in terms of collision resistance, is designed to be fast. With modern hardware, an attacker can compute billions of SHA-256 hashes per second, making brute force or dictionary attacks feasible (cwe.mitre.org). The code also doesn’t use a salt, meaning an identical password for two users will result in the same stored hash string. An attacker could also precompute hashes of millions of common passwords and simply look up matches. In essence, this Node.js approach offers little more protection than plaintext storage against a determined attacker.
Secure Example (JavaScript/Node.js): A secure Node.js implementation would use a slow hashing function. The popular choice in the Node ecosystem is the bcrypt library (e.g., bcryptjs or the native bcrypt Node module). Alternatively, Node’s built-in crypto.pbkdf2 can be used for PBKDF2. Here’s an example using bcrypt:
const bcrypt = require('bcrypt');
async function storePasswordSecure(userId, password) {
    // Secure: using bcrypt with a salt and a work factor
    const saltRounds = 12;
    const hash = await bcrypt.hash(password, saltRounds);
    // The hash includes the salt and cost; store it for the user
    await saveUserHash(userId, hash);
}
async function verifyPassword(userId, inputPassword) {
    const storedHash = await getUserHash(userId);
    const match = await bcrypt.compare(inputPassword, storedHash);
    return match; // true if password is correct
}
In this code, bcrypt.hash() automatically generates a random salt and incorporates it into the resulting hash string (which will start with something like $2b$12$ indicating the algorithm and cost). The saltRounds variable (here set to 12) controls the work factor; higher values make hashing slower. By using this function, the developer offloads the complexity to the library: the output hash is securely formatted and can be directly stored. Verifying a password is similarly simple: bcrypt.compare() will parse the stored hash, extract the salt and cost, hash the input, and do a constant-time comparison. This is secure because bcrypt’s slowdown significantly hinders attackers. For example, if one hash check takes, say, 100ms, an attacker can do at most 10 hashes per second per CPU core, versus millions per second with a raw SHA-256. This orders-of-magnitude reduction means even if the database is compromised, guessing passwords becomes a very costly operation. Additionally, since each hash has a different salt, attackers cannot amortize their work across multiple accounts; they must attack each hash independently. Using a well-vetted library like bcrypt also means subtle issues (like proper handling of length limits or avoiding encoding pitfalls) are already considered by the implementers. Node’s asynchronous bcrypt.hash and bcrypt.compare functions ensure that even though hashing is slow, it doesn’t block the event loop, maintaining server responsiveness while still enforcing security.
Java (Good vs Bad)
Insecure Example (Java): A common mistake in Java applications is using the standard cryptographic libraries to hash passwords without applying a proper key stretching algorithm or salt. For instance, a developer might do something like this using Java’s MessageDigest:
import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;
import javax.xml.bind.DatatypeConverter; // legacy API, removed from the JDK in Java 11+

public class InsecurePasswordStorage {
    public static String hashPasswordInsecure(String password) throws Exception {
        // Insecure: using SHA-1 digest without a salt or iterations
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] hashBytes = md.digest(password.getBytes(StandardCharsets.UTF_8));
        // Convert to hex string for storage
        return DatatypeConverter.printHexBinary(hashBytes);
    }
}
This Java code computes a SHA-1 hash of the password and outputs it in hex. It is insecure because SHA-1 (like other fast hashes) can be computed quickly by attackers, and no salt is used. SHA-1 is also no longer considered cryptographically strong (it has known collision weaknesses), but even if SHA-256 were used in the same way, the approach would still be inadequate for password storage. Without a salt or iterations, this scheme is vulnerable to the same issues discussed: rainbow tables, dictionary attacks, and immediate compromise of common passwords. Many older Java systems and tutorials used such patterns (or even worse, MD5), and those systems need urgent upgrades. Storing SHA1(password) is only marginally better than plaintext: an attacker does not need to search SHA-1's full $2^{160}$ output space, because concentrating on likely passwords (dictionaries, leaked-password lists, and common patterns) lets them invert unsalted hashes almost immediately.
Secure Example (Java): Java’s standard libraries offer PBKDF2 via the SecretKeyFactory and PBEKeySpec, which can be used to implement salted, iterated hashing. Additionally, third-party libraries such as Spring Security or Apache Shiro provide high-level password hashing utilities. A simple example using PBKDF2 with HMAC-SHA256 is shown below:
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class SecurePasswordStorage {
    // Configuration for PBKDF2
    private static final int SALT_LENGTH = 16;     // 16 bytes = 128 bits salt
    private static final int ITERATIONS = 100_000; // work factor (could be higher)
    private static final int KEY_LENGTH = 256;     // output hash length in bits

    public static String generateSalt() {
        SecureRandom rand = new SecureRandom();
        byte[] salt = new byte[SALT_LENGTH];
        rand.nextBytes(salt);
        return Base64.getEncoder().encodeToString(salt);
    }

    public static String hashPassword(String password, String base64Salt) throws Exception {
        byte[] salt = Base64.getDecoder().decode(base64Salt);
        KeySpec spec = new PBEKeySpec(password.toCharArray(), salt, ITERATIONS, KEY_LENGTH);
        SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] hash = skf.generateSecret(spec).getEncoded();
        // Store salt and hash (both need to be saved; here we return a concatenation as an example)
        return base64Salt + "$" + Base64.getEncoder().encodeToString(hash);
    }

    public static boolean verifyPassword(String password, String stored) throws Exception {
        String[] parts = stored.split("\\$");
        String saltPart = parts[0];
        String hashPart = parts[1];
        String computed = hashPassword(password, saltPart);
        // Constant-time comparison of the decoded hash bytes
        return MessageDigest.isEqual(
                Base64.getDecoder().decode(computed.split("\\$")[1]),
                Base64.getDecoder().decode(hashPart));
    }
}
In this secure Java example, we first generate a 16-byte random salt using SecureRandom. We then use SecretKeyFactory with PBKDF2WithHmacSHA256 to derive a 256-bit hash from the password and salt, using 100,000 iterations. The resulting hash (and the salt) are encoded in Base64 and stored together (separated by a $ in this example string). Verification involves extracting the salt from storage, recomputing the PBKDF2 hash on the input password with the same salt and iteration count, and then comparing the result with the stored hash. This approach is considerably more secure: PBKDF2 with 100k iterations is slow (on the order of 100ms or more per hash on a single CPU core), which greatly inhibits brute force. Each password has a unique salt, so hashes cannot be compared across accounts or precomputed. The use of SecureRandom ensures salts are unpredictable. While 100k iterations is a solid baseline, current recommendations (such as OWASP’s) often call for even higher iterations – potentially hundreds of thousands or more, balancing against system performance (cornucopia.owasp.org). Notably, newer algorithms like Argon2id might be available in Java through third-party libraries (e.g., Bouncy Castle’s implementation), and those could be used similarly with appropriate parameters. The key security improvement here is evident in the code: the inclusion of a salt and a high iteration count (work factor) transforms the storage scheme from trivial to extremely resistant. Even if an attacker obtains the salt$hash string, they cannot use a GPU to simply invert it; they must perform 100k HMAC-SHA256 operations for each guess, per user, making mass cracking largely infeasible. Additionally, by using well-tested library calls (JCE’s PBKDF2 implementation), we avoid pitfalls like writing our own slow loop or mismanaging byte encodings. 
A note on the final comparison: plain string equality in Java short-circuits on the first mismatched character and can leak timing information, so a constant-time primitive such as MessageDigest.isEqual should be used to compare the hashes.
.NET/C# (Good vs Bad)
Insecure Example (.NET): In older .NET applications, one might find code that hashes a password using a single-shot algorithm like SHA-256 or even MD5, with no salting. For example, using the System.Security.Cryptography namespace, a developer might do:
using System.Security.Cryptography;
using System.Text;

public string ComputeHashInsecure(string password) {
    // Insecure: MD5 hashing with no salt
    using (MD5 md5 = MD5.Create()) {
        byte[] inputBytes = Encoding.UTF8.GetBytes(password);
        byte[] hashBytes = md5.ComputeHash(inputBytes);
        // Convert to hex string
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < hashBytes.Length; i++) {
            sb.Append(hashBytes[i].ToString("X2"));
        }
        return sb.ToString();
    }
}
This C# code uses MD5 (an extremely fast and now insecure hash function) to hash the password, and then converts it to a hexadecimal string. The result might look random, but it is entirely deterministic and straightforward for attackers to crack. With no salt, any two identical passwords will result in the same MD5 hash. Tools and rainbow tables for MD5 are widespread; an attacker who steals these hashes can reverse many of them to plaintext almost instantaneously for common passwords. Even for less common passwords, MD5’s speed (billions of hashes per second on modern hardware) means an offline brute force can succeed in a short time. This is clearly unacceptable in today’s security landscape. The code also demonstrates a manual way of converting the hash to hex; modern implementations would simplify that, but the core issue remains the use of MD5 or SHA without salting or stretching.
Secure Example (.NET): In .NET (especially in recent .NET Core/5/6+), there are built-in facilities for password hashing. For instance, ASP.NET Core Identity uses PBKDF2 by default (with HMAC-SHA256, 10,000 iterations by default in older versions, increased in newer versions). One can also use the Rfc2898DeriveBytes class directly to implement PBKDF2. Here’s an example using PBKDF2 via Rfc2898DeriveBytes, which is analogous to the Java example:
using System;
using System.Security.Cryptography;

public class PasswordHasher {
    private const int SaltSize = 16;     // 128-bit salt
    private const int Iterations = 100000;
    private const int KeySize = 32;      // 256-bit hash

    public static string HashPassword(string password) {
        // Generate a random salt
        byte[] salt = new byte[SaltSize];
        using (var rng = RandomNumberGenerator.Create()) {
            rng.GetBytes(salt);
        }
        // Derive a 256-bit subkey (PBKDF2 with HMAC-SHA256)
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations, HashAlgorithmName.SHA256)) {
            byte[] key = pbkdf2.GetBytes(KeySize);
            // Format: salt and hash (Base64 for storage)
            string saltB64 = Convert.ToBase64String(salt);
            string hashB64 = Convert.ToBase64String(key);
            return $"{saltB64}:{hashB64}";
        }
    }

    public static bool VerifyPassword(string password, string storedSaltHash) {
        var parts = storedSaltHash.Split(':');
        if (parts.Length != 2) return false;
        byte[] salt = Convert.FromBase64String(parts[0]);
        byte[] storedHash = Convert.FromBase64String(parts[1]);
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations, HashAlgorithmName.SHA256)) {
            byte[] key = pbkdf2.GetBytes(KeySize);
            // Constant-time, byte-by-byte comparison
            return CryptographicOperations.FixedTimeEquals(key, storedHash);
        }
    }
}
In this secure C# example, we generate a 16-byte salt using a secure RNG. We then use Rfc2898DeriveBytes with SHA-256 to derive a 32-byte hash (which is 256 bits) from the password and salt, using 100,000 iterations. The salt and the hash are then stored together, separated by a colon in this case (they could also be stored in separate database fields). To verify, we take the stored salt, run the same PBKDF2 on the input password, and compare the result with the stored hash using CryptographicOperations.FixedTimeEquals (which is a .NET method that does a constant-time comparison to prevent timing attacks on the comparison). This method adheres closely to NIST and OWASP recommendations: a unique salt, a high iteration count (work factor), and a secure hash algorithm. If we want to adopt even stronger algorithms, there are .NET libraries for Argon2 (for example, using Isopoh.Cryptography.Argon2 package) that could replace PBKDF2 in a similar fashion, specifying memory and time cost parameters. But PBKDF2 is FIPS-approved and widely available, making it a common choice in enterprise environments (cheatsheetseries.owasp.org). By using this approach, if an attacker obtains the stored salt:hash strings, they face a daunting challenge. Each password guess they want to test requires 100k SHA-256 HMAC operations, and they must do this for each account separately because the salt differs. This essentially enforces a linear scaling of attack cost with respect to the number of accounts and the hardness of each account’s password. The storage format is also future-proof to a degree: it’s straightforward to add a version tag or include the iteration count in the stored string if needed, to enable changing the work factor in the future.
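The future-proofing idea of a self-describing record is language-neutral. As an illustrative sketch (in Python for brevity, with a hypothetical pbkdf2_sha256$iterations$salt$hash storage format; the names are ours):

```python
import base64
import hashlib
import hmac
import os

ITERATIONS = 100_000  # current work factor, recorded in every stored record

def hash_password(password: str, iterations: int = ITERATIONS) -> str:
    # Self-describing record: algorithm tag, iteration count, salt, hash
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return "pbkdf2_sha256$%d$%s$%s" % (
        iterations,
        base64.b64encode(salt).decode(),
        base64.b64encode(dk).decode(),
    )

def verify_password(password: str, stored: str) -> bool:
    algo, iters, salt_b64, hash_b64 = stored.split("$")
    if algo != "pbkdf2_sha256":
        raise ValueError("unknown hash scheme: " + algo)
    salt = base64.b64decode(salt_b64)
    expected = base64.b64decode(hash_b64)
    # The iteration count comes from the record, not from the code,
    # so raising ITERATIONS later does not break old records.
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, int(iters))
    return hmac.compare_digest(dk, expected)
```

Because each record names its own algorithm and cost, the work factor can be raised for new hashes while old ones continue to verify until they are upgraded.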
Pseudocode (Good vs Bad)
To solidify the concept, we present a pseudocode comparison of bad vs. good password storage practices. Pseudocode abstracts away language specifics to focus on the algorithm and logic.
Insecure Pseudocode:
# BAD PRACTICE
function registerUser(username, password):
    # Directly store plaintext password (insecure!)
    database.save(username, password)
In this worst-case scenario, the application simply stores the password as given by the user, in plaintext. This is obviously dangerous: any read access to the database, backup, or logs immediately reveals user credentials. Unfortunately, history has seen real systems do exactly this, resulting in devastating breaches. A slightly less blatant (but still insecure) variant might be:
function registerUser(username, password):
    hash = FastHash(password)   # e.g., MD5 or SHA-1
    database.save(username, hash)
Here the password is hashed, but with a fast, unsalted hash (FastHash stands for an unsuitable function). This pseudocode might represent developers using standard library hash functions without considering salt or speed. The outcome is insecure for reasons we have extensively discussed: fast hashes enable fast cracking, and no salt allows reuse attacks. Both of these pseudocode examples fail to protect passwords in the event of a breach.
Secure Pseudocode:
# GOOD PRACTICE
function registerUser(username, password):
    salt = SecureRandomBytes(length=16)
    hash = SlowHash(password, salt, cost=appropriate)
    database.save(username, salt, hash)

function verifyLogin(username, inputPassword):
    record = database.load(username)
    salt = record.salt
    storedHash = record.hash
    inputHash = SlowHash(inputPassword, salt, cost=record.cost)
    if inputHash == storedHash:
        grantAccess(username)
In the secure pseudocode, when a new user registers, the system generates a random 16-byte salt using a cryptographically secure random generator. It then computes SlowHash(password, salt, cost) – this represents a password hashing function like Argon2, bcrypt, or PBKDF2, which takes a password, the salt, and a cost factor (number of iterations or memory settings). The resulting hash (and the salt and ideally the cost or algorithm identifier) is stored for the user. On login, the system retrieves the salt and stored hash for the username, recomputes the hash of the input password using the same salt and cost parameters, and then compares it. Access is granted only if the hashes match. This design ensures that even if an attacker steals the salt and hash from the database, they cannot easily reverse it to get the password: they would have to guess passwords and perform the same expensive hash function for each guess. The inclusion of the salt thwarts any use of precomputed hashes or sharing of work between accounts. The cost factor slows down hashing – for legitimate users this is a one-time slight delay during login, but for an attacker trying millions of guesses, it’s a game-changer. The pseudocode uses == to compare hashes; in a real implementation, a constant-time comparison would be used to avoid timing attacks, but that detail is beyond pseudocode scope. The key is that this design embodies the core of secure password storage: one-way, salted, and deliberate computational cost.
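In Python, for example, that constant-time comparison is a one-liner in the standard library; a tiny sketch:

```python
import hmac

def hashes_match(computed: bytes, stored: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where a
    # mismatch occurs, so its running time leaks nothing about how
    # much of the prefix matched.
    return hmac.compare_digest(computed, stored)
```
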
Detection, Testing, and Tooling
Detecting weaknesses in password storage implementations is a critical part of an application security program. Many issues can be identified through secure code review or using automated static analysis tools. For example, a static analysis tool or linting rule might flag the use of MD5 or SHA1 in code sections related to password handling, since these are red flags for insecure hashing. Similarly, if the code calls an encrypt routine for passwords or stores user passwords in configuration files, these patterns can be recognized as dangerous. Security-focused static analyzers (like Codacy, SonarQube, Checkmarx, Veracode, etc.) often have rules for common mistakes: e.g., “use of weak cryptographic hash”, “plaintext credential storage”, or use of deprecated crypto functions. During a code review, an AppSec engineer should trace how passwords are handled from the point of entry on registration or change, to storage in the database. Signs of a robust implementation include use of well-known password hashing APIs (as illustrated in our good code examples) and absence of any custom “encryption” or homegrown manipulation. If custom code is present, reviewers should verify that it is doing proper salting and using an appropriate algorithm. For instance, encountering something like MessageDigest.getInstance("SHA-256") in Java code for password storage would prompt further investigation and likely a recommendation to use a proper KDF with a salt and iterations.
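As a rough illustration of the kind of pattern matching such tools perform, a toy scanner might grep for the calls flagged above (the patterns and function name here are ours, and no substitute for a real SAST product):

```python
import re

# Toy patterns for weak password hashing calls in a few languages;
# a real analyzer would also check that the call sits in password-handling code.
WEAK_HASH_PATTERNS = [
    re.compile(r"""MessageDigest\.getInstance\(\s*["'](MD5|SHA-1|SHA-256)["']"""),  # Java
    re.compile(r"""createHash\(\s*["'](md5|sha1|sha256)["']"""),                    # Node.js
    re.compile(r"\bMD5\.Create\(\)"),                                               # .NET
    re.compile(r"hashlib\.(md5|sha1|sha256)\("),                                    # Python
]

def flag_weak_hashing(source: str) -> list[str]:
    """Return the weak-hash call sites found in a source string."""
    hits = []
    for pattern in WEAK_HASH_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(source))
    return hits
```
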
Dynamic testing can also reveal password storage issues, albeit indirectly. One classic test is the “forgot password” functionality: if an application can email you your existing password, that’s an immediate indication that the password is stored in plaintext or reversible form (since the server could retrieve it). A secure system should never be able to send you your original password – it should instead reset it. As a tester, using the forgot password feature can thus be very telling. Another dynamic test is to create two accounts with the same password, then see if any observable data (like a user ID or profile info via some API) reflects identical hashes. In a black-box test, you might not see the hash directly, but sometimes vulnerabilities or verbose error messages can leak clues (for example, a SQL error that shows a substring of a hash). Penetration testers also often check configuration or administrative interfaces: for example, poorly secured admin pages might allow downloading user records. If such an interface shows password fields (even if masked), it’s a bad sign – either the actual passwords are stored, or the system is doing something questionable. Additionally, testers examine password change flows: a secure design will re-hash the password when changed; an insecure one might, for example, fail to update an encrypted value, or apply double hashing incorrectly. These are detailed tests that require understanding the system internals, but are part of a thorough assessment.
From a tooling perspective, there are also database inspection tools and scripts that can help identify weak password storage after the fact. For example, if you have a dump of the user table and see that all password hashes share a certain length and character set (e.g., 32 hex characters, which suggests an MD5 hash, or 40 hex characters for SHA-1), you can deduce the hashing scheme. Some tools specifically try to recognize common hash formats in databases. During incident response or audits, it’s useful to check whether a hash format matches a known secure scheme (like the $2b$ prefix for bcrypt, or $argon2id$ for Argon2). If none of the common secure patterns are seen, that’s a red flag. For instance, a column of 28-character Base64 strings decodes to 20 bytes – the SHA-1 output size – and so suggests unsalted SHA-1 stored in Base64. There are databases of hash signatures that security testers use to classify which algorithm likely produced a given hash string.
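A minimal classifier in that spirit might look like the following sketch (prefixes and lengths as described above; purely heuristic, with illustrative labels):

```python
import re

def classify_hash(stored: str) -> str:
    """Heuristically guess the scheme behind a stored password hash."""
    if stored.startswith("$argon2id$"):
        return "argon2id"
    if stored.startswith(("$2a$", "$2b$", "$2y$")):
        return "bcrypt"
    if re.fullmatch(r"[0-9a-fA-F]{32}", stored):
        return "possible unsalted MD5"    # 128-bit digest in hex
    if re.fullmatch(r"[0-9a-fA-F]{40}", stored):
        return "possible unsalted SHA-1"  # 160-bit digest in hex
    return "unknown -- investigate"
```
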
Specialist tools can also audit compliance with password storage policies. For example, there are scripts to test hashing speed (to ensure it’s slow enough): a security team might take the running code or library and measure how fast it can hash a password, ensuring it’s within expected slow parameters. If it’s too fast, it might indicate the iteration count is set too low. Fuzzing and negative testing can also be relevant: passing extremely long passwords to the system to see if it handles them (does it truncate them? Does it crash? Bcrypt, for example, has a 72-byte limit and requires handling beyond that). This kind of testing ensures that the implementation doesn’t silently fail or weaken when faced with edge cases.
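The speed audit described above can be as simple as timing the KDF the application uses. A sketch using PBKDF2 from Python’s standard library (the 50 ms threshold is illustrative, not a standard; tune it to your own policy):

```python
import hashlib
import os
import time

def measure_hash_time(iterations: int, samples: int = 3) -> float:
    """Return the average seconds per PBKDF2-HMAC-SHA256 hash."""
    salt = os.urandom(16)
    start = time.perf_counter()
    for _ in range(samples):
        hashlib.pbkdf2_hmac("sha256", b"benchmark-password", salt, iterations)
    return (time.perf_counter() - start) / samples

def work_factor_ok(iterations: int, min_seconds: float = 0.05) -> bool:
    # Flag the configuration if a single hash completes "too fast",
    # which suggests the iteration count is set too low.
    return measure_hash_time(iterations) >= min_seconds
```
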
Tooling for development has improved to help get password storage right. Many frameworks, as mentioned, come with secure defaults. Utilizing tools like OWASP’s Enterprise Security API (ESAPI) in Java, or Python’s Passlib, can take care of hash management. Using these libraries is a form of tooling that prevents developers from needing to know all the details. Furthermore, secret management systems (HashiCorp Vault, AWS KMS, Azure Key Vault, etc.) can manage peppers or encryption keys if those are used, ensuring they’re not hardcoded or stored in source control. Integrating those tools properly is part of secure implementation. For detection, some organizations employ credential scanners on their logs or code to ensure no plaintext passwords are lingering. For example, scanning source code to ensure no string literal looks like a password, or scanning logs in real-time for patterns that match actual user passwords (to catch if something is logging them inadvertently). While not common, such practices can catch mistakes early.
Lastly, consider cryptographic compliance tools: if your organization needs to be FIPS 140 compliant, you might run tests to ensure that only FIPS-approved algorithms (like PBKDF2 or certain allowed hashes) are used. Tools or libraries can be configured to operate in FIPS mode and will throw errors if a disallowed algorithm is called. This can be part of build pipelines — essentially, an automated check that the code doesn’t use disallowed crypto for password storage. All these detection and tooling measures aim at one outcome: verifying that the password storage mechanism in the application is as per the best practices (salted, strong hash, properly configured) and catching any deviation early in the development or testing cycle, rather than after a breach.
Operational Considerations (Monitoring and Incident Response)
Operational security around password storage is about ensuring that once the code is deployed and running, there are monitoring and response plans covering this critical asset. One key consideration is access monitoring: the password database or wherever hashes are stored should have strict access controls and logging. Access to these tables (especially by administrators or through service accounts) should be monitored and perhaps limited to specific times or roles. For example, if an application suddenly reads all user password hashes (which it normally never does except perhaps for a backup or migration), that’s suspicious. Such an event could indicate either a breach in progress or an internal misuse. Database activity monitoring solutions can flag unusual queries, like a SELECT * on the user credentials table at odd hours. Similarly, if using an HSM or vault for peppers, access to those secrets should be logged, with alerts raised on anomalies (e.g., multiple failed attempts to access the pepper, or access from an unauthorized host).
Another operational best practice is to never log sensitive data, including passwords or even password hashes, in application logs. Many incidents have occurred where supposedly secure systems accidentally wrote passwords in plaintext to log files (for example, due to verbose debugging or by capturing request data). DevOps teams should ensure that logging configurations in production redact or exclude any fields that might contain passwords. Some web frameworks automatically mask password fields in logs, but custom logging or error handling might bypass that, so it should be carefully reviewed. Incident response plans should include procedures for if and when password data is suspected to be compromised. If a breach of hashed passwords is detected or even strongly suspected, the safest course is often to initiate a forced password reset for users. The response plan can stratify this: for instance, if the hashes were bcrypt with high cost and the breach is contained quickly, one might have time to inform users to change passwords at their leisure (still assuming worst-case that some might crack). If the hashes were weaker or the attacker likely had time, immediate expiration of all credentials might be warranted.
Monitoring user behavior after a potential breach is also important. If you suspect password hashes were stolen, keep an eye on login attempts – an attacker with some cracked passwords might log in as those users. Unusual login patterns (many login attempts against various accounts from the same IP, or logins to many previously dormant accounts) can signal that some cracked credentials are being exploited. In such cases, having multi-factor authentication (MFA) enabled can limit damage: even if passwords are cracked, the attacker might be unable to get past the second factor. Thus, from an operational lens, encouraging or mandating MFA for users is a complementary measure to password storage security, and incident response should consider escalating that requirement if a breach is known. For example, post-incident, you might mandate that all users set up MFA on next login, to mitigate risk from any remaining cracked passwords.
Another operational aspect is pepper rotation. If you have implemented a pepper (a secret key in the hashing process), you need a strategy for rotating it in case it becomes compromised or as a periodic hygiene measure. Rotating a pepper is not trivial: since the pepper isn’t stored with the hashes, you can’t re-derive the plaintext or a new hash without user input. The only straightforward way to change the pepper is to have users re-enter their passwords (e.g., on next login) so you can rehash with the new pepper, or to force resets. This is disruptive, so peppers should be handled with high security to avoid ever needing rotation. Storing them in an HSM and restricting access is vital, as noted. In incident response, if an application server is breached, one must assume the pepper could be in memory or otherwise compromised, and thus consider a full credential reset. These are nightmare scenarios for an organization, which is why many lean on the strength of salted hashing alone (which doesn’t require an extra secret to protect). Nonetheless, high-security systems (like those in banking) sometimes use peppers, and their operations teams must practice recovering from a pepper compromise scenario.
Backup and data lifecycle is another consideration. Password hashes, even though not plaintext, should be treated as sensitive data. Backups of databases that include credential data should be encrypted and protected. There have been cases where an otherwise secure system was undone because a backup file with hashes was left on an open server or cloud storage without protection. Operational processes should ensure that whenever data is copied (for testing, analytics, etc.), either the password fields are sanitized or the same security controls apply to those copies as to production. If developers use production data in a lower environment for testing (which is generally discouraged, especially for credentials), they must mask or re-hash passwords before use. Many organizations generate fake users or at least replace real password hashes with random values in test datasets to avoid any risk.
Finally, ongoing assessment and updates are part of operational security. Over time, the team should revisit the hashing parameters. Monitoring can include measuring authentication server performance and login latency – if it’s extremely fast and servers have excess capacity, perhaps the work factor could be raised to increase security. Conversely, if monitoring shows that login requests are taking too long or causing CPU spikes (maybe because user volume increased), it might indicate a need to scale systems or in rare cases dial down the cost slightly to maintain availability (though adding hardware is preferable to lowering security settings). Security teams should stay informed about developments in cryptography: for example, if a new attack on an algorithm is discovered or if NIST/OWASP updates their recommendations (say Argon3 comes out or recommended iterations double due to hardware advances), an operational plan should exist to implement those changes. This could involve scheduling a re-hash of all passwords (again, usually by prompting users at login or in batches) to apply a stronger algorithm or higher cost. In summary, from an operational perspective, password storage security isn’t “set and forget” – it requires careful monitoring for signs of compromise, disciplined handling of credentials in all environments, and agility to respond to both incidents and evolving best practices.
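The re-hash-at-login upgrade path mentioned above can be sketched as follows (the record fields, helper names, and iteration counts here are hypothetical):

```python
import hashlib
import hmac
import os

CURRENT_ITERATIONS = 200_000  # raised from a hypothetical older 100_000

def pbkdf2(password: str, salt: bytes, iterations: int) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

def login_and_maybe_upgrade(record: dict, password: str) -> bool:
    """Verify against the stored parameters; transparently re-hash with
    the current work factor when an older record is seen."""
    candidate = pbkdf2(password, record["salt"], record["iterations"])
    if not hmac.compare_digest(candidate, record["hash"]):
        return False
    if record["iterations"] < CURRENT_ITERATIONS:
        # The plaintext is only available during login, so upgrade now.
        record["salt"] = os.urandom(16)
        record["iterations"] = CURRENT_ITERATIONS
        record["hash"] = pbkdf2(password, record["salt"], record["iterations"])
    return True
```

Each successful login silently migrates that one account, so the fleet of hashes converges on the new work factor without a disruptive mass reset.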
Checklists (Build-Time, Runtime, Review)
Build-Time Considerations: During the build and development phase, security teams and developers should ensure certain fundamentals are in place for password storage. First, confirm that the chosen architecture never requires retrieving plaintext passwords – design reviews at this stage should catch any workflow that might tempt developers to store passwords insecurely. Next, verify that an appropriate hashing library or algorithm is selected upfront (for example, deciding on Argon2id, bcrypt, or PBKDF2 early in the project) and that it’s implemented correctly. The build environment should include any necessary libraries (e.g., the argon2 library for your language or a bcrypt module) and configuration for them. Unit tests should be written to validate that hashing and verification work as expected: e.g., tests that hashing the same password twice yields different hashes (due to salt), that a correct password verifies, and a wrong password does not. This can catch misconfigurations like forgetting to use a salt or using a constant salt by mistake. Additionally, ensure that password length limits or other edge conditions are handled (for instance, tests for very long passwords, non-ASCII characters, etc. to ensure the code processes them correctly). At build-time, also consider threat modeling: developers and security engineers should enumerate what could go wrong (What if the DB is stolen? What if an admin misuses credentials? What if the hashing algorithm needs upgrade?) and ensure mitigations are in place for each (e.g., the data is useless without spending huge compute, the admin cannot see plaintext, the system can migrate hashes). In effect, the build-time checklist is about baking security into the implementation: use proven functions, no custom insecure crypto, integrate secret management (for pepper) if needed, and test thoroughly that the scheme behaves as expected under various scenarios.
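The unit tests suggested above might look like this sketch (a hypothetical PBKDF2-backed hash_password/verify_password pair; the iteration count is deliberately low here to keep tests fast):

```python
import hashlib
import hmac
import os
import unittest

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000)

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000), stored)

class PasswordHashingTests(unittest.TestCase):
    def test_same_password_different_hashes(self):
        # Unique salts must make repeated hashes of one password differ.
        self.assertNotEqual(hash_password("secret"), hash_password("secret"))

    def test_correct_password_verifies(self):
        salt, h = hash_password("correct horse")
        self.assertTrue(verify_password("correct horse", salt, h))

    def test_wrong_password_rejected(self):
        salt, h = hash_password("correct horse")
        self.assertFalse(verify_password("wrong", salt, h))

    def test_long_and_non_ascii_passwords(self):
        # Edge conditions: very long and non-ASCII inputs round-trip.
        for pw in ["x" * 1024, "pässwörd✓"]:
            salt, h = hash_password(pw)
            self.assertTrue(verify_password(pw, salt, h))
```
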
Runtime Considerations: Once the system is deployed, certain checks and safeguards should be continuously in effect to maintain password storage security. One item is performance monitoring: ensure that the hashing operation is not so slow as to cause timeouts or an unresponsive login, because that could both degrade user experience and potentially be leveraged for denial of service. If login spikes cause strains, that might require scaling rather than weakening the algorithm. Also at runtime, monitor logs and systems for any inadvertent leakage of sensitive info. As mentioned, password fields should typically never appear in logs; using data loss prevention (DLP) tools or regex scanners on logs can provide an extra layer of assurance that no passwords or hashes slip through. Another runtime checklist item is environment security: the servers or containers running the authentication logic should be hardened (patched OS, minimal access, etc.), because if an attacker can execute code on those, they might tap into the password process (for example, hooking into the point after a password is entered but before hashing). Using things like application whitelisting, disabling unnecessary diagnostics on production (so a developer cannot accidentally dump memory which might contain passwords or keys), are all part of the runtime security posture.
If the system uses a pepper stored in an HSM or secure module, runtime includes ensuring the connectivity and security of that module – e.g., the application should fail safe if it cannot reach the HSM rather than bypass hashing. There should also be alerts if, say, the HSM reports any anomalies. Regular backups of credential data should be happening (we want to be able to recover from data loss), but those backups need secure handling as described: check that backups are encrypted and that keys to those backups are managed. Periodic access reviews are also a runtime task – ensure that only the necessary service accounts or microservices can query user hash data. One could implement additional safeguards like database row encryption for the password hash column (some databases allow a transparent encryption where only the app can read it). While the hash is not secret per se (the salt and hash are needed by the app to verify), adding another layer such that even a DBA cannot directly read the hash without going through the app can be considered. This might complicate operations, so it’s a trade-off taken in high security contexts.
Review (Audit) Considerations: Security reviews, whether internal audits or external assessments, should periodically evaluate the effectiveness of password storage controls. An audit checklist would include verifying that the currently used algorithm and parameters still meet industry best practices. For example, an audit in 2025 might check: are we using Argon2id or an equally strong function? If using bcrypt, is the cost factor updated to a recommended level (perhaps it was set to 12 in 2020; is it time to raise it to 14 given new server capacity)? If using PBKDF2, do the iterations need increasing to keep up with Moore’s Law and faster hardware? Auditors will also confirm that salts are properly random and unique – for instance, by scanning the user database for duplicate salt values (there should be none, or extremely few collisions if generation is sound; a collision may hint at a faulty RNG or a reuse bug). They will also check that no plaintext passwords lurk in any configuration and that the application isn’t inadvertently storing passwords elsewhere (for example, caching them, or sending them to an analytics service by mistake). A review might involve reading documentation and code to ensure the team followed through on the secure design: e.g., confirming that the forgot-password feature issues reset tokens rather than password reminders, and that multi-factor authentication is in place to mitigate stolen passwords.
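The duplicate-salt scan could be sketched as follows. The `records` list stands in for rows pulled from the user table; a real audit would stream them from the database rather than load them all at once.

```python
from collections import Counter

def find_duplicate_salts(records):
    """Return any salt values shared by more than one user record."""
    counts = Counter(salt for _user, salt in records)
    return {salt: n for salt, n in counts.items() if n > 1}

# Stand-in for rows from the user table.
records = [
    ("alice", b"\x01" * 16),
    ("bob",   b"\x02" * 16),
    ("carol", b"\x01" * 16),  # duplicate -- possible faulty RNG or salt reuse
]
dupes = find_duplicate_salts(records)
assert dupes == {b"\x01" * 16: 2}
```

Any non-empty result from a scan like this warrants investigation of the salt-generation code path.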
Penetration testing (as part of review) often specifically tries to break or bypass password storage. For instance, testers might attempt to dump the database via SQL injection and then assess how long it takes to crack a sample of the hashes (with permission, in a controlled way). If testers succeed in cracking a significant number of hashes quickly, that is feedback that the storage is too weak and needs improvement. Another check is verifying that the system has no hidden backdoors, such as an alternate login path that accepts an administrative bypass (which some systems unwisely implement, sidestepping hashing entirely). Everything should funnel through the standard verification process. Code review in audits often re-examines the critical sections in light of updates – if the application was upgraded or new developers touched the auth module, did anything change? Performance optimizations or refactoring can inadvertently weaken security (e.g., switching to a faster hash to “improve response time” – a well-meaning but dangerous change). A thorough review catches these.
Lastly, the review phase should ensure that documentation and processes are aligned: the team should have documentation on how passwords are stored (algorithm, version, where the code is), so that in an incident everyone knows what they’re dealing with. If the documentation says “we use bcrypt cost 12” but the code shows cost 10, that discrepancy needs resolving. Having up-to-date knowledge is crucial during incident response – e.g., to quickly assess risk, one must know exactly what protections were in place. Thus, part of the checklist is confirming that the password storage approach is well-documented and that runbooks exist for scenarios like algorithm upgrade or breach response. In summary, from build-time to run-time to periodic reviews, a comprehensive approach ensures that secure password storage is not a one-time checkbox, but an ongoing practice integrated into the SDLC and operational life of the application.
Common Pitfalls and Anti-Patterns
Despite extensive guidance, certain pitfalls and anti-patterns in password storage continue to appear in real-world systems. Recognizing these helps in avoiding them:
One classic pitfall is inventing a custom hashing scheme under the mistaken belief that it will be more secure. Developers sometimes chain multiple hashes (e.g., taking an MD5 of the password and then an SHA-1 of the result) or add fixed “secret” strings into the hash in non-standard ways. These homebrew schemes often have subtle flaws and rarely add security beyond standard approaches. In fact, they can give a false sense of security. For instance, double-hashing with different fast algorithms doesn’t significantly slow an attacker compared to a single hash – it might even introduce weaknesses (as seen in some password “masking” schemes that ended up reducing entropy). It’s almost always better to use a well-vetted algorithm/library (Argon2, bcrypt, etc.) as-is, with proper parameters, than to tinker with custom transformations.
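A quick timing sketch illustrates why chained fast hashes add little: an attacker can test on the order of a thousand guesses against an MD5-then-SHA-1 “custom” scheme in less time than a single guess against a properly configured KDF (PBKDF2 with an illustrative 600,000 iterations is used here as the comparison point).

```python
import hashlib
import time

password = b"hunter2"

# Time 1000 guesses against the "custom" chained-fast-hash scheme.
start = time.perf_counter()
for _ in range(1000):
    hashlib.sha1(hashlib.md5(password).digest()).digest()
chained_1000 = time.perf_counter() - start

# Time a single guess against a deliberately slow KDF.
start = time.perf_counter()
hashlib.pbkdf2_hmac("sha256", password, b"0123456789abcdef", 600_000)
single_kdf = time.perf_counter() - start

# ~1000 guesses against the chained scheme cost less than one KDF guess.
assert chained_1000 < single_kdf
```

The gap only widens on attacker hardware, since fast hashes parallelize extremely well on GPUs while memory-hard functions like Argon2 resist that acceleration.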
Another anti-pattern is using reversible encryption for passwords. Sometimes this stems from a business requirement to “recover” user passwords (for support purposes, or because an integrated system needs the plaintext). Storing encrypted passwords (even with strong encryption like AES) means that anyone who gains access to the encryption key and the data can get all the passwords back. It also means the application must have the key accessible to decrypt during login comparisons, which increases the attack surface (the key might be in memory, or in a config file). This essentially downgrades the security to that of the key’s secrecy. If an attacker compromises the server, they not only can steal hashes but also the key to decrypt them, rendering the protection moot. This pattern still surfaces in legacy enterprise apps. The correct approach is to redesign the dependency on plaintext (e.g., switch to an OAuth flow or hashed comparison). If absolutely unavoidable, the encryption key must be guarded like a crown jewel (in an HSM, with very limited use), but even then, it’s a huge risk. OWASP and others strongly advise against ever emailing passwords to users or storing them reversibly (cheatsheetseries.owasp.org), exactly to avoid this scenario.
A subtle pitfall is reusing salts or using predictable salts. A salt should be random and unique per password. An anti-pattern would be using something like the username or user ID as the salt. While this is different per user, it’s predictable (an attacker might guess that user 1001 has salt “1001”, etc.). Predictable salts also fail to defend against certain precomputation attacks if the method becomes known. Reusing the same salt for all passwords (or a small set of salts) is even worse, as it defeats the purpose entirely – it’s effectively akin to having no salt for cross-user protection. Yet, some systems historically used a constant salt stored in configuration (not to be confused with pepper; here we mean a non-secret salt that just isn’t unique). That is an anti-pattern because it doesn’t force the attacker to brute force each hash independently; cracking one reveals others that have the same password.
Inadequate salt length or randomness is another issue, though less common now. Using a very short salt (e.g., 4 bytes) or a non-cryptographic RNG could, in theory, lead to collisions or make brute forcing salts feasible. As per standards, at least 32 bits (4 bytes) of salt is required (pages.nist.gov), but typically we use 128-bit (16 byte) or more. This gives astronomically low probability of any two salts repeating. If a developer used, say, 2-byte salts, then collisions would occur and the salt wouldn’t fully isolate hashes.
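The salt-length guidance above can be checked empirically. This short sketch contrasts 128-bit salts (no collisions in practice) with toy 2-byte salts, where the birthday bound makes collisions near-certain within a couple of thousand users.

```python
import os

# 128-bit salts from the OS CSPRNG: collisions are effectively impossible,
# even across very large user populations.
salts = {os.urandom(16) for _ in range(100_000)}
assert len(salts) == 100_000

# Toy 2-byte salts (65,536 possible values): duplicates appear quickly,
# so cracking one hash starts to help with others.
short = [os.urandom(2) for _ in range(2000)]
assert len(set(short)) < 2000
```

With 16-byte salts, the probability of any two of 100,000 users sharing a salt is around 10⁻²⁹, which is why a duplicate in production signals a bug rather than bad luck.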
Another pitfall: not handling character encoding or normalization. If the system isn’t careful, the way it handles special characters or different Unicode representations could cause it to hash different inputs inconsistently. For example, the password “pässword” (with an accented 'a') could be represented in Unicode in multiple forms (pre-composed or decomposed). If the system doesn’t normalize input, a user might not be able to log in if they enter the password on a different device that uses a different encoding form. NIST suggests normalizing to a standard form (like Unicode NFKC) before hashing (pages.nist.gov). Also, including the full range of Unicode means the hashing library must support null bytes and such; some poorly designed routines in the past had issues (like truncating at null characters). Using a robust library avoids these pitfalls, but custom implementations need to be mindful. This isn’t an everyday problem, but in a global application with international passwords, it’s a consideration.
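A minimal sketch of NFKC normalization before hashing follows; plain SHA-256 stands in here for the real KDF, purely to keep the focus on the normalization step.

```python
import hashlib
import unicodedata

def normalized_digest(password):
    # Normalize to NFKC so equivalent Unicode inputs hash identically.
    canonical = unicodedata.normalize("NFKC", password)
    return hashlib.sha256(canonical.encode("utf-8")).digest()

composed   = "p\u00e4ssword"   # 'ä' as a single precomposed code point
decomposed = "pa\u0308ssword"  # 'a' followed by a combining diaeresis

assert composed != decomposed  # different byte sequences as entered
assert normalized_digest(composed) == normalized_digest(decomposed)  # same after NFKC
```

Without the normalization step, the two inputs above would produce different hashes and the user would be mysteriously locked out on one of their devices.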
Failure to update algorithms is a long-term anti-pattern. Some systems stick with whatever was chosen initially (like PBKDF2 with 10k iterations set back in 2010) and never revise it. Over a decade, that iteration count can become woefully insufficient as hardware gets faster. We have seen these transitions before: in the early 2000s, many systems used SHA-1; by the 2010s, they had to move to bcrypt or PBKDF2; now Argon2 is the recommendation. A secure system should not treat the hashing scheme as “fire and forget.” There should be mechanisms to migrate (as discussed in design). An anti-pattern would be hardcoding the algorithm in such a way that changing it breaks all existing passwords (for example, not storing an algorithm identifier or version alongside each hash). Flexibility must be built in to avoid being stuck on an outdated method.
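One way to build in that flexibility is a versioned hash record combined with rehash-on-login. The `pbkdf2_sha256$iterations$salt$hash` record format below is an assumption, loosely modeled on schemes used by frameworks such as Django; the helper names and iteration counts are illustrative.

```python
import hashlib
import hmac
import os

CURRENT_ITERATIONS = 600_000  # illustrative current target

def encode(password, iterations=CURRENT_ITERATIONS):
    # Self-describing record, so parameters can evolve without breaking old hashes.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return "pbkdf2_sha256${}${}${}".format(iterations, salt.hex(), digest.hex())

def verify_and_maybe_upgrade(password, stored):
    _alg, iters, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                    bytes.fromhex(salt_hex), int(iters))
    if not hmac.compare_digest(candidate.hex(), digest_hex):
        return False, None
    if int(iters) < CURRENT_ITERATIONS:
        # Correct password and stale parameters: rehash while we hold the plaintext.
        return True, encode(password)
    return True, None

legacy = encode("s3cret", iterations=10_000)  # e.g., a record created years ago
ok, upgraded = verify_and_maybe_upgrade("s3cret", legacy)
assert ok and upgraded is not None
assert upgraded.startswith("pbkdf2_sha256$600000$")
```

Because rehashing requires the plaintext, migration happens opportunistically at successful login; accounts that never log in can be flagged for a forced reset after a transition window.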
Overlooking performance impact can be an operational pitfall. Setting the cost factor extremely high might seem very secure, but it could lead to timeouts or denial of service if an attacker floods the system with login attempts, each of which consumes significant CPU. The pitfall here is not about weakening security, but about misconfiguring it in production. The remedy is testing and tuning in context – find the sweet spot where hashing is slow enough to deter attackers but not so slow that it hurts the service under expected load. A related anti-pattern is omitting exponential backoff or rate limiting on login attempts; without them, an attacker can abuse even the strongest hash by forcing the system to compute it unlimited times in parallel, causing a denial of service or simply testing many guesses at high speed. Hashing must therefore be paired with online attack mitigations.
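A rough calibration sketch for that “testing and tuning in context” step follows; the 250 ms target and the 100,000-iteration floor are illustrative assumptions to be replaced with your own latency budget and the current recommended minimum.

```python
import hashlib
import os
import time

def calibrate(target_seconds=0.25, floor=100_000):
    """Double the PBKDF2 iteration count until one hash meets the time target."""
    iterations = floor
    salt = os.urandom(16)
    while True:
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"benchmark", salt, iterations)
        if time.perf_counter() - start >= target_seconds:
            return iterations
        iterations *= 2

chosen = calibrate()
assert chosen >= 100_000  # never tune below the recommended floor
```

Calibration should run on production-class hardware, and the result should be revisited periodically (e.g., on hardware upgrades) rather than computed once and forgotten.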
Finally, a human and process pitfall: storing credentials in unsafe places like source code or ticketing systems. For example, a developer might temporarily put a user’s password in a debug log or send it to themselves to troubleshoot an issue, not realizing the danger. Or after a user reports a login problem, support staff might ask them for their password (!) and then test it – now that password is in an email or ticket. These practices violate policies and can undermine even the best storage practices because they leak the plaintext through other channels. The awareness and training aspect is important to combat these anti-patterns; everyone in the team should understand that a password’s plaintext is sensitive and should never be handled outside controlled situations.
In summary, the common pitfalls usually arise from either trying to be clever (custom schemes, reversible encryption) or from negligence (no salting, weak algorithms, stagnant configurations). Avoiding these means adhering to well-established practices and continually validating that the implementation hasn’t drifted into insecure territory.
References and Further Reading
OWASP Password Storage Cheat Sheet: OWASP’s guide on proper password storage practices. This resource covers the fundamentals of hashing vs. encryption, salting, peppering, and recommended algorithms with specific parameter suggestions. It is an excellent starting point for understanding current best practices and is frequently updated as new techniques emerge. (OWASP Cheat Sheet Series)
OWASP ASVS 4.0.3 – Authentication (V2.4: Credential Storage): The Application Security Verification Standard’s requirements for secure credential storage. ASVS mandates salted one-way hashing for passwords and gives normative requirements like minimum salt length (32 bits) and use of approved algorithms with appropriate work factors. It’s a useful checklist to verify an implementation meets a high assurance level. (OWASP ASVS V2.4.1–V2.4.5)
NIST Special Publication 800-63B (Digital Identity Guidelines): NIST’s guidelines on authenticators, including memorized secrets (passwords). Section 5.1.1 of SP 800-63B specifically addresses password storage, recommending salted, one-way key derivation functions and even an optional second “secret salt” (pepper) stored separately. NIST provides a framework for government and industry systems to properly handle passwords and aligns with many OWASP recommendations. (NIST SP 800-63B, Section 5.1.1)
CWE-256: Plaintext Storage of a Password: Common Weakness Enumeration entry detailing the risks of storing passwords in plaintext. It explains how storing a password without cryptographic protection can lead to immediate compromise and gives context and examples of this vulnerability. This is essentially the scenario to avoid at all costs in any credential storage design. (MITRE CWE-256)
CWE-916: Use of Password Hash with Insufficient Computational Effort: CWE entry describing the weakness of using inadequate hashing for passwords. It highlights scenarios where a hash is present but lacks the necessary strength (no salting, too fast, etc.), enabling offline cracking. This reference reinforces why simply hashing is not enough – it must be done with the right algorithms and parameters. (MITRE CWE-916)
Ars Technica – “Why passwords have never been weaker—and crackers have never been stronger” (2012): An in-depth article that, while somewhat dated, provides a vivid illustration of password cracking advancements. It explains how GPU acceleration, large breach datasets, and shared cracking tools have made offline attacks extremely potent, underlining the importance of strong password hashing. (Ars Technica)
Troy Hunt – “Passwords Evolved: Authentication Guidance for the Modern Era” (2017): A comprehensive article by Troy Hunt (creator of Have I Been Pwned) discussing modern password guidance. It covers topics from password storage (advocating hashing and salting) to user password policies, and debunks some myths (like frequent password rotation). It offers a holistic view of password security in contemporary applications. (troyhunt.com)
NIST Password Guidelines Simplified (Infographic by NIST, 2020): A visual summary of key points from NIST SP 800-63 (including the storage rules). It’s a quick reference that emphasizes “hash your passwords and use salt; no more plaintext, no more overly complex composition rules, etc.” Useful for sharing with developers as a high-level checklist. (NIST Infographic)
“Password Hashing Competition and Argon2” – Research Paper (2015): For those interested in the academic side, this paper details the background of Argon2 (the winner of the Password Hashing Competition) and its design goals. It provides insight into why Argon2 was chosen and how it improves upon older algorithms. While not necessary for implementing, it’s a good read for deepening one’s understanding of memory-hard functions. (Argon2 paper via University of Illinois)
OWASP Cryptographic Storage Cheat Sheet: Beyond just passwords, this cheat sheet gives broader guidance on storing sensitive data. It reiterates that passwords should be hashed, not encrypted, and provides best practices for cryptographic storage of various data types. It complements the Password Storage cheat sheet with more general principles of secure storage. (OWASP Crypto Storage Cheat Sheet)
This content is authored with assistance from OpenAI's advanced reasoning models (classified as AI-assisted content). Material is reviewed, validated, and refined by our team, but some issues may be missed and best practices evolve rapidly. Please use your best judgment when reviewing this material. We welcome corrections and improvements.
Send corrections to [email protected].
We cite sources directly where possible. Some elements may be derived from content linked to the OWASP Foundation, so this work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. You are free to share and adapt this material for any purpose, even commercially, under the terms of the license. When doing so, please reference the OWASP Foundation where relevant. JustAppSec Limited is not associated with the OWASP Foundation in any way.
