Finding vulnerabilities in other people's software comes with responsibility. Responsible disclosure means reporting vulnerabilities to the organisation that can fix them, giving them time to act before public disclosure. Bug bounty programmes formalise this process with defined scope, rules, and often payment. This lesson covers how to find bugs ethically, write reports that get taken seriously, and work effectively with security teams.
Responsible disclosure basics
The disclosure spectrum
| Approach | Description | When to use |
|---|---|---|
| Private disclosure | Report directly to the vendor, wait for fix | Default approach for any vulnerability |
| Coordinated disclosure | Report to vendor with a disclosure deadline (e.g., 90 days) | When vendor is slow to respond or fix |
| Full disclosure | Publish the vulnerability publicly | Last resort — vendor unresponsive after extended period |
| Bug bounty | Report through a structured programme with defined scope and rewards | When the organisation has a bounty programme |
Always start with private or coordinated disclosure. Full disclosure should be a last resort after genuine, documented attempts to reach the vendor.
Finding the right contact
| Method | Where to look |
|---|---|
| /.well-known/security.txt | Standardised file on the website (RFC 9116) |
| security@ email alias | Common convention (e.g., security@example.com) |
| Bug bounty platform profile | HackerOne, Bugcrowd, Intigriti |
| WHOIS / abuse contact | For infrastructure issues |
| CERT/CC coordination | When the vendor is completely unresponsive |
If the organisation has a security.txt file (RFC 9116), use the contact listed there. It specifies preferred language, encryption key, and reporting procedures.
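As a sketch, locating and parsing a security.txt file can be automated. The URL path and `Field: value` syntax below follow RFC 9116; the function names and error handling are illustrative, not part of any standard library for this:

```python
# Sketch: fetch and parse an RFC 9116 security.txt file.
from urllib.request import urlopen

def parse_security_txt(body: str) -> dict:
    """Parse 'Field: value' lines; a field may repeat (e.g., Contact)."""
    fields: dict = {}
    for line in body.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        key, sep, value = line.partition(":")
        if sep:
            fields.setdefault(key.strip(), []).append(value.strip())
    return fields

def fetch_security_txt(domain: str) -> dict:
    """Fetch https://<domain>/.well-known/security.txt and parse it."""
    url = f"https://{domain}/.well-known/security.txt"
    with urlopen(url, timeout=10) as resp:
        return parse_security_txt(resp.read().decode("utf-8", "replace"))
```

A repeated field (such as multiple Contact lines) is kept as a list, which matches how the RFC allows several contact methods.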
What responsible disclosure looks like
- Discover the vulnerability (within legal scope — your own software, or an authorised programme)
- Document the issue with reproduction steps
- Report through the organisation's preferred channel
- Wait for acknowledgement and a fix timeline
- Follow up if you have not heard back in 7 days
- Agree on a disclosure timeline (90 days is the de facto standard, popularised by Google Project Zero)
- Do not exploit the vulnerability beyond what is needed to demonstrate it
- Do not access, modify, or exfiltrate real user data
- Do not disclose publicly before the agreed timeline
Bug bounty programmes
How they work
- Organisation defines scope (which assets, in/out of scope, rules of engagement)
- Researcher finds a vulnerability within scope
- Researcher submits a report through the platform
- Triage team validates and assesses severity
- Organisation confirms and fixes
- Researcher receives reward (if applicable)
Major platforms
| Platform | Model |
|---|---|
| HackerOne | Largest platform, mix of paid bounties and VDPs |
| Bugcrowd | Similar to HackerOne, curated programmes |
| Intigriti | European-focused, growing rapidly |
| GitHub Security Advisories | For open-source projects |
| Direct programmes | Google, Apple, Microsoft run their own |
Understanding scope
Always read the programme scope before testing. Common scope definitions:
In scope:
- *.example.com
- api.example.com
- Mobile apps (iOS, Android)
Out of scope:
- Third-party services (e.g., Zendesk, Intercom widgets)
- Physical attacks
- Social engineering of employees
- Denial of service attacks
- Automated scanning (unless permitted)
Testing out-of-scope assets can result in a programme ban, legal action, or both.
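Before firing any request, it is worth checking the target hostname against the scope list programmatically. A conservative sketch, assuming wildcard patterns like those above (the exclusion entry is hypothetical, and explicit exclusions win over inclusions):

```python
# Sketch: check whether a hostname falls inside programme scope.
from fnmatch import fnmatch

IN_SCOPE = ["*.example.com", "api.example.com"]
OUT_OF_SCOPE = ["thirdparty.example.com"]  # hypothetical exclusion

def in_scope(host: str) -> bool:
    host = host.lower().rstrip(".")
    if any(fnmatch(host, pat) for pat in OUT_OF_SCOPE):
        return False  # explicit exclusions always win
    return any(fnmatch(host, pat) for pat in IN_SCOPE)
```

Note that `*.example.com` does not match the bare apex `example.com`; programmes usually state separately whether the apex is in scope.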
What bounties pay
Bounty amounts vary widely by severity and programme:
| Severity | Typical range | Example |
|---|---|---|
| Critical (RCE, auth bypass, data breach) | $5,000 – $100,000+ | Google: up to $31,337; Apple: up to $1M |
| High (SQLi, SSRF, privilege escalation) | $1,000 – $15,000 | |
| Medium (stored XSS, IDOR) | $500 – $5,000 | |
| Low (reflected XSS, information disclosure) | $100 – $1,000 | |
Not all programmes pay. Many are vulnerability disclosure programmes (VDPs) that offer recognition but no payment.
Writing effective reports
A clear report gets fixed faster and rewarded more. A vague report wastes everyone's time.
Report structure
## Title
Descriptive title that identifies the vulnerability type and affected component.
Example: "IDOR in /api/orders allows any authenticated user to access other users' orders"
## Summary
One paragraph explaining the vulnerability, its impact, and the affected endpoint.
## Severity
Your assessment: Critical / High / Medium / Low
CVSS score if you can calculate it.
## Steps to Reproduce
1. Create two accounts: victim@example.com and attacker@example.com
2. Log in as victim and create an order → note the order ID (e.g., 5001)
3. Log in as attacker
4. Send: GET /api/orders/5001 with attacker's auth token
5. Observe: attacker receives victim's full order details
## Proof of Concept
Include actual HTTP requests and responses (redact any real user data):
Request:
GET /api/orders/5001 HTTP/1.1
Host: api.example.com
Authorization: Bearer eyJ...attacker_token
Response:
HTTP/1.1 200 OK
{
"orderId": 5001,
"userId": 102, ← this is victim's user ID, not attacker's
"items": [...],
"shippingAddress": "..."
}
## Impact
- Any authenticated user can access any order in the system
- Exposes PII: name, address, email, purchase history
- Estimated affected records: all orders in the database
## Suggested Fix
Add an ownership check:
SELECT * FROM orders WHERE id = $1 AND user_id = $2
## Environment
- Tested on: api.example.com
- Date: 2025-03-15
- Tools: curl, Burp Suite Community
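The ownership check from the suggested fix can be sketched at the application layer. This uses sqlite3 as a stand-in database, and the handler name and schema are illustrative; the point is that scoping the query to `user_id` closes the IDOR, because a row only comes back when the order belongs to the requesting user:

```python
# Sketch: IDOR fix — scope the lookup to the authenticated user.
import sqlite3

def get_order(conn: sqlite3.Connection, order_id: int, user_id: int):
    row = conn.execute(
        "SELECT id, user_id, items FROM orders WHERE id = ? AND user_id = ?",
        (order_id, user_id),  # parameterised: no string interpolation
    ).fetchone()
    if row is None:
        # Return 404-style "not found" rather than 403, so the API does not
        # confirm that the order exists under another account.
        return None
    return {"orderId": row[0], "userId": row[1], "items": row[2]}
```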
Common report mistakes
| Mistake | Why it is a problem |
|---|---|
| "I found XSS" with no reproduction steps | Cannot validate, will be closed |
| Submitting raw scanner output without verification | Often false positives, wastes triage time |
| Reporting issues on out-of-scope assets | Violates programme rules |
| Excessive drama ("CRITICAL BREACH!!!!") | Undermines credibility |
| Testing with real user data | May be illegal, definitely unethical |
| Vague impact ("could be bad") | Makes it hard to prioritise |
After submitting
- Wait patiently. Most programmes acknowledge within 1-5 business days.
- Respond to questions. The triage team may ask for clarification.
- Do not retest aggressively. Once reported, do not continue exploiting the vulnerability.
- Do not disclose publicly. Wait for the agreed disclosure timeline.
- Follow up politely if you have not heard back in 2 weeks.
Setting up a disclosure programme
If you are on the receiving end — running an application that others might find bugs in:
Minimum viable disclosure programme
- Create /.well-known/security.txt:
Contact: mailto:security@yourcompany.com
Preferred-Languages: en
Expires: 2027-01-01T00:00:00.000Z
Policy: https://yourcompany.com/responsible-disclosure
- Write a disclosure policy page with:
- How to report (email, platform)
- What is in scope
- Safe harbour statement (you will not pursue legal action against good-faith researchers)
- Expected response time
- Whether you offer rewards
- Set up a secure reporting channel:
- Dedicated email address
- PGP key for encrypted reports
- Or use a platform like HackerOne or Bugcrowd
Handling incoming reports
| Step | SLA |
|---|---|
| Acknowledge receipt | Within 1 business day |
| Initial triage | Within 3 business days |
| Status update to researcher | Weekly until resolved |
| Fix deployment | Based on severity SLA |
| Notify researcher of fix | Same day as deployment |
| Public disclosure (if agreed) | 90 days from report or after fix |
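The SLA table above can be turned into concrete due dates when a report arrives. A minimal sketch, simplifying business days to calendar days (the dictionary keys and function name are illustrative):

```python
# Sketch: compute per-step due dates from the report's arrival time.
from datetime import datetime, timedelta

SLA = {
    "acknowledge": timedelta(days=1),        # acknowledge receipt
    "triage": timedelta(days=3),             # initial triage
    "public_disclosure": timedelta(days=90), # coordinated disclosure window
}

def due_dates(received: datetime) -> dict:
    return {step: received + delta for step, delta in SLA.items()}
```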
Safe harbour
Include a safe harbour statement in your policy:
We will not pursue legal action against individuals who discover and report security vulnerabilities in good faith, within the scope of this policy, and in compliance with our guidelines.
Without safe harbour, researchers may not report findings for fear of legal action.
Legal considerations
- Computer Fraud and Abuse Act (CFAA) in the US and equivalent laws in other jurisdictions criminalise unauthorised access to computer systems
- Bug bounty programme scope defines what is "authorised"
- Testing outside scope, accessing real user data, or causing damage can result in criminal prosecution
- Always screenshot the scope and rules before testing — programmes can change their terms
Summary
Responsible disclosure protects users while giving organisations time to fix vulnerabilities. Always report through the proper channel — security.txt, bug bounty platform, or direct security contact. Write clear reports with reproduction steps, proof of concept, impact assessment, and a suggested fix. Respect programme scope, do not access real user data, and wait for the agreed disclosure timeline. If you run an application, create a security.txt, write a disclosure policy, and include a safe harbour statement.
