

LintLiot’s pentest engine performs Dynamic Application Security Testing (DAST) against your running application. Unlike static code analysis, DAST sends real HTTP requests with real attack payloads to discover vulnerabilities in your deployed app — including misconfigurations that only appear at runtime. The engine covers the OWASP Top 10 (2021) and maps every finding to a CVSS v3.1 score so you know exactly how serious each issue is.

What gets tested

The pentest engine runs 17 attack modules covering all 10 OWASP categories:
  • IDOR / BOLA — tests whether object IDs can be enumerated to access other users’ data
  • Authentication bypass — tests whether protected endpoints return data without auth credentials
  • Path traversal / LFI / RFI — tests for directory traversal and file inclusion
  • Open redirect — tests for unvalidated redirect and forward parameters
  • CSRF — tests for missing or weak CSRF token enforcement
  • SQL injection — 45 payloads covering UNION SELECT, OR-based bypass, stacked queries, time-based blind, and error-based techniques
  • XSS (reflected, stored, DOM) — 38 payloads including script tags, event handlers, javascript: URIs, SVG injection
  • NoSQL injection — 22 payloads targeting MongoDB $where, $gt, $regex, and operator injection
  • Command injection — 20 payloads using shell metacharacters, backticks, and process substitution
  • XXE — tests for XML external entity injection in XML-processing endpoints
  • TLS/SSL configuration — checks for HTTP-only targets, weak cipher suites, and certificate validity
  • Security headers audit — checks for missing Content-Security-Policy, Strict-Transport-Security, X-Frame-Options, and X-Content-Type-Options
  • CORS misconfiguration — tests whether arbitrary origins are reflected or whether wildcard CORS is combined with credentials: true
  • Information disclosure — probes for exposed .env, .git/config, debug endpoints, stack traces, and version disclosure
  • SSRF — 25 payloads probing localhost, cloud metadata endpoints (169.254.169.254), RFC-1918 ranges, and file:// URIs
  • Rate limiting on login — verifies that authentication endpoints return 429 after repeated failed attempts
  • Session fixation, JWT manipulation — tests for authentication bypass via token manipulation
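To make one of these modules concrete, here is a minimal sketch of what the security headers audit checks — a simplified illustration, not LintLiot's actual implementation. It reports whichever of the four audited headers are absent from a response:

```typescript
// Sketch of a security-headers audit check (illustrative only).
// Flags any of the four audited headers that are missing.
const REQUIRED_SECURITY_HEADERS = [
  'content-security-policy',
  'strict-transport-security',
  'x-frame-options',
  'x-content-type-options',
];

function missingSecurityHeaders(headers: Record<string, string>): string[] {
  // Header names are case-insensitive, so normalize before comparing.
  const present = new Set(Object.keys(headers).map((h) => h.toLowerCase()));
  return REQUIRED_SECURITY_HEADERS.filter((h) => !present.has(h));
}
```

A response carrying only a Content-Security-Policy header would yield three findings, one per missing header.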

Route discovery

The pentest engine needs to know which endpoints to test. LintLiot discovers your routes in three ways.

Via the SDK

Call registerRoutes() from your application startup to register all known routes with the LintLiot API:
import { createLintliot } from '@lintliot/sdk'

const lintliot = createLintliot({ apiKey: process.env.LINTLIOT_API_KEY })

// Register routes automatically from Express
lintliot.registerExpressRoutes(app)

// Or register routes manually
lintliot.registerRoutes([
  { path: '/api/users', method: 'GET' },
  { path: '/api/users/:id', method: 'GET' },
  { path: '/api/users/:id', method: 'PUT' },
  { path: '/api/posts', method: 'GET' },
  { path: '/api/posts', method: 'POST' },
])

Via OpenAPI spec

If you have an OpenAPI (Swagger) specification, upload it from the dashboard under Pentest → Routes or send it to the API endpoint:
curl -X POST https://api.lintliot.com/api/routes/YOUR_APP_ID/openapi \
  -H "X-API-Key: $LINTLIOT_API_KEY" \
  -H "Content-Type: application/json" \
  --data-binary @openapi.json
LintLiot parses the spec and registers all paths and methods automatically.
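"Registers all paths and methods" amounts to flattening the spec's paths object into the same { path, method } shape that registerRoutes() accepts. A rough sketch of that transformation, assuming a minimal OpenAPI 3.x document (the real parser handles far more of the spec):

```typescript
// Sketch: flatten an OpenAPI 3.x `paths` object into route entries.
// Illustrative only; not LintLiot's actual parser.
interface Route { path: string; method: string }

const HTTP_METHODS = ['get', 'post', 'put', 'patch', 'delete', 'head', 'options'];

function routesFromOpenApi(spec: { paths: Record<string, Record<string, unknown>> }): Route[] {
  const routes: Route[] = [];
  for (const [path, operations] of Object.entries(spec.paths)) {
    for (const method of Object.keys(operations)) {
      // Skip non-operation keys such as `parameters` or `summary`.
      if (HTTP_METHODS.includes(method)) {
        routes.push({ path, method: method.toUpperCase() });
      }
    }
  }
  return routes;
}
```

For example, a spec with GET and POST on /api/users and GET on /api/users/{id} produces three route entries.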

Automatic fallback

If no routes have been registered, the pentest engine falls back to a set of common endpoint patterns (/api/search, /api/login, /api/users, /api/data, and others). For meaningful results, register your actual routes.

Starting a pentest scan

1. Navigate to the Pentest page

Open your app in the LintLiot dashboard and click Pentest in the sidebar.

2. Create a new scan

Click New Scan. Give it a name, enter your target URL (the base URL of your deployed application), and choose which attack modules to run. Running all modules is the recommended starting point.

3. Add authentication headers (if needed)

If your endpoints require authentication, add your test account’s Authorization header under Scan Configuration → Auth Headers. The engine uses these headers for all requests.

4. Start the scan

Click Run Scan. The engine starts immediately and updates the dashboard with progress as each attack module completes. A full scan typically takes 2–10 minutes depending on the number of registered routes and modules selected.

5. Review findings

When the scan completes, findings appear in the Findings tab sorted by severity. Each finding includes a proof-of-concept request and response, CVSS score, OWASP category, and remediation guidance.
Only run pentest scans against your own applications in staging or production environments where you have explicit authorization. Do not use LintLiot to scan third-party applications you don’t own.

Finding severity and CVSS scores

Every finding is assigned a CVSS v3.1 base score and a severity level:
Severity   CVSS range   Example findings
Critical   9.0–10.0     SQL injection, OS command injection, auth bypass
High       7.0–8.9      SSRF, stored XSS, IDOR, path traversal, JWT manipulation
Medium     4.0–6.9      Reflected XSS, CSRF, CORS misconfiguration, missing rate limiting
Low        0.1–3.9      Missing security headers, server version disclosure, expired TLS certificate
Example CVSS scores by vulnerability type:
sql_injection          9.8  (Critical)
os_command_injection   9.8  (Critical)
auth_bypass            9.1  (Critical)
ssrf                   8.6  (High)
xss_stored             8.2  (High)
idor                   7.5  (High)
path_traversal         7.5  (High)
csrf                   6.5  (Medium)
xss_reflected          6.1  (Medium)
missing_rate_limit     5.3  (Medium)
missing_hsts           5.3  (Medium)
missing_csp            4.0  (Medium)
server_info_leak       3.7  (Low)
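The severity bands in the table above map a numeric CVSS base score to a label. As a sketch (our illustration of the mapping, not LintLiot's code):

```typescript
// Sketch: map a CVSS v3.1 base score to the severity bands used above.
type Severity = 'Critical' | 'High' | 'Medium' | 'Low' | 'None';

function severityFromCvss(score: number): Severity {
  if (score >= 9.0) return 'Critical';
  if (score >= 7.0) return 'High';
  if (score >= 4.0) return 'Medium';
  if (score > 0) return 'Low';
  return 'None';
}
```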

Risk score

Each completed scan receives an overall risk score from 0 to 100. The score accumulates severity-weighted points from all findings:
Finding severity   Points contributed
Critical           40
High               25
Medium             10
Low                3
A clean scan with no findings scores 0. A scan with one critical finding scores 40. The score is capped at 100. A risk score above 40 (one critical finding) should be treated as a blocker for any enterprise sales process or compliance audit.
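The accumulation rule described above can be sketched in a few lines — an illustration of the scoring, not LintLiot's actual code:

```typescript
// Sketch: sum severity-weighted points across findings, capped at 100.
const SEVERITY_POINTS: Record<string, number> = {
  critical: 40,
  high: 25,
  medium: 10,
  low: 3,
};

function riskScore(findings: { severity: string }[]): number {
  const total = findings.reduce(
    (sum, f) => sum + (SEVERITY_POINTS[f.severity.toLowerCase()] ?? 0),
    0,
  );
  return Math.min(total, 100);
}
```

Two critical findings plus one high finding would total 105 points and be capped at 100.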

Proof-of-concept evidence

Each finding captures the exact HTTP request and response that demonstrated the vulnerability:
Finding: SQL Injection at /api/search
Severity: Critical (CVSS 9.8)
OWASP: A03: Injection

Proof of concept request:
GET /api/search?q=' OR '1'='1 HTTP/1.1
Host: yourapp.com
User-Agent: Lintliot-Pentest/1.0

Proof of concept response:
HTTP/1.1 200 OK
{"users": [{"id": 1, "email": "admin@yourapp.com"}, ...]}

Evidence: SQL syntax error in response or full data dump returned
Remediation: Use parameterized queries for all database operations.
  Never concatenate user input into SQL strings. Apply input
  validation and use an ORM where possible.

Remediation guidance

Every finding includes a specific, actionable remediation. Examples:
  • SQL injection — Use parameterized queries (prepared statements) for all database operations. Never concatenate user input into SQL strings. Apply input validation and use an ORM where possible.
  • SSRF — Validate and allowlist URLs before fetching. Block requests to internal IP ranges (127.0.0.0/8, 10.0.0.0/8, 169.254.0.0/16, 172.16.0.0/12, 192.168.0.0/16). Use a URL parser to normalize before checking.
  • IDOR — Implement authorization checks on every data access. Use indirect references (UUIDs) instead of sequential IDs. Verify that the requesting user owns the resource before returning it.
  • Missing rate limiting — Implement rate limiting on authentication endpoints (max 5–10 attempts per minute), API endpoints (based on plan), and sensitive operations. Use lintliot.rateLimit() to add rate limiting in one line.
  • CORS misconfiguration — Restrict Access-Control-Allow-Origin to specific trusted origins. Never combine wildcard (*) with Access-Control-Allow-Credentials: true. Validate the Origin header server-side.
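The SSRF remediation above — normalize the URL, then check the host against internal ranges — can be sketched as follows. This is a deliberately simplified illustration: a real defense must also resolve hostnames before checking, and handle IPv6, redirects, and encoding tricks.

```typescript
// Sketch: reject URLs whose host is a literal IPv4 address inside a
// loopback, link-local, or RFC 1918 range. Illustrative only.
const BLOCKED_RANGES: [string, number][] = [
  ['127.0.0.0', 8],    // loopback
  ['10.0.0.0', 8],     // RFC 1918
  ['169.254.0.0', 16], // link-local / cloud metadata
  ['172.16.0.0', 12],  // RFC 1918
  ['192.168.0.0', 16], // RFC 1918
];

function ipv4ToInt(ip: string): number | null {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some((p) => !Number.isInteger(p) || p < 0 || p > 255)) {
    return null;
  }
  return ((parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3]) >>> 0;
}

function isBlockedTarget(url: string): boolean {
  // Use the URL parser to normalize before checking, as recommended above.
  const host = new URL(url).hostname;
  const addr = ipv4ToInt(host);
  if (addr === null) return false; // hostname: would need DNS resolution first
  return BLOCKED_RANGES.some(([base, bits]) => {
    const mask = (~0 << (32 - bits)) >>> 0;
    return ((addr & mask) >>> 0) === ((ipv4ToInt(base)! & mask) >>> 0);
  });
}
```

With this check, a fetch to http://169.254.169.254/ (the cloud metadata endpoint) is rejected while public addresses pass through.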

OWASP coverage tracking

After each scan, the dashboard shows an OWASP coverage map indicating which of the 10 categories were tested and whether findings were present in each. A fully green OWASP map means all 10 categories were tested with no findings.
Schedule regular pentest scans after each deployment. Vulnerabilities introduced by a new feature are easiest to fix when found immediately — not weeks later when you’re preparing for a compliance audit.