

Most rate limiters pick an arbitrary number — “100 requests per minute” — and apply it to every app equally. LintLiot does something different: it watches your app’s traffic for 7 days, builds a statistical baseline, and then sets limits relative to what’s actually normal for your specific app. An endpoint that legitimately handles 500 requests per minute won’t be throttled. An endpoint that normally gets 2 requests per minute will trigger an alert at 50.

How baseline-relative limits work

After the 7-day learning phase completes, LintLiot calculates a per-IP threshold using this formula:
threshold_per_ip = max(hardcoded_floor, baseline_mean + (3 × baseline_stddev))
The 3 × stddev factor corresponds to the 3-sigma rule: for approximately normal traffic, about 99.7% of legitimate requests fall within 3 standard deviations of the mean, so traffic beyond that threshold is almost certainly anomalous. During learning mode, the SDK uses conservative static defaults. The moment you enable enforcement from the dashboard on Day 8, thresholds switch to your app’s learned baselines. Baselines continue to adapt over time using an exponential moving average (α = 0.1), so your limits drift upward naturally as your app grows.
Your app is protected by standard static rate limits during the 7-day learning phase. Learning mode only affects which thresholds are used — blocking is always on.
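The threshold formula and EMA adaptation above can be sketched as follows. This is an illustrative sketch only — the names (`Baseline`, `computeThreshold`, `updateBaseline`) and the floor value are assumptions, not the SDK's internals:

```typescript
// Illustrative sketch of baseline-relative thresholds.
// Not the LintLiot SDK's actual implementation.
interface Baseline {
  mean: number    // learned mean requests/min per IP
  stddev: number  // learned standard deviation
}

const HARDCODED_FLOOR = 100  // assumed conservative static default
const ALPHA = 0.1            // EMA smoothing factor from the docs

// threshold_per_ip = max(hardcoded_floor, mean + 3 * stddev)
function computeThreshold(b: Baseline): number {
  return Math.max(HARDCODED_FLOOR, b.mean + 3 * b.stddev)
}

// Exponential moving average: baselines drift toward recent traffic.
function updateBaseline(b: Baseline, observedRate: number): Baseline {
  const mean = (1 - ALPHA) * b.mean + ALPHA * observedRate
  // Track variance the same way, then take the square root.
  const variance =
    (1 - ALPHA) * b.stddev ** 2 + ALPHA * (observedRate - mean) ** 2
  return { mean, stddev: Math.sqrt(variance) }
}
```

Note how the floor dominates for low-traffic endpoints: an endpoint averaging 2 req/min with stddev 1 gets the floor, not a threshold of 5, so quiet endpoints are not over-throttled by tiny baselines.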

Rate limit algorithms

LintLiot supports four rate limiting strategies. The default is sliding-window-counter, which offers near-exact accuracy with minimal memory overhead.

sliding-window-counter
Blends the current and previous fixed windows in proportion to how far the current window has elapsed. This eliminates the burst edge case of fixed-window counters, where a client can send 2× the limit in a short burst at a window boundary.

sliding-window-log
Stores an exact timestamp for every request in the current window. Most accurate — a client can never exceed the limit by any amount — but uses more memory. Best for endpoints where precise enforcement matters more than efficiency.

fixed-window
Counts requests in a fixed time bucket. Simplest and cheapest, but allows brief bursts of up to 2× the limit at window boundaries. Suitable for high-throughput endpoints where approximate limiting is acceptable.

token-bucket
Clients accumulate tokens at a steady rate and consume one token per request. Naturally tolerates bursts up to the bucket size while enforcing an average rate over time. Good for APIs that should allow short bursts from legitimate users.
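As a concrete illustration of the last strategy, here is a minimal in-memory token bucket. The class and its parameter names are illustrative, not the SDK's implementation:

```typescript
// Minimal token bucket: refills at `ratePerSec`, holds at most `capacity`.
class TokenBucket {
  private tokens: number
  private lastRefill: number

  constructor(
    private capacity: number,    // burst size
    private ratePerSec: number,  // sustained average rate
    now: number = Date.now(),
  ) {
    this.tokens = capacity
    this.lastRefill = now
  }

  // Returns true if the request is allowed, false if rate-limited.
  tryConsume(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.ratePerSec,
    )
    this.lastRefill = now
    if (this.tokens >= 1) {
      this.tokens -= 1
      return true
    }
    return false
  }
}
```

A burst of up to `capacity` requests succeeds immediately; after the bucket drains, requests are admitted at roughly `ratePerSec` on average.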

Configuring rate limits

Global rate limit via protect()

Pass rate limit options to lintliot.protect() to apply a limit globally across all routes:
import { createLintliot } from '@lintliot/sdk'

const lintliot = createLintliot({ apiKey: process.env.LINTLIOT_API_KEY })

app.use(lintliot.protect({
  rateLimit: {
    windowMs: 60_000,   // 1 minute window
    max: 100,           // max requests per IP per window
    strategy: 'sliding-window-counter',
  },
}))

Per-route rate limits

Apply different limits to specific routes by calling lintliot.rateLimit() directly as middleware:
// Strict limit on authentication endpoints
app.post('/api/login',
  lintliot.rateLimit({ windowMs: 60_000, max: 10 }),
  loginHandler,
)

// Looser limit on public search
app.get('/api/search',
  lintliot.rateLimit({ windowMs: 60_000, max: 300, strategy: 'token-bucket' }),
  searchHandler,
)

// Per-user limit on data export
app.get('/api/export',
  lintliot.rateLimit({
    windowMs: 3_600_000,  // 1 hour
    max: 10,
    perUser: true,        // limit per authenticated user, not per IP
  }),
  exportHandler,
)

All RateLimitOptions fields

interface RateLimitOptions {
  /** Time window in milliseconds (default: 60000) */
  windowMs?: number

  /** Max requests per window (default: 100) */
  max?: number

  /** Apply limit per authenticated user instead of per IP (default: false) */
  perUser?: boolean

  /** Rate limiting algorithm (default: 'sliding-window-counter') */
  strategy?: 'fixed-window' | 'sliding-window-log' | 'sliding-window-counter' | 'token-bucket'

  /** Skip counting successful responses (default: false) */
  skipSuccessful?: boolean

  /**
   * DDoS spike threshold — auto-block an IP if its requests-per-second
   * exceeds this value in a 1-second window (default: 50)
   */
  ddosThreshold?: number

  /** Custom key generator for advanced grouping */
  keyGenerator?: (req: IncomingRequest) => string
}
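For instance, keyGenerator can group traffic by something other than IP or user, such as an API key header. A minimal sketch — the IncomingRequest shape and the 'x-api-key' header name here are assumptions, not part of the documented API:

```typescript
// Hypothetical request shape for illustration only.
type IncomingRequest = {
  headers: Record<string, string | undefined>
  ip: string
}

// Group rate-limit counters by API key; fall back to IP when absent.
function apiKeyOrIp(req: IncomingRequest): string {
  return req.headers['x-api-key'] ?? req.ip
}
```

Pass a function like this as `keyGenerator` so that all clients sharing one API key share one counter, regardless of which IPs they call from.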

DDoS spike detection

In addition to the window-based limiter, LintLiot tracks requests per second per IP in a 1-second window. If any IP exceeds the ddosThreshold (default: 50 req/s), that IP is automatically blocked for 60 seconds and a critical event is logged. This operates independently of the main rate limit window — it catches sudden bursts that a per-minute limit might not catch quickly enough.
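The spike check described above amounts to a per-IP counter over the current 1-second bucket. A sketch of that logic, using the documented defaults (illustrative only; the SDK's internals may differ):

```typescript
// Defaults from RateLimitOptions: 50 req/s threshold, 60-second block.
const DDOS_THRESHOLD = 50
const BLOCK_MS = 60_000

interface SpikeState { second: number; count: number; blockedUntil: number }
const states = new Map<string, SpikeState>()

// Returns true if the request should be blocked.
function checkSpike(ip: string, nowMs: number): boolean {
  const second = Math.floor(nowMs / 1000)
  const s = states.get(ip) ?? { second, count: 0, blockedUntil: 0 }
  if (nowMs < s.blockedUntil) return true   // still serving a block
  if (s.second !== second) {                // new 1-second bucket
    s.second = second
    s.count = 0
  }
  s.count += 1
  if (s.count > DDOS_THRESHOLD) {
    s.blockedUntil = nowMs + BLOCK_MS       // critical event logged here
  }
  states.set(ip, s)
  return nowMs < s.blockedUntil
}
```

Because the bucket is one second wide, a 51-request burst trips the block within that same second, long before a per-minute window would fill.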

Progressive penalties

Repeated violations result in escalating block durations:
Violation count     Block duration
1st                 1× the rate limit window
2nd                 2× the rate limit window
3rd                 4× the rate limit window
4th and beyond      8× the rate limit window
A client that hits the limit once is blocked briefly. A client that repeatedly hammers the limit is blocked for increasingly long periods automatically.
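The schedule above doubles the block on each violation and caps at 8×, so it reduces to a one-line formula (the function name is illustrative):

```typescript
// Block duration for the nth violation: 1x, 2x, 4x, then capped at 8x
// of the rate limit window.
function blockDurationMs(violationCount: number, windowMs: number): number {
  const multiplier = Math.min(2 ** (violationCount - 1), 8)
  return multiplier * windowMs
}
```

With the default 60-second window, a first offense blocks for 1 minute and a fourth (or later) offense blocks for 8 minutes.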

What happens when a rate limit is hit

When a request exceeds the rate limit, LintLiot returns a 403 Forbidden response immediately — before your application code runs. The response includes:
HTTP/1.1 403 Forbidden
Content-Type: application/json

{
  "error": "Rate limit exceeded",
  "retryAfter": 42
}
The retryAfter field is the number of seconds until the current window resets. The request is logged as a rate_limit event in your dashboard with the IP address, violation count, and window details.
For public APIs where you want clients to handle rate limits gracefully, consider returning 429 Too Many Requests with a Retry-After header. You can customize the response format by wrapping lintliot.protect() in your own middleware and checking the ShieldResult.action === 'rate_limit' value.
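One way to sketch that customization is a small mapper from the shield result to a 429-style payload. The ShieldResult shape here ({ action, retryAfter }) is an assumption pieced together from the fields shown on this page, not a documented type:

```typescript
// Assumed shape, based on this page's examples — not a documented type.
interface ShieldResultLike { action: string; retryAfter: number }

// Translate a rate-limit result into a 429 Too Many Requests payload,
// or null when the request was not rate-limited.
function toTooManyRequests(result: ShieldResultLike) {
  if (result.action !== 'rate_limit') return null
  return {
    status: 429,
    headers: { 'Retry-After': String(result.retryAfter) },
    body: { error: 'Too Many Requests', retryAfter: result.retryAfter },
  }
}
```

Your wrapping middleware would call a function like this whenever it detects a rate-limit action and send the returned status, headers, and body instead of the default 403.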

Redis-backed rate limiting

For applications running across multiple instances (serverless functions, horizontal scale-out), in-process counters won’t share state between instances. LintLiot automatically uses Redis for cross-instance coordination when you provide Redis credentials:
# .env
LINTLIOT_REDIS_URL=https://your-upstash-instance.upstash.io
LINTLIOT_REDIS_TOKEN=your-token
When Redis credentials are present, LintLiot switches to a Redis-backed fixed-window counter. If Redis is unreachable, it falls back to the in-process counter automatically — rate limiting never causes your app to fail.