Rate Limiting & Login Throttling

Rate limiting and login throttling are essential security features that protect the UBU Finance backend from abuse and brute force attacks.

Overview

Rate limiting restricts the number of requests a client can make within a specific time window, preventing abuse and denial-of-service attacks. Login throttling specifically limits the number of login attempts to prevent brute force attacks.

How It Works

Rate Limiting

The rate limiter tracks requests by IP address and endpoint, allowing a configurable number of requests within a time window. When a client exceeds the limit, subsequent requests are rejected with a 429 (Too Many Requests) status code until the time window expires.
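As a sketch of that fixed-window approach, the following illustrative limiter counts requests per (IP, endpoint) pair. The class and method names here are hypothetical, not the actual backend API:

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Illustrative fixed-window limiter keyed by (ip, endpoint)."""

    def __init__(self, max_requests=100, window_seconds=60):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        # key -> (count, window_start)
        self.counters = defaultdict(lambda: (0, 0.0))

    def allow(self, ip, endpoint, now=None):
        now = time.monotonic() if now is None else now
        key = (ip, endpoint)
        count, window_start = self.counters[key]
        if now - window_start >= self.window_seconds:
            # Window expired: start a fresh window for this key.
            count, window_start = 0, now
        if count >= self.max_requests:
            return False  # caller should respond with HTTP 429
        self.counters[key] = (count + 1, window_start)
        return True
```

When `allow` returns False, the request handler rejects the call with a 429 until the window rolls over.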

Login Throttling

Login throttling is a specialized form of rate limiting that specifically targets login attempts. It tracks failed login attempts by username and IP address, allowing a configurable number of attempts within a time window. When a client exceeds the limit, subsequent login attempts are rejected until the time window expires.
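The same idea, specialized to failed logins, might look like this (again, names are illustrative): failures are tracked per (username, IP) pair, and once the limit is reached further attempts are blocked for a configurable duration.

```python
import time
from collections import defaultdict

class LoginThrottler:
    """Illustrative throttle on failed logins, keyed by (username, ip)."""

    def __init__(self, max_attempts=5, window_seconds=300, block_seconds=900):
        self.max_attempts = max_attempts
        self.window_seconds = window_seconds
        self.block_seconds = block_seconds
        self.failures = defaultdict(list)  # key -> timestamps of failed attempts
        self.blocked_until = {}            # key -> monotonic time the block ends

    def is_blocked(self, username, ip, now=None):
        now = time.monotonic() if now is None else now
        return self.blocked_until.get((username, ip), 0.0) > now

    def record_failure(self, username, ip, now=None):
        now = time.monotonic() if now is None else now
        key = (username, ip)
        # Keep only failures inside the current window.
        self.failures[key] = [t for t in self.failures[key]
                              if now - t < self.window_seconds]
        self.failures[key].append(now)
        if len(self.failures[key]) >= self.max_attempts:
            self.blocked_until[key] = now + self.block_seconds

    def record_success(self, username, ip):
        # A successful login clears the failure history and any block.
        self.failures.pop((username, ip), None)
        self.blocked_until.pop((username, ip), None)
```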

Configuration

Rate limiting and login throttling can be configured via environment variables or by modifying the app/config/security_config.py file.

Environment Variables

# Rate Limiting
RATE_LIMIT_ENABLED=true
RATE_LIMIT_MAX_REQUESTS=100
RATE_LIMIT_WINDOW_SECONDS=60
RATE_LIMIT_BLOCK_DURATION_SECONDS=300

# Login Throttling
LOGIN_THROTTLING_ENABLED=true
LOGIN_THROTTLING_MAX_ATTEMPTS=5
LOGIN_THROTTLING_WINDOW_SECONDS=300
LOGIN_THROTTLING_BLOCK_DURATION_SECONDS=900

Configuration Parameters

Parameter                                  Default  Description
RATE_LIMIT_ENABLED                         true     Enable/disable rate limiting
RATE_LIMIT_MAX_REQUESTS                    100      Maximum requests allowed within the window
RATE_LIMIT_WINDOW_SECONDS                  60       Rate-limit window length, in seconds
RATE_LIMIT_BLOCK_DURATION_SECONDS          300      How long (seconds) to block requests after the limit is exceeded
LOGIN_THROTTLING_ENABLED                   true     Enable/disable login throttling
LOGIN_THROTTLING_MAX_ATTEMPTS              5        Maximum login attempts allowed within the window
LOGIN_THROTTLING_WINDOW_SECONDS            300      Login-attempt window length, in seconds
LOGIN_THROTTLING_BLOCK_DURATION_SECONDS    900      How long (seconds) to block login attempts after the limit is exceeded
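The actual contents of app/config/security_config.py are not reproduced here; for illustration, a minimal version reading these variables with their defaults could look like this sketch:

```python
import os

def _env_bool(name, default):
    # Treat "true"/"1"/"yes" (case-insensitive) as True.
    return os.getenv(name, str(default)).strip().lower() in ("true", "1", "yes")

def _env_int(name, default):
    return int(os.getenv(name, str(default)))

RATE_LIMIT_ENABLED = _env_bool("RATE_LIMIT_ENABLED", True)
RATE_LIMIT_MAX_REQUESTS = _env_int("RATE_LIMIT_MAX_REQUESTS", 100)
RATE_LIMIT_WINDOW_SECONDS = _env_int("RATE_LIMIT_WINDOW_SECONDS", 60)
RATE_LIMIT_BLOCK_DURATION_SECONDS = _env_int("RATE_LIMIT_BLOCK_DURATION_SECONDS", 300)

LOGIN_THROTTLING_ENABLED = _env_bool("LOGIN_THROTTLING_ENABLED", True)
LOGIN_THROTTLING_MAX_ATTEMPTS = _env_int("LOGIN_THROTTLING_MAX_ATTEMPTS", 5)
LOGIN_THROTTLING_WINDOW_SECONDS = _env_int("LOGIN_THROTTLING_WINDOW_SECONDS", 300)
LOGIN_THROTTLING_BLOCK_DURATION_SECONDS = _env_int("LOGIN_THROTTLING_BLOCK_DURATION_SECONDS", 900)
```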

Implementation

The rate limiting and login throttling features are implemented in the app/security/rate_limiter.py module. The module uses Redis to store rate limiting data, with an in-memory fallback if Redis is not available.
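A sketch of that Redis-with-fallback pattern, using the redis-py pipeline API for the happy path (the class and method names are illustrative, not the actual rate_limiter.py interface):

```python
import time

class RedisWindowCounter:
    """Counts requests per key in Redis, degrading to a per-process
    dict if Redis is unreachable. Illustrative only."""

    def __init__(self, redis_client, window_seconds=60):
        self.redis = redis_client
        self.window_seconds = window_seconds
        self.fallback = {}  # key -> (count, window_start)

    def increment(self, key):
        try:
            # Atomically bump the counter and refresh its TTL in Redis.
            pipe = self.redis.pipeline()
            pipe.incr(key)
            pipe.expire(key, self.window_seconds)
            count, _ = pipe.execute()
            return count
        except Exception:
            # Redis unavailable: fall back to in-memory counting.
            now = time.monotonic()
            count, start = self.fallback.get(key, (0, now))
            if now - start >= self.window_seconds:
                count, start = 0, now
            count += 1
            self.fallback[key] = (count, start)
            return count
```

Note that the in-memory fallback is per process, so limits enforced during a Redis outage are looser when the backend runs multiple workers.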

API Responses

When a client exceeds the rate limit, the API returns a 429 (Too Many Requests) status code with a JSON response:

{
  "detail": "Rate limit exceeded. Please try again later."
}

When a client exceeds the login throttling limit, the API returns a 429 status code with a JSON response:

{
  "detail": "Too many login attempts. Please try again later."
}

Client Implementation Examples

Handling Rate Limiting in Clients

Clients should handle rate-limiting responses gracefully, backing off and retrying rather than failing immediately. Here are examples in Python, JavaScript, Bash, and C#.

Python:

import requests
import time

def make_api_request(url, headers, data=None, max_retries=3):
    retries = 0
    while retries < max_retries:
        try:
            if data:
                response = requests.post(url, headers=headers, json=data)
            else:
                response = requests.get(url, headers=headers)

            if response.status_code == 429:  # Too Many Requests
                retry_after = int(response.headers.get('Retry-After', 60))
                print(f"Rate limit exceeded. Retrying after {retry_after} seconds.")
                time.sleep(retry_after)
                retries += 1
                continue

            return response
        except Exception as e:
            print(f"Error making request: {e}")
            retries += 1
            time.sleep(2)

    raise Exception("Max retries exceeded")

JavaScript:

async function makeApiRequest(url, headers, data = null, maxRetries = 3) {
  let retries = 0;

  while (retries < maxRetries) {
    try {
      const options = {
        method: data ? 'POST' : 'GET',
        headers: headers,
        body: data ? JSON.stringify(data) : undefined
      };

      const response = await fetch(url, options);

      if (response.status === 429) {  // Too Many Requests
        const retryAfter = parseInt(response.headers.get('Retry-After') || '60');
        console.log(`Rate limit exceeded. Retrying after ${retryAfter} seconds.`);
        await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
        retries++;
        continue;
      }

      return response;
    } catch (error) {
      console.error(`Error making request: ${error}`);
      retries++;
      await new Promise(resolve => setTimeout(resolve, 2000));
    }
  }

  throw new Error('Max retries exceeded');
}

Bash:

#!/bin/bash

make_api_request() {
  local url=$1
  local token=$2
  local data=$3
  local max_retries=3
  local retries=0

  while [ $retries -lt $max_retries ]; do
    if [ -z "$data" ]; then
      # GET request
      response=$(curl -s -w "%{http_code}" -H "Authorization: Bearer $token" "$url")
    else
      # POST request
      response=$(curl -s -w "%{http_code}" -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d "$data" "$url")
    fi

    http_code=${response: -3}
    content=${response:0:${#response}-3}

    if [ "$http_code" == "429" ]; then
      echo "Rate limit exceeded. Retrying after 60 seconds."
      sleep 60
      retries=$((retries+1))
      continue
    fi

    echo "$content"
    return 0
  done

  echo "Max retries exceeded"
  return 1
}

C#:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class ApiClient
{
    private readonly HttpClient _client;

    public ApiClient(string baseUrl)
    {
        _client = new HttpClient();
        _client.BaseAddress = new Uri(baseUrl);
    }

    public async Task<HttpResponseMessage> MakeApiRequestAsync(
        string endpoint, 
        string token, 
        object data = null, 
        int maxRetries = 3)
    {
        int retries = 0;

        while (retries < maxRetries)
        {
            try
            {
                HttpResponseMessage response;

                if (data == null)
                {
                    // GET request
                    _client.DefaultRequestHeaders.Authorization = 
                        new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);
                    response = await _client.GetAsync(endpoint);
                }
                else
                {
                    // POST request
                    _client.DefaultRequestHeaders.Authorization = 
                        new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);
                    var content = new StringContent(
                        JsonConvert.SerializeObject(data), 
                        Encoding.UTF8, 
                        "application/json");
                    response = await _client.PostAsync(endpoint, content);
                }

                if ((int)response.StatusCode == 429) // Too Many Requests
                {
                    int retryAfter = 60;
                    if (response.Headers.RetryAfter != null && 
                        response.Headers.RetryAfter.Delta.HasValue)
                    {
                        retryAfter = (int)response.Headers.RetryAfter.Delta.Value.TotalSeconds;
                    }

                    Console.WriteLine($"Rate limit exceeded. Retrying after {retryAfter} seconds.");
                    await Task.Delay(retryAfter * 1000);
                    retries++;
                    continue;
                }

                return response;
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Error making request: {ex.Message}");
                retries++;
                await Task.Delay(2000);
            }
        }

        throw new Exception("Max retries exceeded");
    }
}

Best Practices

  1. Implement Exponential Backoff: When receiving a rate limit response, use exponential backoff to retry the request.
  2. Cache Responses: Cache API responses to reduce the number of requests made to the API.
  3. Batch Requests: Combine multiple requests into a single batch request where possible.
  4. Monitor Rate Limit Headers: Check response headers for rate limit information to adjust client behavior.
  5. Handle Rate Limit Errors Gracefully: Provide a good user experience when rate limits are hit.
  6. Distribute Load: If possible, distribute requests across multiple clients or IP addresses.
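The exponential-backoff advice in item 1 can be sketched as follows, using "full jitter" (each delay is drawn uniformly between zero and an exponentially growing cap); all names here are illustrative:

```python
import random
import time

def backoff_delays(max_retries=5, base=1.0, cap=60.0):
    """Yield max_retries delays; attempt n waits up to base * 2**n
    seconds (capped), with the actual delay drawn uniformly at random."""
    for attempt in range(max_retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

def call_with_backoff(fn, is_rate_limited, max_retries=5, base=1.0):
    """Call fn(); if the result is rate-limited, sleep and retry
    with exponentially growing, jittered delays."""
    for delay in backoff_delays(max_retries, base=base):
        result = fn()
        if not is_rate_limited(result):
            return result
        time.sleep(delay)
    raise RuntimeError("Max retries exceeded")
```

Jitter matters because many clients blocked at the same instant would otherwise all retry at the same instant, re-triggering the limit together.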