
Rate Limits

The InChurch API implements rate limiting to ensure fair usage and maintain optimal performance for all users. This guide explains how rate limits work and how to handle them in your applications.

Rate Limit Overview

All API clients are subject to the following limits:

  • 200 requests per minute per API client
  • Limits are enforced using a token bucket algorithm
  • Limits reset every minute
  • Rate limiting is applied per API client
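
The token bucket model mentioned above can be sketched as follows. This is an illustrative model of how such a limiter typically behaves, not the server's actual implementation: each client starts with a full bucket of 200 tokens, every request consumes one, and tokens refill at a steady rate over the one-minute window.

```javascript
// Illustrative token-bucket sketch (assumed behavior, not the server's code):
// `capacity` tokens per client, refilled continuously at capacity/windowMs.
class TokenBucket {
  constructor(capacity = 200, windowMs = 60000, now = Date.now) {
    this.capacity = capacity;
    this.refillRate = capacity / windowMs; // tokens per millisecond
    this.tokens = capacity;
    this.now = now;
    this.lastRefill = now();
  }

  tryConsume() {
    const current = this.now();
    // Refill based on elapsed time, capped at the bucket capacity
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (current - this.lastRefill) * this.refillRate
    );
    this.lastRefill = current;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request allowed
    }
    return false; // request would be rejected with 429
  }
}
```

A practical consequence of this model: a burst of 200 requests succeeds immediately, but the 201st within the same window is rejected until enough tokens have refilled.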

Rate Limit Headers

Every API response includes rate limit information in the headers:

Code
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 150
X-RateLimit-Reset: 1640995200
  • X-RateLimit-Limit: Maximum requests allowed per minute
  • X-RateLimit-Remaining: Requests remaining in the current window
  • X-RateLimit-Reset: Unix timestamp when the limit resets

Handling Rate Limit Exceeded

When you exceed the rate limit, the API returns:

  • Status Code: 429 Too Many Requests
  • Retry-After Header: Seconds until you can make requests again

Example Rate Limit Response

JSON
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Too many requests. Please retry after 60 seconds.",
    "details": {
      "limit": 200,
      "reset_at": "2023-12-31T15:20:00Z"
    }
  }
}

Best Practices

1. Monitor Rate Limit Headers

Always check the rate limit headers in your responses:

JavaScript
async function makeApiRequest(url, options = {}) {
  const response = await fetch(url, {
    ...options,
    headers: {
      'Authorization': `Bearer ${apiKey}:${apiSecret}`,
      'Content-Type': 'application/json',
      ...options.headers
    }
  });

  // Check rate limit headers
  const remaining = response.headers.get('X-RateLimit-Remaining');
  const reset = response.headers.get('X-RateLimit-Reset');

  console.log(`Requests remaining: ${remaining}`);
  console.log(`Rate limit resets at: ${new Date(reset * 1000)}`);

  if (response.status === 429) {
    const retryAfter = response.headers.get('Retry-After');
    throw new Error(`Rate limited. Retry after ${retryAfter} seconds`);
  }

  return response.json();
}
Python
import requests
from datetime import datetime

def make_api_request(url, **kwargs):
    headers = {
        'Authorization': f'Bearer {api_key}:{api_secret}',
        'Content-Type': 'application/json'
    }
    headers.update(kwargs.get('headers', {}))
    kwargs['headers'] = headers

    response = requests.get(url, **kwargs)

    # Check rate limit headers
    remaining = response.headers.get('X-RateLimit-Remaining')
    reset_time = response.headers.get('X-RateLimit-Reset')

    print(f"Requests remaining: {remaining}")
    if reset_time:
        reset_datetime = datetime.fromtimestamp(int(reset_time))
        print(f"Rate limit resets at: {reset_datetime}")

    if response.status_code == 429:
        retry_after = int(response.headers.get('Retry-After', 60))
        raise Exception(f"Rate limited. Retry after {retry_after} seconds")

    return response.json()

2. Implement Exponential Backoff

When you hit rate limits, implement exponential backoff:

JavaScript
async function makeApiRequestWithRetry(url, options = {}, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      if (response.status === 429) {
        if (attempt === maxRetries) {
          throw new Error('Max retries exceeded');
        }

        // Honor Retry-After when present; otherwise back off exponentially
        const retryAfter = response.headers.get('Retry-After');
        const delay = retryAfter
          ? parseInt(retryAfter) * 1000
          : Math.pow(2, attempt) * 1000;

        console.log(`Rate limited. Waiting ${delay}ms before retry ${attempt + 1}`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }

      return response.json();
    } catch (error) {
      if (attempt === maxRetries) throw error;

      const delay = Math.pow(2, attempt) * 1000;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
Python
import random
import time

import requests

def make_api_request_with_retry(url, max_retries=3, **kwargs):
    for attempt in range(max_retries + 1):
        try:
            response = requests.get(url, **kwargs)

            if response.status_code == 429:
                if attempt == max_retries:
                    raise Exception('Max retries exceeded')

                # Honor Retry-After when present; otherwise back off exponentially
                retry_after = response.headers.get('Retry-After')
                delay = int(retry_after) if retry_after else (2 ** attempt)

                # Add jitter to prevent thundering herd
                jitter = random.uniform(0.1, 0.5)
                total_delay = delay + jitter

                print(f"Rate limited. Waiting {total_delay:.1f}s before retry {attempt + 1}")
                time.sleep(total_delay)
                continue

            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            if attempt == max_retries:
                raise e
            delay = (2 ** attempt) + random.uniform(0.1, 0.5)
            time.sleep(delay)

3. Optimize Request Patterns

Batch Operations

Instead of making multiple individual requests, use batch operations when available:

JavaScript
// ❌ Multiple individual requests
const people = [];
for (const id of personIds) {
  const response = await fetch(`/api/v1/people/${id}`);
  people.push(await response.json());
}

// ✅ Single request with filtering
const people = await fetch(`/api/v1/people?ids=${personIds.join(',')}`)
  .then(response => response.json());

Use Pagination Efficiently

Request larger page sizes to reduce the number of requests:

JavaScript
// ❌ Small page sizes = more requests
const allPeople = [];
let page = 1;
while (true) {
  const response = await fetch(`/api/v1/people?page=${page}&limit=10`);
  const data = await response.json();
  allPeople.push(...data.data);
  if (data.data.length < 10) break;
  page++;
}

// ✅ Larger page sizes = fewer requests
const response = await fetch('/api/v1/people?limit=100');

4. Cache Responses

Cache API responses to reduce unnecessary requests:

JavaScript
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function getCachedData(url) {
  const cached = cache.get(url);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const data = await makeApiRequest(url);
  cache.set(url, { data, timestamp: Date.now() });
  return data;
}

Rate Limit Strategies

For High-Volume Applications

If you need to make more than 200 requests per minute:

  1. Multiple API Clients: Create multiple API clients to increase your total rate limit
  2. Request Queuing: Implement a queue system to smooth out request bursts
  3. Webhook Integration: Use webhooks to receive real-time updates instead of polling
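
As a sketch of the webhook approach, the handler below reacts to a pushed event instead of re-fetching resources on a schedule. The event name (`person.updated`) and payload shape are assumptions for illustration; consult the webhook documentation for the actual format.

```javascript
// Hypothetical webhook handler sketch: the event type and payload fields
// below are assumed, not taken from the actual webhook specification.
function handleWebhook(rawBody) {
  const event = JSON.parse(rawBody);

  switch (event.type) {
    case 'person.updated':
      // Refresh or invalidate the cached record instead of polling for it
      return { status: 200, action: `refresh person ${event.data.id}` };
    default:
      // Acknowledge unknown events so the sender does not keep retrying them
      return { status: 200, action: 'ignored' };
  }
}
```

Wire this into whatever HTTP framework serves your webhook endpoint, and return the 200 quickly; defer any heavy processing so the sender's delivery timeout is never hit.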

Request Queue Implementation

JavaScript
class ApiQueue {
  constructor(rateLimit = 200, windowMs = 60000) {
    this.queue = [];
    this.processing = false;
    this.rateLimit = rateLimit;
    this.windowMs = windowMs;
    this.requests = [];
  }

  async add(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing) return;
    this.processing = true;

    while (this.queue.length > 0) {
      // Clean old requests outside the window
      const now = Date.now();
      this.requests = this.requests.filter(time => now - time < this.windowMs);

      if (this.requests.length >= this.rateLimit) {
        // Wait until we can make another request
        const oldestRequest = Math.min(...this.requests);
        const waitTime = this.windowMs - (now - oldestRequest);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }

      const { requestFn, resolve, reject } = this.queue.shift();

      try {
        this.requests.push(now);
        const result = await requestFn();
        resolve(result);
      } catch (error) {
        reject(error);
      }
    }

    this.processing = false;
  }
}

// Usage
const apiQueue = new ApiQueue();
const person = await apiQueue.add(() =>
  fetch('/api/v1/people/123').then(r => r.json())
);

Monitoring and Alerting

Track Rate Limit Usage

Monitor your rate limit usage to optimize your application:

JavaScript
class RateLimitMonitor {
  constructor() {
    this.stats = {
      requests: 0,
      rateLimited: 0,
      averageRemaining: 0
    };
  }

  recordRequest(headers) {
    this.stats.requests++;
    const remaining = parseInt(headers.get('X-RateLimit-Remaining') || '0');
    // Running average, weighted toward the most recent responses
    this.stats.averageRemaining = (this.stats.averageRemaining + remaining) / 2;

    if (remaining < 10) {
      console.warn(`Rate limit warning: Only ${remaining} requests remaining`);
    }
  }

  recordRateLimit() {
    this.stats.rateLimited++;
    console.error('Rate limit exceeded!');
  }

  getStats() {
    return {
      ...this.stats,
      rateLimitPercentage: (this.stats.rateLimited / this.stats.requests) * 100
    };
  }
}