Rate Limiting

Rate limiting protects the Billoget API from abuse and ensures fair usage across all users. This guide explains how rate limiting works and how to handle it in your applications.

🚦 How Rate Limiting Works​

Sliding Window Algorithm​

The Billoget API uses a sliding window algorithm to track request rates. This means:

  • Requests are counted per hour from the first request
  • The window slides continuously, not in fixed hourly blocks
  • More accurate and fair than fixed time windows
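
The bullets above can be sketched in a few lines. This is an illustrative client-side model of sliding-window counting, not the server-side implementation:

```javascript
// Illustrative sliding-window counter: a request is allowed only if fewer
// than `limit` requests landed in the trailing window.
class SlidingWindowCounter {
  constructor(limit, windowMs = 60 * 60 * 1000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true if a request at time `now` is allowed, recording it if so.
  tryRequest(now = Date.now()) {
    const windowStart = now - this.windowMs;
    // Drop requests that have slid out of the trailing window
    this.timestamps = this.timestamps.filter((t) => t > windowStart);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Because the window trails each request continuously, capacity frees up gradually as old requests age out, rather than all at once at the top of the hour.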

Per-API Key Limits​

Each API key has its own rate limit, configurable when creating the key:

{
  "rateLimitPerHour": 1000
}

📊 Default Limits

Standard Limits​

| API Key Type | Default Limit | Configurable |
| --- | --- | --- |
| Standard | 1,000 req/hour | ✅ Yes (up to 10,000) |
| Premium | 5,000 req/hour | ✅ Yes (up to 50,000) |
| Enterprise | 25,000 req/hour | ✅ Yes (unlimited) |

Endpoint-Specific Limits​

Some endpoints may have additional limits:

| Endpoint | Additional Limit |
| --- | --- |
| POST /api/public/webhooks/test | 10 req/minute |
| File uploads | 100 MB/hour |
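
If your client fans out across such endpoints, a small client-side guard can pre-check per-endpoint limits before sending. This is a hypothetical sketch; the endpoint key and limit values are taken from the table above, but the guard itself is not part of the Billoget SDK:

```javascript
// Hypothetical per-endpoint guard. Keys and limits mirror the table above.
const ENDPOINT_LIMITS = {
  "POST /api/public/webhooks/test": { max: 10, windowMs: 60 * 1000 },
};

function makeEndpointGuard(limits) {
  const history = new Map(); // endpoint -> timestamps of recent requests

  // Returns true if a request to `endpoint` at time `now` is within its limit.
  return function allow(endpoint, now = Date.now()) {
    const rule = limits[endpoint];
    if (!rule) return true; // no extra limit for this endpoint

    const recent = (history.get(endpoint) || []).filter(
      (t) => t > now - rule.windowMs
    );
    if (recent.length >= rule.max) {
      history.set(endpoint, recent);
      return false;
    }
    recent.push(now);
    history.set(endpoint, recent);
    return true;
  };
}
```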

📈 Rate Limit Headers

Every API response includes rate limit information in the headers:

HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1642694400
X-RateLimit-Window: 3600

Header Descriptions​

| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum requests allowed per hour |
| X-RateLimit-Remaining | Requests remaining in current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
| X-RateLimit-Window | Window size in seconds (3600 = 1 hour) |
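
A small helper can turn these raw header strings into actionable numbers (note that `X-RateLimit-Reset` is in seconds, so it needs a ×1000 conversion for JavaScript timestamps). The helper name is illustrative:

```javascript
// Parse rate-limit headers into usable numbers.
// `headers` is anything with a get() method, e.g. a fetch Response's headers.
function readRateLimit(headers) {
  const limit = parseInt(headers.get("X-RateLimit-Limit"), 10);
  const remaining = parseInt(headers.get("X-RateLimit-Remaining"), 10);
  const resetAtMs = parseInt(headers.get("X-RateLimit-Reset"), 10) * 1000;

  return {
    limit,
    remaining,
    usedFraction: (limit - remaining) / limit, // 0.0 .. 1.0
    msUntilReset: Math.max(0, resetAtMs - Date.now()),
  };
}
```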

⚠️ Rate Limit Exceeded​

When you exceed your rate limit, you'll receive a 429 Too Many Requests response:

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1642697000
Retry-After: 3600

{
  "error": "rate_limit_exceeded",
  "message": "Rate limit exceeded. Try again in 3600 seconds",
  "details": {
    "limit": 1000,
    "remaining": 0,
    "resetAt": "2024-01-15T11:30:00Z"
  }
}

Retry-After Header​

The Retry-After header tells you how many seconds to wait before making another request.

πŸ› οΈ Handling Rate Limits​

1. Check Headers Before Requests​

class BillogetAPI {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseUrl = "https://api.billoget.com/v1";
    this.rateLimitInfo = {
      limit: null,
      remaining: null,
      reset: null,
    };
  }

  async makeRequest(endpoint, options = {}) {
    // Warn when we're close to the limit (skip before the first response,
    // while remaining is still null)
    if (this.rateLimitInfo.remaining !== null && this.rateLimitInfo.remaining < 10) {
      console.warn("Approaching rate limit. Consider slowing down requests.");
    }

    const response = await fetch(`${this.baseUrl}${endpoint}`, {
      ...options,
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        "Content-Type": "application/json",
        ...options.headers,
      },
    });

    // Update rate limit info from headers
    this.updateRateLimitInfo(response.headers);

    return response;
  }

  updateRateLimitInfo(headers) {
    this.rateLimitInfo = {
      limit: parseInt(headers.get("X-RateLimit-Limit"), 10),
      remaining: parseInt(headers.get("X-RateLimit-Remaining"), 10),
      reset: parseInt(headers.get("X-RateLimit-Reset"), 10),
    };
  }
}

2. Implement Exponential Backoff​

async function makeRequestWithBackoff(apiCall, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await apiCall();

      if (response.status === 429) {
        // Honor Retry-After when present; otherwise back off exponentially
        const retryAfter = response.headers.get("Retry-After");
        const delay = retryAfter
          ? parseInt(retryAfter, 10) * 1000
          : Math.pow(2, attempt) * 1000;

        console.log(
          `Rate limited. Waiting ${delay}ms before retry ${attempt + 1}/${maxRetries}`
        );
        await sleep(delay);
        continue;
      }

      return response;
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      await sleep(Math.pow(2, attempt) * 1000);
    }
  }

  // Every attempt came back 429
  throw new Error(`Still rate limited after ${maxRetries} attempts`);
}

function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

3. Request Queuing​

class RateLimitedQueue {
  constructor(apiKey, requestsPerHour = 1000) {
    this.apiKey = apiKey;
    this.requestsPerHour = requestsPerHour;
    this.queue = [];
    this.processing = false;
    this.requestTimes = [];
  }

  async addRequest(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.processing || this.queue.length === 0) return;

    this.processing = true;

    while (this.queue.length > 0) {
      // Clean old request times (older than 1 hour)
      const oneHourAgo = Date.now() - 60 * 60 * 1000;
      this.requestTimes = this.requestTimes.filter((time) => time > oneHourAgo);

      // Check if we can make a request
      if (this.requestTimes.length >= this.requestsPerHour) {
        const oldestRequest = Math.min(...this.requestTimes);
        const waitTime = oldestRequest + 60 * 60 * 1000 - Date.now();

        if (waitTime > 0) {
          await sleep(waitTime);
          continue;
        }
      }

      // Process next request
      const { requestFn, resolve, reject } = this.queue.shift();

      try {
        this.requestTimes.push(Date.now());
        const result = await requestFn();
        resolve(result);
      } catch (error) {
        reject(error);
      }

      // Small delay between requests
      await sleep(100);
    }

    this.processing = false;
  }
}

📊 Monitoring Rate Limits

Real-time Monitoring​

class RateLimitMonitor {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.stats = {
      requestsThisHour: 0,
      limit: 1000,
      remaining: 1000,
      resetTime: null,
    };
  }

  updateFromHeaders(headers) {
    this.stats.limit = parseInt(headers.get("X-RateLimit-Limit"), 10);
    this.stats.remaining = parseInt(headers.get("X-RateLimit-Remaining"), 10);
    this.stats.resetTime = new Date(
      parseInt(headers.get("X-RateLimit-Reset"), 10) * 1000
    );
    this.stats.requestsThisHour = this.stats.limit - this.stats.remaining;
  }

  getUsagePercentage() {
    return ((this.stats.requestsThisHour / this.stats.limit) * 100).toFixed(2);
  }

  getTimeUntilReset() {
    if (!this.stats.resetTime) return null;
    return Math.max(0, this.stats.resetTime.getTime() - Date.now());
  }

  shouldSlowDown() {
    return this.getUsagePercentage() > 80;
  }

  logStatus() {
    console.log(`Rate Limit Status:
  Usage: ${this.getUsagePercentage()}% (${this.stats.requestsThisHour}/${this.stats.limit})
  Remaining: ${this.stats.remaining}
  Reset in: ${Math.round(this.getTimeUntilReset() / 1000 / 60)} minutes`);
  }
}

Usage Analytics​

Get detailed rate limit analytics from the API:

curl -X GET "https://api.billoget.com/api-keys/api_key_123/stats?period=24h" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"

Response includes rate limit information:

{
  "rateLimitStats": {
    "currentUsage": 750,
    "limit": 1000,
    "percentage": 75.0,
    "peakUsage": 950,
    "peakTime": "2024-01-15T14:30:00Z",
    "rateLimitHits": 3,
    "averageRequestsPerHour": 680
  }
}

🎯 Best Practices​

1. Respect Rate Limits​

Always check rate limit headers and adjust your request rate accordingly.

2. Implement Caching​

Cache responses when possible to reduce API calls:

class CachedAPI {
  constructor(apiKey, cacheTTL = 300000) { // 5 minutes
    this.apiKey = apiKey;
    this.cache = new Map();
    this.cacheTTL = cacheTTL;
  }

  async get(endpoint) {
    const cacheKey = `GET:${endpoint}`;
    const cached = this.cache.get(cacheKey);

    if (cached && Date.now() - cached.timestamp < this.cacheTTL) {
      return cached.data;
    }

    // Assumes a makeRequest(endpoint) helper like the one in BillogetAPI above
    const response = await this.makeRequest(endpoint);
    const data = await response.json();

    this.cache.set(cacheKey, {
      data,
      timestamp: Date.now(),
    });

    return data;
  }
}

3. Batch Operations​

Group multiple operations into single requests when possible.
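
Which endpoints accept multiple items per call is documented in the API Reference. Where an endpoint does, a generic chunking helper keeps each request's payload bounded while cutting the total request count. A sketch (the helper name is ours, not part of any SDK):

```javascript
// Split a list of work items into fixed-size chunks, so one API call
// can cover several items instead of issuing one call per item.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```

For example, `chunk(budgetIds, 50)` turns 500 single-item requests into 10 batched ones, a 50x reduction against the hourly limit.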

4. Use Webhooks​

Instead of polling for changes, use webhooks to get real-time notifications.

5. Monitor Usage​

Regularly check your usage patterns and adjust limits if needed.

🚨 Common Scenarios​

High-Volume Applications​

For applications that need higher limits:

# Request a higher rate limit
curl -X PUT "https://api.billoget.com/api-keys/api_key_123" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "rateLimitPerHour": 5000
  }'

Burst Traffic​

Handle sudden spikes in traffic:

class BurstHandler {
  constructor(apiKey, burstLimit = 100) {
    this.apiKey = apiKey;
    this.burstLimit = burstLimit;
    this.burstCount = 0;
    this.burstResetTime = Date.now() + 60000; // 1 minute
  }

  async handleBurst(requests) {
    const now = Date.now();

    // Reset burst counter every minute
    if (now > this.burstResetTime) {
      this.burstCount = 0;
      this.burstResetTime = now + 60000;
    }

    // Check if burst limit would be exceeded
    if (this.burstCount + requests.length > this.burstLimit) {
      throw new Error("Burst limit exceeded. Please slow down requests.");
    }

    this.burstCount += requests.length;
    // Assumes a makeRequest(req) helper like the one in BillogetAPI above
    return Promise.all(requests.map((req) => this.makeRequest(req)));
  }
}

🔧 Testing Rate Limits

Test Script​

async function testRateLimit(apiKey) {
  const api = new BillogetAPI(apiKey);
  let requestCount = 0;

  console.log("Testing rate limit...");

  try {
    while (true) {
      const response = await api.makeRequest("/api/public/budgets?limit=1");
      requestCount++;

      const remaining = response.headers.get("X-RateLimit-Remaining");
      console.log(`Request ${requestCount}, Remaining: ${remaining}`);

      if (response.status === 429) {
        console.log("Rate limit reached!");
        break;
      }

      // Small delay to avoid overwhelming the server
      await sleep(10);
    }
  } catch (error) {
    console.error("Error:", error.message);
  }
}

🚀 Next Steps

Now that you understand rate limiting, let's learn about:

  1. Error Handling - Handle API errors gracefully
  2. API Reference - Explore all available endpoints

Ready to learn about error handling? Let's go to Error Handling! 🚨