Domain API Rate Limit, Throttling & Bulk Usage Policy

domainnameapi.com — Enterprise-Grade Reseller API Infrastructure
Domain Name API provides an enterprise-grade, high-availability API infrastructure designed to streamline reseller integration workflows at scale. To ensure fair, balanced, and sustainable access for all partners, usage is categorized into two types with dedicated endpoints, rate limits, and API throttling controls.
Quick Rules Summary
A fast reference for all rate limit, API throttling, and endpoint rules.

Rule              | Requirement
------------------+--------------------------------------------------------
Rate limit        | 1 request per second per API Key
Standard calls    | /api (user-triggered, real-time operations)
Bulk / automated  | /api-bulk (anything above 1 automated request/second)
On HTTP 429       | stop, honor Retry-After, retry with exponential backoff
Concurrency       | all threads and workers share one 1 req/sec allowance
1. Definitions: Rate Limiting & API Throttling
What is API Rate Limiting?
API rate limiting is a control mechanism that restricts how many requests a client can send within a specific time window. In Domain Name API, this limit is 1 request per second per API Key. Requests that exceed this limit are throttled — meaning they are rejected — and the client receives an HTTP 429 Too Many Requests response.
What is API Throttling?
API throttling is the server-side enforcement of rate limits. When your request rate exceeds the allowed threshold, the server throttles your requests to protect platform stability. Throttling is automatic and non-negotiable. It applies to all API Keys equally and is not a penalty — it is a protection mechanism for the entire reseller network.
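To make the 1 request per second limit concrete, here is a minimal single-threaded sketch in Python of client-side pacing. The RateLimiter class name and its interval parameter are illustrative, not part of the Domain Name API itself:

```python
import time

class RateLimiter:
    """Client-side pacing: at most one request per `interval` seconds."""
    def __init__(self, interval=1.0):
        self.interval = interval
        self._last = 0.0

    def wait(self):
        # Sleep just long enough so successive calls are spaced >= interval apart.
        now = time.monotonic()
        elapsed = now - self._last
        if elapsed < self.interval:
            time.sleep(self.interval - elapsed)
        self._last = time.monotonic()

limiter = RateLimiter(interval=1.0)
# Call limiter.wait() before every outgoing request to stay under 1 req/sec.
```

Pacing on the client side this way means the server never needs to throttle you, and you never see a 429 in normal operation.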
2. What is Standard vs. Bulk API Usage?
Standard API Usage
Standard API usage covers low-frequency, real-time calls directly triggered by a user action.
Examples:
- Domain availability lookups (user-initiated)
- Individual domain registration requests
- Operations performed through the reseller control panel
Bulk API Usage
Bulk usage covers high-frequency, repetitive API calls triggered automatically by software without direct human interaction.
Any scenario involving more than 1 automated request per second is classified as bulk usage and must use /api-bulk. This applies regardless of async model, thread count, or concurrency strategy.
Examples:
- Domain availability scanning scripts
- Backorder and drop-catching systems
- Bulk domain check / registration workflows
- Cron jobs and background worker services
- Webhook retry and event-driven automation
3. Bulk API Endpoint Rules (/api-bulk)
All automated and bulk operations must use the dedicated endpoint. Both endpoints expose identical functionality; the only difference is the base URI.
Misusing /api for bulk operations will immediately trigger throttling and may lead to permanent API access termination.
- Detected in real time by automated monitoring — no manual review required
- Affected requests are throttled or blocked immediately and without notice
- Repeated violations result in temporary or permanent API access suspension
- Severe or persistent abuse results in full account termination
There is NO performance, speed, or priority advantage to misusing /api for bulk traffic. The system is designed to make this impossible.
4. API Rate Limits & Throttling Explained
Maximum 1 (one) request per second per API Key
- Enforced per API Key. Not per IP, not per account.
- Burst traffic (multiple requests within one second) counts as a violation.
- Exceeded requests receive HTTP 429 Too Many Requests.
- The Retry-After header in the 429 response must be respected.
- Persistent violations trigger progressive throttling and access restrictions.
Concurrency Rule
Even if your system is multi-threaded or fully asynchronous, the total outgoing request rate must not exceed 1 request per second per API Key.
- Parallel async calls count toward the same rate limit
- Multi-threading does NOT grant a higher rate allowance
- Use a centralized rate limiter or queue — shared across all threads and workers
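One way to honor this rule is a single limiter object that every thread funnels through. Below is a minimal Python sketch of such a shared limiter, assuming threads that all use the same API Key; the class and method names are illustrative:

```python
import threading
import time

class SharedRateLimiter:
    """One limiter instance shared by all threads/workers for a single API Key."""
    def __init__(self, interval=1.0):
        self.interval = interval
        self._lock = threading.Lock()
        self._next_slot = time.monotonic()

    def acquire(self):
        # Under the lock, reserve the next available time slot for this caller;
        # slots are spaced `interval` seconds apart regardless of thread count.
        with self._lock:
            now = time.monotonic()
            slot = max(now, self._next_slot)
            self._next_slot = slot + self.interval
        time.sleep(max(0.0, slot - time.monotonic()))

limiter = SharedRateLimiter(interval=1.0)

def worker(domain):
    limiter.acquire()  # every thread funnels through the same limiter
    # the /api-bulk request for `domain` would be sent here
```

Because slots are reserved under a lock, even fully parallel workers collectively emit at most one request per second.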
5. How to Fix HTTP 429 Errors (Rate Limit Exceeded Fix)
If your integration receives HTTP 429 Too Many Requests, follow these steps:
429 Recovery Steps:
- Stop sending requests immediately
- Read the Retry-After header value from the 429 response
- Wait at least 1 second (or Retry-After value, whichever is longer)
- Retry the failed request
- If 429 persists -> apply exponential backoff: 1s -> 2s -> 4s -> 8s -> ...
- Confirm you are using /api-bulk for all automated calls
- Confirm concurrency is not exceeding 1 req/sec total across all threads
6. Code Examples (Bulk API Best Practices)
The following examples demonstrate correct rate limit handling and exponential backoff for the most common integration languages.
C# (.NET / Windows integrations)
// C#: exponential backoff with Retry-After support
var delay = 1000;     // start at 1 second
var maxDelay = 60000; // cap at 60 seconds
while (true)
{
    // Note: an HttpRequestMessage cannot be sent twice; recreate it on each retry.
    var response = await client.SendAsync(request);
    if ((int)response.StatusCode == 429)
    {
        var retryAfter = response.Headers.RetryAfter?.Delta?.TotalMilliseconds ?? delay;
        await Task.Delay((int)Math.Max(retryAfter, delay));
        delay = Math.Min(delay * 2, maxDelay); // exponential backoff
        continue;
    }
    break; // success: exit loop
}
PHP (WordPress / cPanel integrations)
// PHP: exponential backoff with Retry-After support
// httpRequest() is a placeholder for your HTTP helper returning
// an array with 'status' and 'headers' keys.
function sendWithRetry($url, $headers, $maxRetries = 5) {
    $delay = 1; // seconds
    for ($i = 0; $i < $maxRetries; $i++) {
        $response = httpRequest($url, $headers);
        if ($response['status'] === 429) {
            $retryAfter = $response['headers']['Retry-After'] ?? $delay;
            sleep(max((int)$retryAfter, $delay));
            $delay = min($delay * 2, 60); // cap at 60s
            continue;
        }
        return $response; // success
    }
    throw new Exception('Max retries exceeded');
}
Python (scripts / automation)
# Python: exponential backoff with Retry-After support
import time
import requests

def send_with_retry(url, headers, max_retries=5):
    delay = 1  # seconds
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 429:
            retry_after = int(response.headers.get('Retry-After', delay))
            time.sleep(max(retry_after, delay))
            delay = min(delay * 2, 60)  # exponential backoff, cap 60s
            continue
        return response  # success
    raise Exception('Max retries exceeded')
7. Request Flow Diagram
Every bulk API call should follow this flow. Print or save this as a reference for your integration team.
BULK API REQUEST FLOW

        +------------------+
        |  Add domain to   |
        |  request queue   |
        +--------+---------+
                 |
                 v
        +------------------+
        | Send request to  |
        |    /api-bulk     |
        +--------+---------+
                 |
          +------+------+
          |             |
       200 OK      429 Too Many
          |             |
          v             v
      Process    Read Retry-After header
      result     Wait >= 1 second
          |             |
          |             v
          |      Apply exponential backoff
          |      1s -> 2s -> 4s -> 8s -> ...
          |             |
          |             v
          |      Retry request
          |             |
          +------+------+
                 |
                 v
        +------------------+
        |  Next item in    |
        |      queue       |
        +------------------+
8. Best Practices for Bulk API Integration
Request Queuing
Never fire requests without a rate controller. Use a centralized queue that enforces a maximum of 1 outgoing request per second, shared across all threads and workers.
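As a sketch of such a queue, the following Python function drains a list of domains while spacing requests at a fixed interval. The send callback stands in for your actual /api-bulk client call and is an assumption of this sketch:

```python
import queue
import time

def run_queue(domains, send, interval=1.0):
    """Drain a domain queue, sending at most one request per `interval` seconds."""
    q = queue.Queue()
    for d in domains:
        q.put(d)
    results = {}
    while not q.empty():
        domain = q.get()
        results[domain] = send(domain)  # e.g. one call against /api-bulk
        time.sleep(interval)            # fixed spacing keeps the rate at 1 req/sec
    return results
```

Routing every request through one queue like this gives you a single place to enforce the rate, retry failures, and log outcomes.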
Avoid Duplicate Requests
Do not check the same domain more than once within the same session. Cache results locally and de-duplicate your input list before processing.
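A minimal Python sketch of this pattern, using order-preserving de-duplication and an in-memory cache; lowercasing as a normalization step is an assumption, and lookup stands in for your availability call:

```python
def check_domains(domains, lookup):
    """De-duplicate the input and cache results so each domain is queried once."""
    cache = {}
    # dict.fromkeys() removes duplicates while preserving input order.
    for domain in dict.fromkeys(d.lower() for d in domains):
        cache[domain] = lookup(domain)  # one availability call per unique domain
    return cache
```

With this in place, a list containing "Example.com" and "example.com" costs one API call instead of two.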
Monitor Your Error Rate
Track the ratio of 429 responses to successful responses. A rising 429 rate is an early warning that your implementation needs adjustment before access restrictions are triggered.
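A lightweight way to track this ratio in Python; the ErrorRateMonitor class and the 5% threshold are illustrative choices, not values mandated by this policy:

```python
class ErrorRateMonitor:
    """Track the share of 429 responses and flag when it crosses a threshold."""
    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.total = 0
        self.throttled = 0

    def record(self, status_code):
        # Call once per API response with its HTTP status code.
        self.total += 1
        if status_code == 429:
            self.throttled += 1

    @property
    def ratio(self):
        return self.throttled / self.total if self.total else 0.0

    def needs_attention(self):
        # A rising 429 share is the early-warning signal described above.
        return self.ratio > self.threshold
```

Feeding every response through record() and alerting on needs_attention() lets you fix pacing problems before access restrictions are triggered.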
9. Common Integration Mistakes to Avoid
- Sending parallel requests without a centralized rate limiter or queue
- Using /api instead of /api-bulk for automated or scripted operations
- Ignoring HTTP 429 responses and continuing to send requests
- Not implementing retry logic or exponential backoff
- Repeatedly checking the same domain in short intervals
- Rotating API keys or IPs to bypass rate limits (monitored, treated as abuse)
- Assuming async or multi-threaded calls are each counted separately
10. Automated Monitoring & Abuse Detection
The following behaviors are continuously monitored across all API traffic:
- High-volume traffic causing measurable system performance degradation
- Repeated registration or lookup attempts for the same domain
- Elevated error-to-success ratios (high fail rate)
- Abnormal or suspicious traffic patterns
- Connection instability and timeout anomalies
11. Why This Policy Exists
This policy ensures:
- Fair and equal API access for all resellers
- Stable, predictable platform performance at enterprise scale
- Protection against unintended abuse and system overload
- Support for high-volume automation workflows done correctly
- Consistent service quality across the entire reseller network
