Domain API Rate Limit, Throttling & Bulk Usage Policy

Avoid HTTP 429 Errors — Rate Limit Exceeded Fix — Bulk API Best Practices

domainnameapi.com — Enterprise-Grade Reseller API Infrastructure

Domain Name API provides an enterprise-grade, high-availability API infrastructure designed to streamline reseller integration workflows at scale. To ensure fair, balanced, and sustainable access for all partners, usage is categorized into two types with dedicated endpoints, rate limits, and API throttling controls.

Quick Rules Summary

A fast reference for all rate limit, API throttling, and endpoint rules. Bookmark this table.

Rule           | Detail
---------------+--------------------------------------------------
Standard API   | /api — user-triggered, real-time
Bulk API       | /api-bulk — automated, high-frequency
Rate limit     | 1 request / second / API Key
Burst traffic  | Treated as a rate limit violation
Concurrency    | Total rate must not exceed 1 req/sec per API Key
HTTP 429       | Limit exceeded — stop immediately and retry
Retry logic    | Wait 1s → exponential backoff
Enforcement    | Throttle → block → permanent termination

There is NO performance or priority advantage to using /api for bulk operations; doing so will immediately trigger throttling and may lead to permanent API access termination.

1. Definitions: Rate Limiting & API Throttling

What is API Rate Limiting?

API rate limiting is a control mechanism that restricts how many requests a client can send within a specific time window. In Domain Name API, this limit is 1 request per second per API Key. Requests that exceed this limit are throttled — meaning they are rejected — and the client receives an HTTP 429 Too Many Requests response.

What is API Throttling?

API throttling is the server-side enforcement of rate limits. When your request rate exceeds the allowed threshold, the server throttles your requests to protect platform stability. Throttling is automatic and non-negotiable. It applies to all API Keys equally and is not a penalty — it is a protection mechanism for the entire reseller network.
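Conceptually, a 1-request-per-second limit can be modeled as a fixed-window counter per API Key. The Python sketch below is purely illustrative (the platform's actual server-side implementation is not published); it only shows why a second request inside the same one-second window comes back throttled:

```python
import time

class FixedWindowLimiter:
    """Illustrative 1-request-per-second check, keyed by API Key.

    Conceptual sketch only; the real server-side enforcement
    logic is not published.
    """

    def __init__(self, limit=1, window=1.0):
        self.limit = limit      # max requests per window
        self.window = window    # window length in seconds
        self.counters = {}      # api_key -> (window_start, count)

    def allow(self, api_key, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.counters.get(api_key, (now, 0))
        if now - start >= self.window:   # new window: reset the counter
            start, count = now, 0
        if count >= self.limit:          # over the limit: throttled (HTTP 429)
            return False
        self.counters[api_key] = (start, count + 1)
        return True
```

In this model, a burst of two calls within the same second is rejected on the second call, while a different API Key keeps its own independent counter.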

2. What is Standard vs. Bulk API Usage?

Standard API Usage (Real-Time)

Standard API usage covers low-frequency, real-time calls directly triggered by a user action.

Examples:

  • Domain availability lookups (user-initiated)
  • Individual domain registration requests
  • Operations performed through the reseller control panel

Bulk API Usage (Automated)

Bulk usage covers high-frequency, repetitive API calls triggered automatically by software without direct human interaction.

Any scenario involving more than 1 automated request per second is classified as bulk usage and must use /api-bulk. This applies regardless of async model, thread count, or concurrency strategy.

Examples:

  • Domain availability scanning scripts
  • Backorder and drop-catching systems
  • Bulk domain check / registration workflows
  • Cron jobs and background worker services
  • Webhook retry and event-driven automation

3. Bulk API Endpoint Rules (/api-bulk)

All automated and bulk operations must use the dedicated endpoint. Both endpoints expose identical functionality; the only difference is the base URI.

Usage Type                  | Endpoint
----------------------------+----------
Real-time (user-triggered)  | /api
Automated / bulk operations | /api-bulk
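
Since both endpoints are functionally identical, a simple way to keep this rule out of your business logic is to centralize endpoint selection in one helper. The host name below is a placeholder; use the actual base URL from your reseller panel:

```python
# Hypothetical base URL; substitute the real host from your reseller panel.
BASE_URL = "https://api.example-registrar.com"

def endpoint_for(automated: bool) -> str:
    """Return the correct base path: /api-bulk for any automated
    traffic, /api only for real-time, user-triggered calls."""
    return f"{BASE_URL}/api-bulk" if automated else f"{BASE_URL}/api"
```

Routing every request through one helper like this makes it impossible for a background job to accidentally hit /api.
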
⚠️ Mandatory Rule — Zero Tolerance

Misusing /api for bulk operations will immediately trigger throttling and may lead to permanent API access termination.
  • Detected in real time by automated monitoring — no manual review required
  • Affected requests are throttled or blocked immediately and without notice
  • Repeated violations result in temporary or permanent API access suspension
  • Severe or persistent abuse results in full account termination

There is NO performance, speed, or priority advantage to misusing /api for bulk traffic. The system is designed to make this impossible.

4. API Rate Limits & Throttling Explained

Rate Limit — /api-bulk
Maximum 1 (one) request per second per API Key
  • Enforced per API Key. Not per IP, not per account.
  • Burst traffic (multiple requests within one second) counts as a violation.
  • Exceeded requests receive HTTP 429 Too Many Requests.
  • The Retry-After header in the 429 response must be respected.
  • Persistent violations trigger progressive throttling and access restrictions.

Concurrency Rule

Even if your system is multi-threaded or fully asynchronous, the total outgoing request rate must not exceed 1 request per second per API Key.

  • Parallel async calls count toward the same rate limit
  • Multi-threading does NOT grant a higher rate allowance
  • Use a centralized rate limiter or queue — shared across all threads and workers
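
The shared limiter described above can be sketched in a few lines of Python; every thread or worker calls wait() on the same instance before sending, so the combined outgoing rate never exceeds one request per second:

```python
import threading
import time

class CentralRateLimiter:
    """Shared limiter: all threads/workers call wait() on the SAME
    instance before each request, so the total outgoing rate stays
    at or below 1 request per second per API Key."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._lock = threading.Lock()
        self._next_allowed = 0.0

    def wait(self):
        with self._lock:
            now = time.monotonic()
            delay = max(0.0, self._next_allowed - now)
            # Reserve the next slot before sleeping, so concurrent
            # callers queue up behind it instead of racing.
            self._next_allowed = max(now, self._next_allowed) + self.min_interval
        if delay > 0:
            time.sleep(delay)

limiter = CentralRateLimiter()  # one instance shared by the whole process
```

The key design point is that the slot reservation happens under the lock while the sleep happens outside it, so one slow caller never blocks the others from reserving their turn.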

5. How to Fix HTTP 429 Errors (Rate Limit Exceeded Fix)

If your integration receives HTTP 429 Too Many Requests, follow these steps:

429 Recovery Steps:

  1. Stop sending requests immediately
  2. Read the Retry-After header value from the 429 response
  3. Wait at least 1 second (or Retry-After value, whichever is longer)
  4. Retry the failed request
  5. If 429 persists → apply exponential backoff: 1s → 2s → 4s → 8s → ...
  6. Confirm you are using /api-bulk for all automated calls
  7. Confirm concurrency is not exceeding 1 req/sec total across all threads

6. Code Examples (Bulk API Best Practices)

The following examples demonstrate correct rate limit handling and exponential backoff for the most common integration languages.

C# (.NET / Windows integrations)

// C# — Exponential Backoff with Retry-After support
var delay = 1000;     // start at 1 second
var maxDelay = 60000; // cap at 60 seconds

while (true)
{
    // An HttpRequestMessage can only be sent once, so build a fresh
    // one for every attempt (CreateRequest() is your own factory method).
    var response = await client.SendAsync(CreateRequest());

    if ((int)response.StatusCode == 429)
    {
        var retryAfter = response.Headers.RetryAfter?.Delta?.TotalMilliseconds ?? delay;
        await Task.Delay((int)Math.Max(retryAfter, delay));
        delay = Math.Min(delay * 2, maxDelay); // exponential backoff
        continue;
    }

    break; // success — exit loop
}

PHP (WordPress / cPanel integrations)

// PHP — Exponential Backoff with Retry-After support
function sendWithRetry($url, $headers, $maxRetries = 5) {
    $delay = 1; // seconds
    for ($i = 0; $i < $maxRetries; $i++) {
        $response = httpRequest($url, $headers); // httpRequest() = your own HTTP client wrapper
        if ($response['status'] === 429) {
            $retryAfter = $response['headers']['Retry-After'] ?? $delay;
            sleep(max((int)$retryAfter, $delay));
            $delay = min($delay * 2, 60); // cap at 60s
            continue;
        }
        return $response; // success
    }
    throw new Exception('Max retries exceeded');
}

Python (scripts / automation)

# Python — Exponential Backoff with Retry-After support
import time, requests

def send_with_retry(url, headers, max_retries=5):
    delay = 1  # seconds
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 429:
            retry_after = int(response.headers.get('Retry-After', delay))
            time.sleep(max(retry_after, delay))
            delay = min(delay * 2, 60)  # exponential backoff, cap 60s
            continue
        return response  # success
    raise Exception('Max retries exceeded')

7. Request Flow Diagram

Every bulk API call should follow this flow. Print or save this as a reference for your integration team.

                    BULK API REQUEST FLOW

  +------------------+
  |  Add domain to   |
  |  request queue   |
  +--------+---------+
           |
           v
  +------------------+
  |  Send request to |
  |    /api-bulk     |
  +--------+---------+
           |
     +-----+------+
     |            |
   200 OK       429 Too Many
     |            |
     v            v
  Process      Read Retry-After header
  result       Wait >= 1 second
     |            |
     |            v
     |        Apply exponential backoff
     |        1s -> 2s -> 4s -> 8s -> ...
     |            |
     |            v
     |        Retry request
     |            |
     +-----+------+
           |
           v
  +------------------+
  |   Next item in   |
  |      queue       |
  +------------------+
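
The flow above can be condensed into a single worker loop. In this sketch, `send_request` is a placeholder for your actual HTTP call against /api-bulk; it is assumed to return an object with a `status_code` and a `headers` dict:

```python
import queue
import time

def run_bulk_queue(domains, send_request, max_delay=60):
    """Process a domain queue following the bulk request flow:
    send, handle 429 with Retry-After + exponential backoff,
    then move on to the next item.

    `send_request(domain)` is a placeholder for your HTTP call.
    """
    q = queue.Queue()
    for d in domains:
        q.put(d)                  # add domain to request queue
    results = {}
    while not q.empty():
        domain = q.get()
        delay = 1                 # backoff starts at 1 second
        while True:
            response = send_request(domain)          # send to /api-bulk
            if response.status_code == 429:
                retry_after = int(response.headers.get("Retry-After", delay))
                time.sleep(max(retry_after, delay))  # wait >= 1 second
                delay = min(delay * 2, max_delay)    # exponential backoff
                continue                             # retry request
            results[domain] = response               # process result
            break                                    # next item in queue
    return results
```

Combined with a centralized rate limiter in front of `send_request`, this loop implements the full diagram.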

8. Best Practices for Bulk API Integration

Request Queuing

Never fire requests without a rate controller. Use a centralized queue that enforces a maximum of 1 outgoing request per second, shared across all threads and workers.

Avoid Duplicate Requests

Do not check the same domain more than once within the same session. Cache results locally and de-duplicate your input list before processing.
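
A session-level cache plus input de-duplication can be as simple as the sketch below, where `check_fn` is a placeholder for your actual availability call:

```python
def dedupe(domains):
    """Normalize and de-duplicate the input list, preserving order."""
    seen = set()
    unique = []
    for d in domains:
        key = d.strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(key)
    return unique

# Session-level cache so the same domain is never checked twice.
cache = {}

def check_cached(domain, check_fn):
    """`check_fn` is a placeholder for your availability call;
    it only hits the API on a cache miss."""
    key = domain.strip().lower()
    if key not in cache:
        cache[key] = check_fn(key)
    return cache[key]
```
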

Monitor Your Error Rate

Track the ratio of 429 responses to successful responses. A rising 429 rate is an early warning that your implementation needs adjustment before access restrictions are triggered.
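
A minimal counter for this ratio might look like the following sketch:

```python
class ErrorRateMonitor:
    """Track the ratio of 429 responses to all responses; a rising
    ratio is an early warning before access restrictions kick in."""

    def __init__(self):
        self.total = 0
        self.throttled = 0

    def record(self, status_code):
        self.total += 1
        if status_code == 429:
            self.throttled += 1

    @property
    def rate(self):
        """Fraction of responses that were 429s (0.0 if no traffic yet)."""
        return self.throttled / self.total if self.total else 0.0
```

Record every response through one shared instance and alert when `rate` trends upward.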

9. Common Integration Mistakes to Avoid

Common Mistakes
  • Sending parallel requests without a centralized rate limiter or queue
  • Using /api instead of /api-bulk for automated or scripted operations
  • Ignoring HTTP 429 responses and continuing to send requests
  • Not implementing retry logic or exponential backoff
  • Repeatedly checking the same domain in short intervals
  • Rotating API keys or IPs to bypass rate limits (monitored, treated as abuse)
  • Assuming async or multi-threaded calls are each counted separately

10. FAQ — Frequently Asked Questions

Common questions about API rate limiting, throttling, and bulk API usage.

Q: Can I send parallel requests?
A: No. Total outgoing rate must stay at or below 1 req/sec per API Key, regardless of thread or async model.

Q: Can I use multiple API keys to bypass the limit?
A: No. Multi-key abuse is actively monitored and classified as a policy violation subject to account termination.

Q: Does async processing increase my limit?
A: No. Rate limits apply globally to the API Key, not per thread or coroutine.

Q: What happens if I ignore a 429 response?
A: Continued requests after a 429 will compound throttling and may trigger progressive access restrictions.

Q: Is burst traffic allowed?
A: No. Even a short burst of requests within a single second is treated as a rate limit violation.

Q: What is the Retry-After header?
A: It is included in every 429 response and tells you exactly how many seconds to wait before retrying. Always respect it.

Q: How do I request a higher rate limit?
A: Contact our support team with your use case and volume requirements. Custom limits are available for qualified resellers.

Q: Does this policy apply to /api as well?
A: Yes. Using /api for bulk operations is a policy violation. Enforcement is automatic and real-time.

11. Automated Monitoring & Abuse Detection

The following behaviors are continuously monitored across all API traffic:

  • High-volume traffic causing measurable system performance degradation
  • Repeated registration or lookup attempts for the same domain
  • Elevated error-to-success ratios (high fail rate)
  • Abnormal or suspicious traffic patterns
  • Connection instability and timeout anomalies

Attempting to circumvent rate limits by rotating API keys or IP addresses is treated as a serious policy violation. Platform-wide access restrictions and account termination may follow.

12. Why This Policy Exists

This policy ensures:

  • Fair and equal API access for all resellers
  • Stable, predictable platform performance at enterprise scale
  • Protection against unintended abuse and system overload
  • Support for high-volume automation workflows done correctly
  • Consistent service quality across the entire reseller network

13. Requesting Higher Rate Limits

If your integration requires throughput beyond the default 1 request/second limit, contact our support team. We will evaluate your use case and discuss a configuration suited to your reseller profile and volume requirements.