A concise guide for engineers building integrations with NetBox REST and GraphQL APIs.
NetBox Version: 4.4+ (4.5+ for v2 tokens)
- Use v2 tokens on NetBox 4.5+ (`Bearer nbt_<key>.<token>`)
- Migrate from v1 tokens before NetBox 4.7 (deprecation planned)
- Always paginate list requests (max 1000 per page)
- Use `?brief=True` for minimal representations, or `?fields=` for specific fields
- Use `?exclude=config_context` when querying devices/VMs
- Use `PATCH` for partial updates, not `PUT`
- Use list endpoints with JSON arrays for bulk operations
- Use netbox-graphql-query-optimizer for all queries
- Every list query MUST include pagination limits at every nesting level
- Keep query depth ≤3, never exceed 5
- Request only fields you need
- Filter by ID where possible to avoid SQL JOINs
- Use local filter fields instead of deeply nested filter paths
- Use Diode for simplified ingestion (no dependency order, reference by name)
- Use REST/GraphQL for reading; use Diode for writing/populating
| Version | Format | Security | Status |
|---|---|---|---|
| v1 | `Token <40-char-hex>` | Plaintext in DB | Deprecated in 4.7 |
| v2 | `Bearer nbt_<key>.<secret>` | HMAC-SHA256 hashed | Recommended |
```python
import pynetbox

# v2 token usage
headers = {"Authorization": f"Bearer {TOKEN}"}

# pynetbox handles it automatically
nb = pynetbox.api("https://netbox.example.com", token=TOKEN)
```

Use `/api/users/tokens/provision/` for automated token creation in CI/CD pipelines.
| Do | Don't |
|---|---|
| `PATCH` for partial updates | `PUT` (replaces entire object) |
| `?brief=True` for minimal data | Fetch full objects when you only need IDs/names |
| `?fields=foo,bar` for specific fields | Fetch full objects when you only need certain fields |
| `?exclude=config_context` | Include config_context in device/VM lists when not needed |
| Specific filters (`name__ic=`) | Generic search (`q=`) at scale |
| Bulk operations via arrays | Individual requests in loops |
```python
import requests

# Always paginate - NetBox defaults to 50 items per page
def get_all(endpoint):
    results, url = [], f"{API_URL}/{endpoint}/?limit=100"
    while url:
        data = requests.get(url, headers=headers).json()
        results.extend(data["results"])
        url = data.get("next")
    return results

# pynetbox handles pagination automatically
all_devices = list(nb.dcim.devices.all())
```

```python
# Bulk create - POST array to list endpoint
requests.post(f"{API_URL}/dcim/devices/", headers=headers,
              json=[device1, device2, device3])

# Bulk update - PATCH array with IDs
requests.patch(f"{API_URL}/dcim/devices/", headers=headers, json=[
    {"id": 1, "status": "active"},
    {"id": 2, "status": "active"},
])

# Bulk delete - DELETE array with IDs
requests.delete(f"{API_URL}/dcim/devices/", headers=headers,
                json=[{"id": 1}, {"id": 2}])
```

All bulk operations are atomic (all-or-none). Signals and webhooks fire for each object (not once per batch).
| Suffix | Meaning | Example |
|---|---|---|
| (none) | Exact match | `status=active` |
| `__n` | Not equal | `status__n=offline` |
| `__ic` | Contains (case-insensitive) | `name__ic=core` |
| `__isw` | Starts with | `name__isw=sw-` |
| `__gte` / `__lte` | Greater/less than or equal | `vlan_id__gte=100` |
| `__isnull` | Null check | `primary_ip__isnull=false` |
Custom fields: use the `cf_` prefix (e.g., `cf_environment=production`)
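The suffixes above compose as ordinary query parameters. A minimal sketch of building such a query; the hostname and all parameter values are illustrative:

```python
from urllib.parse import urlencode

# Hypothetical base URL; in practice this points at your NetBox instance.
API_URL = "https://netbox.example.com/api"

# Combine filter suffixes and a custom-field filter in one device query.
params = {
    "name__isw": "sw-",              # names starting with "sw-"
    "status__n": "offline",          # exclude offline devices
    "cf_environment": "production",  # custom-field filter
    "limit": 100,                    # always paginate
}
url = f"{API_URL}/dcim/devices/?{urlencode(params)}"
# Pass `url` to requests.get(url, headers=headers) as elsewhere in this guide.
```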
- Use the query optimizer - in practice it has reduced query scores from 20,500 to 17
- Always paginate - Every list, including nested lists
- Limit depth - Keep ≤3 levels, never exceed 5
- Select only needed fields - Don't over-fetch
- Filter by ID where possible - Avoids SQL JOINs for related objects
- Use local filter fields - Avoid deeply nested filter paths (see below)
```graphql
# WRONG: Unbounded, deep, over-fetching
query {
  site_list {
    devices {
      interfaces {
        ip_addresses { vrf { name } }
      }
    }
  }
}
```

```graphql
# CORRECT: Paginated, shallow, selective
query {
  device_list(limit: 100, filters: {site_id: 123}) {
    name
    status
    primary_ip4 { address }
  }
}
```

| Query Type | Max Score |
|---|---|
| Dashboard widgets | 50 |
| List views | 150 |
| Detail views | 200 |
| Reports | 500 |
Filter by ID to avoid JOINs:

```graphql
# SUBOPTIMAL: Requires JOIN to match site name
device_list(filters: {site: {name: {exact: "NYC-DC1"}}})

# OPTIMAL: Uses local site_id column directly
device_list(filters: {site_id: 123})
```

Use local filter fields (4.5.1+):

```graphql
# SUBOPTIMAL: Filter depth 3
interface_list(filters: {device: {site: {name: {exact: "NYC-DC1"}}}})

# OPTIMAL: Filter depth 2 (uses local site filter)
interface_list(filters: {site: {name: {exact: "NYC-DC1"}}})
```

| Use Case | Recommendation |
|---|---|
| Related objects (2+ types) | GraphQL |
| Bulk writes with dependencies | Diode |
| Bulk create/update/delete | REST or Diode |
| Simple CRUD | REST |
| Flexible field selection | GraphQL |
| Reading/querying data | REST or GraphQL |
NetBox GraphQL uses offset-based pagination, which degrades at scale:
| Page | Offset | Performance |
|---|---|---|
| 1 | 0 | Fast |
| 100 | 9,900 | Slow |
| 1000 | 99,900 | Timeout risk |
Version-specific solutions:
| Version | Strategy |
|---|---|
| 4.4.x | Offset only (avoid deep pagination) |
| 4.5.x | ID range filtering workaround |
| 4.6.0+ | Cursor-based (start parameter) - #21110 |
4.5.x workaround - Use ID range filtering to emulate cursors:

```graphql
query GetDevices($minId: Int!, $limit: Int!) {
  device_list(limit: $limit, filters: { id__gte: $minId }) {
    id
    name
  }
}
```

Track `max(id) + 1` from each page as the next `$minId`.
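The page-advance loop can be sketched client-side as follows. `run_query` is a hypothetical callable that executes the GraphQL query with the given variables and returns the `device_list` payload:

```python
def paginate_devices(run_query, limit=100):
    """Yield every device by advancing min_id past each page's max id.

    `run_query(min_id, limit)` is assumed to return a list of dicts with
    at least an "id" key, ordered by id.
    """
    min_id = 0
    while True:
        page = run_query(min_id=min_id, limit=limit)
        if not page:
            return
        yield from page
        # Next "cursor": one past the largest id seen on this page.
        min_id = max(item["id"] for item in page) + 1
```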
Objects must be created in dependency order. Key sequence:
- Organization: Regions → Sites → Locations
- DCIM: Manufacturers → Device Types → Device Roles → Devices → Interfaces
- IPAM: RIRs → Aggregates → Prefixes → IP Addresses
Tip: Use Diode to avoid managing dependency order manually.
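If you do write directly over REST, the dependency order can be encoded explicitly as a creation plan. A minimal sketch, assuming a hypothetical `create(endpoint, payload)` callable (e.g. a thin wrapper around authenticated POSTs); all names, slugs, and payload fields here are illustrative:

```python
# Dependency-ordered creation plan: each step may reference objects
# created by earlier steps.
STEPS = [
    ("dcim/manufacturers/", {"name": "Cisco", "slug": "cisco"}),
    ("dcim/device-types/", {"model": "Catalyst 9300", "manufacturer": "cisco"}),
    ("dcim/sites/", {"name": "NYC-DC1", "slug": "nyc-dc1"}),
    ("dcim/devices/", {"name": "switch-nyc-01", "site": "nyc-dc1",
                       "device_type": "catalyst-9300", "role": "access-switch"}),
]

def apply_in_order(create, steps=STEPS):
    """Run create(endpoint, payload) for each step, earliest dependency first."""
    return [create(endpoint, payload) for endpoint, payload in steps]
```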
Site Hierarchy:

```
Region / Site Group
└── Site
    └── Location (recursive)
        └── Rack
            └── Device
```

IPAM Hierarchy:

```
RIR
└── Aggregate
    └── Prefix (recursive)
        └── IP Address
```
Region and Site Group are parallel groupings (both optional on a Site). Locations can be nested recursively.
```python
# Custom fields in create/update
data = {"name": "switch-01", "custom_fields": {"environment": "production"}}

# Filter by custom field
params = {"cf_environment": "production"}

# Tags for cross-cutting classification
data = {"name": "switch-01", "tags": [{"name": "pci-compliant"}]}
```

For data ingestion, use Diode instead of the direct API. Key benefits:
- No dependency order - Diode handles it automatically
- Reference by name - No ID lookups needed
- Auto-creates missing objects - Manufacturers, sites, roles, etc.
```python
from netboxlabs.diode.sdk import DiodeClient
from netboxlabs.diode.sdk.ingester import Device, Entity

with DiodeClient(
    target="https://<your-diode-host>:443/diode",
    app_name="network-discovery",
    app_version="1.0.0",
) as client:
    device = Device(
        name="switch-nyc-01",
        device_type="Cisco Catalyst 9300",  # By name!
        manufacturer="Cisco",               # Auto-created if missing
        site="NYC-DC1",                     # Auto-created if missing
        role="Access Switch",
    )
    response = client.ingest([Entity(device=device)])
```

Note on uniqueness: Name resolution varies by model—devices are unique by name + site, interfaces by name + device, etc.
Test ingestion without sending to server:
```python
from netboxlabs.diode.sdk import DiodeDryRunClient

with DiodeDryRunClient(app_name="my-app", output_dir="./test") as client:
    # Same API as DiodeClient, outputs to file instead
    client.ingest([Entity(device=device)])
```

| Scenario | Tool |
|---|---|
| Network discovery pushing data | Diode |
| Bulk migrations | Diode |
| Reading/querying data | REST/GraphQL |
| Single object CRUD | REST |
Requires the `netbox-branching` plugin.
| State | Description | Allowed Operations |
|---|---|---|
| `NEW` | Just created | None |
| `PROVISIONING` | Schema being created | None |
| `READY` | Ready for use | Read, write, sync, merge, revert |
| `SYNCING`/`MERGING` | Operation in progress | Read only |
| `MERGED` | Successfully merged | Read only (historical) |
```python
import requests
import time

NETBOX_URL = "https://netbox.example.com"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# 1. Create branch
branch = requests.post(
    f"{NETBOX_URL}/api/plugins/branching/branches/",
    headers=HEADERS,
    json={"name": "feature-update", "description": "Q1 updates"},
).json()
schema_id = branch["schema_id"]  # 8-char ID for X-NetBox-Branch header

# 2. Wait for READY state
while True:
    b = requests.get(
        f"{NETBOX_URL}/api/plugins/branching/branches/{branch['id']}/",
        headers=HEADERS,
    ).json()
    if b["status"]["value"] == "ready":
        break
    time.sleep(2)

# 3. Work in branch context
branch_headers = {**HEADERS, "X-NetBox-Branch": schema_id}
requests.post(f"{NETBOX_URL}/api/dcim/devices/", headers=branch_headers, json={...})

# 4. Merge (returns async Job)
job = requests.post(
    f"{NETBOX_URL}/api/plugins/branching/branches/{branch['id']}/merge/",
    headers=HEADERS,
    json={"commit": True},  # Use False for dry-run validation
).json()

# 5. Poll until complete
while True:
    j = requests.get(job["url"], headers=HEADERS).json()
    if j["status"]["value"] == "completed":
        break
    if j["status"]["value"] in ("errored", "failed"):
        raise RuntimeError("Merge failed")
    time.sleep(2)
```

| Concept | Details |
|---|---|
| Context header | X-NetBox-Branch: {schema_id} (8-char ID, NOT branch name or numeric ID) |
| Async operations | sync, merge, revert all return Job objects—poll for completion |
| Dry-run | All async ops accept {"commit": false} for validation |
| Job statuses | pending → running → completed (or errored/failed) |
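Since sync, merge, and revert all return Jobs, the poll-for-completion pattern is worth factoring out. A sketch; `fetch` is a hypothetical zero-argument callable that GETs the job URL and returns the parsed JSON:

```python
import time

def wait_for_job(fetch, timeout=300.0, interval=2.0):
    """Poll a NetBox async Job until it finishes.

    Returns the final job dict on "completed"; raises RuntimeError on
    "errored"/"failed" and TimeoutError if the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch()
        status = job["status"]["value"]
        if status == "completed":
            return job
        if status in ("errored", "failed"):
            raise RuntimeError(f"Job ended with status {status!r}")
        time.sleep(interval)
    raise TimeoutError("Job did not complete within the timeout")
```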
- Exclude config_context - Single biggest performance win for device queries
- Use brief mode - `?brief=True` for lists
- Avoid `q=` search - Use specific filters instead
| Resource | Endpoint |
|---|---|
| Devices | /api/dcim/devices/ |
| Interfaces | /api/dcim/interfaces/ |
| Sites | /api/dcim/sites/ |
| Prefixes | /api/ipam/prefixes/ |
| IP Addresses | /api/ipam/ip-addresses/ |
| VLANs | /api/ipam/vlans/ |
| Code | Meaning | Action |
|---|---|---|
| 200/201 | Success | Process response |
| 400 | Validation error | Check response body |
| 401 | Unauthorized | Check token |
| 403 | Forbidden | Check permissions |
| 429 | Rate limited | Backoff and retry |
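The 429 row above implies a backoff-and-retry loop. A minimal sketch; `send` is a hypothetical zero-argument callable that performs the request and returns a response object with a `status_code` attribute (as `requests` responses do):

```python
import time

def request_with_retry(send, max_retries=5, base_delay=1.0):
    """Retry on HTTP 429 with exponential backoff; return the last response."""
    for attempt in range(max_retries + 1):
        resp = send()
        if resp.status_code != 429:
            return resp
        if attempt < max_retries:
            # 429: back off 1s, 2s, 4s, ... before retrying
            time.sleep(base_delay * (2 ** attempt))
    return resp
```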
```python
nb.dcim.devices.all()                     # All (paginated iterator)
nb.dcim.devices.filter(site="nyc")        # Filter
nb.dcim.devices.get(name="switch-01")     # Single object
nb.dcim.devices.create(name="new", ...)   # Create
device.status = "active"; device.save()   # Update
device.delete()                           # Delete
```

Note: For detailed rationale, exceptions, and comprehensive examples, see the individual rule files in `references/rules/`.