Error 216: QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING
This error occurs when you attempt to execute a query with a query_id that is already in use by a currently running query.
ClickHouse enforces unique query IDs to prevent duplicate execution and enable proper query tracking, cancellation, and monitoring.
Quick reference
What you'll see: an exception similar to `Code: 216. DB::Exception: Query with id = '<query_id>' is already running. (QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING)`
Most common causes:
- Reusing the same static `query_id` for multiple concurrent queries
- Retry logic that doesn't regenerate the `query_id`
- Insufficient randomness in multi-threaded ID generation
- Known bug in ClickHouse 25.5.1 (queries execute twice internally)
- Previous query still running when a retry is attempted
Quick fixes:
For application code:
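For example, a minimal sketch using the clickhouse-driver Python client (any client that lets you set `query_id` works the same way; connection details are illustrative):

```python
import uuid
from clickhouse_driver import Client

client = Client(host='localhost')  # illustrative connection details

# Generate a fresh, unique query_id for every execution -- never reuse one
client.execute('SELECT 1', query_id=str(uuid.uuid4()))
```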
Most common causes
- Reusing static query IDs in application code
  - Hardcoded query IDs like `'my-query'` or `'daily-report'`
  - Using the same ID for multiple concurrent requests
  - Application frameworks generating non-unique IDs
  - Pattern: `query_id = 'app-name-' + request_type` without a unique component
- Client retry logic without ID regeneration
  - Automatic retry on network timeout reusing the same `query_id`
  - Previous query still running when the retry is attempted
  - Connection pools executing queries with duplicate IDs
  - Load balancers distributing the same request to multiple servers
- Insufficient randomness in multi-threaded applications
  - Using `UUID + ":" + random(0, 100)` doesn't provide enough uniqueness
  - Timestamp-based IDs without sufficient precision (seconds instead of nanoseconds)
  - Multiple threads generating IDs simultaneously without proper coordination
  - Example that fails: `query_id = f"{uuid.uuid4()}:{random.randint(0, 100)}"`
- Version-specific regression (25.5.1)
  - ClickHouse 25.5.1 has a critical bug where queries execute twice internally
  - A single client request results in two `executeQuery` log entries milliseconds apart
  - The first execution succeeds, the second fails with error 216
  - Affects almost all queries with a custom `query_id` in 25.5.1
  - Workaround: downgrade to 25.4.5 or wait for a fix
- Long-running queries not cleaned up
  - Previous query with the same ID still in `system.processes`
  - Query appears completed on the client side but the server is still processing it
  - Network interruptions leaving queries in a limbo state
  - Queries waiting on locks or merges
- Distributed query complexity
  - Query coordinator using the same ID for multiple nodes
  - Retry on a different replica with the same `query_id`
  - Cross-cluster queries not properly cleaned up
- Misunderstanding the purpose of query_id
  - Attempting to use `query_id` as an idempotency key
  - Expecting ClickHouse to deduplicate based on `query_id`
  - Using `query_id` to prevent duplicate inserts (this doesn't work)
Common solutions
1. Generate truly unique query IDs
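A sketch in Python: `uuid.uuid4()` alone is sufficient, and an optional prefix just makes IDs easier to find in logs.

```python
import uuid

def make_query_id(prefix: str = 'app') -> str:
    # uuid4 carries 122 random bits, so collisions are practically
    # impossible even at thousands of queries per second
    return f"{prefix}-{uuid.uuid4()}"

query_id = make_query_id('daily-report')
```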
2. Implement proper retry logic
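A sketch of retry logic that mints a new ID on every attempt (connection details and backoff policy are illustrative):

```python
import time
import uuid
from clickhouse_driver import Client
from clickhouse_driver.errors import Error as ClickHouseError

client = Client(host='localhost')

def run_with_retry(sql: str, max_attempts: int = 3):
    for attempt in range(1, max_attempts + 1):
        # New query_id on EVERY attempt, including retries of the same SQL
        query_id = str(uuid.uuid4())
        try:
            return client.execute(sql, query_id=query_id)
        except ClickHouseError:
            if attempt == max_attempts:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff
```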
3. Check if query is still running before retry
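Using the same client, ask `system.processes` whether the earlier execution is still alive before deciding how to retry; a sketch:

```python
def is_query_running(client, query_id: str) -> bool:
    # system.processes lists the queries currently executing on this server
    rows = client.execute(
        'SELECT count() FROM system.processes WHERE query_id = %(qid)s',
        {'qid': query_id},
    )
    return rows[0][0] > 0
```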
4. Kill stuck queries before retry
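`KILL QUERY` is asynchronous, so the sketch below kills and then waits for the ID to disappear, reusing `is_query_running` from the previous step (the cluster name in the comment is a placeholder):

```python
import time

def kill_stuck_query(client, query_id: str, timeout: float = 30.0) -> None:
    # Single node; in distributed setups prefer the ON CLUSTER variant, e.g.
    #   KILL QUERY ON CLUSTER my_cluster WHERE query_id = '...'
    client.execute('KILL QUERY WHERE query_id = %(qid)s', {'qid': query_id})
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not is_query_running(client, query_id):
            return
        time.sleep(0.5)
    raise TimeoutError(f'Query {query_id} did not terminate within {timeout}s')
```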
5. Don't use query_id for idempotency
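A reused `query_id` never deduplicates work; it only raises error 216 when the first execution happens to still be running. For idempotent inserts, rely on ClickHouse's insert deduplication instead, for example the `insert_deduplication_token` setting (a sketch; the table name is illustrative and the setting requires a server version that supports it):

```python
# Passing the same token for logically identical inserts lets the server
# drop the duplicate block -- something a reused query_id cannot do
client.execute(
    'INSERT INTO events (id, payload) VALUES',
    [(1, 'hello')],
    settings={'insert_deduplication_token': 'event-batch-0001'},
)
```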
6. Workaround for 25.5.1 regression
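Per the cause described above, the 25.5.1 bug affects queries that set a custom `query_id`. If you cannot downgrade to 25.4.5 right away, omitting the custom ID and letting the server assign one should sidestep the collision, at the cost of client-side tracking:

```python
# On 25.5.1: omit query_id and let the server auto-generate a unique one;
# you can still find the query later in system.query_log by its text
client.execute('SELECT 1')
```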
Prevention tips
- Always use UUIDs for query_id: Never use predictable or static query IDs. Use UUID4 (random) or UUID1 (timestamp-based with MAC address).
- Generate a new query_id for every execution: Even when retrying the exact same query, generate a fresh `query_id`.
- Understand the query_id purpose: It's for monitoring, tracking, and cancellation, NOT for idempotency or deduplication.
- Avoid 25.5.1: If you're on ClickHouse 25.5.1 and experiencing this error frequently, downgrade to 25.4.5 or wait for 25.5.2+.
- Test concurrent execution: Ensure your ID generation strategy produces unique IDs under high concurrency (1000+ queries/second).
- Use KILL QUERY ON CLUSTER: In distributed setups, always use the `ON CLUSTER` variant to kill queries on all nodes.
- Monitor query cleanup: Set up alerts for queries stuck in `system.processes` for more than 5 minutes.
- Implement a proper ID structure:
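For example (purely illustrative), combine a human-readable prefix with the pieces that guarantee uniqueness:

```python
import os
import time
import uuid

def structured_query_id(service: str, purpose: str) -> str:
    # <service>-<purpose>-<pid>-<ns timestamp>-<uuid4>: the UUID alone
    # guarantees uniqueness; everything else is for humans reading logs
    return f"{service}-{purpose}-{os.getpid()}-{time.time_ns()}-{uuid.uuid4()}"
```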
Debugging steps
1. Check if query is actually running
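Query `system.processes` for the conflicting ID (raw SQL shown inside the same Python client used above):

```python
rows = client.execute(
    '''
    SELECT query_id, user, elapsed, query
    FROM system.processes
    WHERE query_id = %(qid)s
    ''',
    {'qid': 'the-conflicting-id'},  # placeholder
)
print(rows)  # non-empty means the earlier execution really is still running
```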
2. Check query execution history
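`system.query_log` (if query logging is enabled on the server) shows every start, finish, and exception for that ID:

```python
rows = client.execute(
    '''
    SELECT event_time, type, exception_code
    FROM system.query_log
    WHERE query_id = %(qid)s
    ORDER BY event_time
    ''',
    {'qid': 'the-conflicting-id'},  # placeholder
)
```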
3. Investigate 25.5.1 regression pattern
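To spot the double-execution signature described above (two `QueryStart` entries for one client request, milliseconds apart), group today's log by ID; a sketch:

```python
rows = client.execute(
    '''
    SELECT query_id,
           count() AS starts,
           min(event_time_microseconds) AS first_start,
           max(event_time_microseconds) AS last_start
    FROM system.query_log
    WHERE type = 'QueryStart' AND event_date = today()
    GROUP BY query_id
    HAVING starts > 1
    ORDER BY starts DESC
    LIMIT 20
    '''
)
```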
4. Find duplicate query_id patterns
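Aggregating the error-216 exceptions shows which IDs collide and how often, which usually exposes a static or templated ID in application code:

```python
rows = client.execute(
    '''
    SELECT query_id, count() AS collisions, any(query) AS sample_query
    FROM system.query_log
    WHERE exception_code = 216 AND event_date >= today() - 1
    GROUP BY query_id
    ORDER BY collisions DESC
    LIMIT 20
    '''
)
```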
5. Check for stuck queries
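Finally, list long-running entries in `system.processes`; the 300-second threshold matches the monitoring tip above:

```python
rows = client.execute(
    '''
    SELECT query_id, user, elapsed, substring(query, 1, 100) AS query_head
    FROM system.processes
    WHERE elapsed > 300
    ORDER BY elapsed DESC
    '''
)
```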
When query_id is useful
Despite the limitations, query_id is valuable for:
1. Query tracking and correlation
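For example, log the ID you generated next to your own request ID so you can later join application logs against `system.query_log` (logger setup assumed, table name illustrative):

```python
import logging
import uuid

log = logging.getLogger('app')

qid = str(uuid.uuid4())
log.info('running events count, query_id=%s', qid)
client.execute('SELECT count() FROM events', query_id=qid)
```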
2. Selective query cancellation
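Because you chose the ID, you can cancel exactly that query and nothing else:

```python
# qid is the UUID we assigned above, so inlining it is safe here
client.execute(f"KILL QUERY WHERE query_id = '{qid}'")
```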
3. Performance analysis over time
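With a stable purpose prefix (as in the structured-ID sketch above; the prefix here is illustrative), one logical query's latency can be tracked over time:

```python
rows = client.execute(
    '''
    SELECT toStartOfHour(event_time) AS hour, avg(query_duration_ms) AS avg_ms
    FROM system.query_log
    WHERE type = 'QueryFinish' AND query_id LIKE 'billing-daily-report-%'
    GROUP BY hour
    ORDER BY hour
    '''
)
```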
4. Distributed tracing integration
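A common integration pattern (an application convention, not a ClickHouse feature) is to derive the ID from the active trace span so database work shows up in your traces:

```python
import uuid

# Hypothetical values: take these from your tracing system's active span
trace_id, span_id = '4bf92f3577b34da6', '00f067aa0ba902b7'

# Keep a uuid4 suffix so retries of the same span still get unique IDs
qid = f"{trace_id}-{span_id}-{uuid.uuid4()}"
client.execute('SELECT 1', query_id=qid)
```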
Related error codes
- Error 202: `TOO_MANY_SIMULTANEOUS_QUERIES` - Concurrent query limit exceeded (often seen together)
- Error 394: `QUERY_WAS_CANCELLED` - Query cancelled via KILL QUERY