SQL Server Connection Timeout Errors: ASP.NET Playbook
"System.Data.SqlClient.SqlException: Timeout expired." "Microsoft.Data.SqlClient.SqlException: A connection was successfully established with the server, but then an error occurred during the pre-login handshake." "The connection pool has been exhausted." These are the messages developers see when SQL Server connections fail at the worst possible time: in production, under load, with no obvious trigger. This is a structured diagnostic playbook for the seven causes that account for almost every real-world SQL Server connection timeout on ASP.NET workloads.
The four error messages, decoded
SQL Server connection failures surface as four distinct exception types in .NET. Each points to a different layer of the stack:
| Exception message contains | Failure layer | Likely cause |
| --- | --- | --- |
| Timeout expired (no other detail) | Query execution | Long-running query exceeded CommandTimeout |
| Connection Timeout Expired / pre-login handshake | TCP / network / authentication | Server unreachable, firewall, slow auth |
| The connection pool has been exhausted | Connection pool | Code is leaking connections (missing using) |
| Login failed for user | Authentication | Wrong credentials, locked account, password rotated |
Read the exception message carefully — it narrows the cause from 7+ possibilities down to 1-2 before you even start digging.
The 7 causes in order of frequency
1. Connection pool exhaustion from undisposed connections
The most common cause in modern code. Closing or disposing a SqlConnection doesn't actually close the underlying physical connection; it returns it to the connection pool, which has a default ceiling of 100 connections per connection string, per app instance. If your code opens connections without disposing them, they are never returned and the pool fills up. New requests then block waiting for a free connection, hit the 15-second connection timeout, and throw The connection pool has been exhausted.
// BAD — connection never returned to pool
var connection = new SqlConnection(connStr);
connection.Open();
var command = new SqlCommand("SELECT Id, Name FROM Users", connection);
var result = command.ExecuteReader();
// ... no Close or Dispose anywhere

// GOOD — using ensures disposal even when an exception is thrown
using var connection = new SqlConnection(connStr);
connection.Open();
using var command = new SqlCommand("SELECT Id, Name FROM Users", connection);
using var result = command.ExecuteReader();

// EVEN BETTER — let EF Core or Dapper manage the connection lifecycle
var users = await _db.Users.ToListAsync();
Diagnostic: Run this against your SQL Server during the spike:
SELECT
DB_NAME(database_id) AS database_name,
COUNT(*) AS connections,
program_name
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY database_id, program_name
ORDER BY connections DESC;
If you see hundreds of sessions from your app, most of them with a status of sleeping, you have a connection leak. EF Core 8/10 manages connections automatically — this almost always means raw SqlConnection code or Dapper calls that aren't wrapped in using.
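To confirm a leak rather than just count sessions, it can also help to look at how long each session has been idle. A sketch, assuming your connection string sets Application Name=YourApp.Web (a placeholder name; use whatever your app actually sets):

```sql
-- Sessions for one application, with how long each has sat idle.
-- 'YourApp.Web' is a placeholder for your connection string's Application Name,
-- which surfaces in sys.dm_exec_sessions.program_name.
SELECT session_id,
       status,          -- 'sleeping' = idle, but still holding a pool slot
       login_time,
       last_request_end_time,
       DATEDIFF(SECOND, last_request_end_time, GETDATE()) AS idle_seconds
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
  AND program_name = 'YourApp.Web'
ORDER BY idle_seconds DESC;
```

Dozens of sessions sleeping for minutes while the app is under load is the signature of undisposed connections.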
2. Long-running queries blocking the pool
One slow query holds a connection for 30 seconds. At 100 concurrent requests all running it, that's every one of the 100 pool slots tied up. Subsequent requests can't get a connection until something finishes.
// Look for missing indexes
SELECT TOP 10
qs.execution_count,
qs.total_elapsed_time / qs.execution_count AS avg_microseconds,
SUBSTRING(qt.text, qs.statement_start_offset/2 + 1,
(CASE WHEN qs.statement_end_offset = -1
THEN LEN(qt.text) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2 + 1) AS query
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY avg_microseconds DESC;
Top queries by average elapsed time tell you where the slowness lives. Add missing indexes, add .AsNoTracking() for read-only EF Core queries, project to DTOs instead of loading full entities.
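The last two tips can be sketched together in EF Core (AppDbContext, Users, IsActive, and UserSummary are illustrative names, not from a real schema):

```csharp
// Read-only list endpoint: skip change tracking and load only the columns needed.
// AppDbContext, Users, IsActive, and UserSummary are illustrative names.
public record UserSummary(int Id, string Name);

public async Task<List<UserSummary>> GetActiveUsersAsync(AppDbContext db)
{
    return await db.Users
        .AsNoTracking()                              // no change-tracking overhead
        .Where(u => u.IsActive)
        .Select(u => new UserSummary(u.Id, u.Name))  // DTO projection: smaller query, less data
        .ToListAsync();
}
```

The projection also lets SQL Server satisfy the query from a narrower index, which is often the difference between a 30-second scan and a millisecond seek.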
3. Network or firewall changes
Your app can't reach SQL Server at the TCP layer, so the connection times out at the pre-login handshake. Common causes:
- A firewall rule change blocked port 1433
- A VNet/VPC routing change broke the route
- A DNS change pointed the connection string at a stale address
- SQL Server's TCP listener was disabled (rare, but it happens after maintenance)
Diagnostic: From the app server, test raw TCP connectivity:
Test-NetConnection -ComputerName sql.your.host -Port 1433
If TCP fails, the problem is below SQL Server. Work with whoever changed the network last. On Adaptive Web Hosting plans, your hosting plan and its included SQL Server 2022 instance share the same VPC, so this failure mode is rare here.
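When PowerShell isn't available on the app server, a minimal raw TCP check can be sketched in C# instead (host and port are placeholders):

```csharp
using System.Net.Sockets;

// Raw TCP reachability check with an explicit timeout, equivalent in spirit
// to Test-NetConnection. "sql.your.host" and 1433 are placeholders.
using var client = new TcpClient();
var connect = client.ConnectAsync("sql.your.host", 1433);
var finished = await Task.WhenAny(connect, Task.Delay(TimeSpan.FromSeconds(5)));

if (finished != connect)
    Console.WriteLine("TCP connect timed out: problem is below SQL Server (network/firewall)");
else if (connect.IsFaulted)
    Console.WriteLine("TCP connect refused/failed: check port, listener, routing");
else
    Console.WriteLine("TCP reachable: look higher in the stack (auth, pool, query)");
```

A refused connection and a timed-out connection point at different layers: refused usually means the host is routable but nothing is listening on 1433; a timeout usually means a firewall is silently dropping packets.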
4. Wrong or short Connection Timeout
The default Connection Timeout in .NET is 15 seconds. For most workloads that's fine. But:
- The first connection of the day to a database that auto-pauses can take 30-60 seconds to wake
- Windows Authentication via Kerberos with bad DNS or an unreachable KDC takes a long time to fail
- SQL Server failover (HA pairs) can take 30+ seconds during a switchover
In production connection strings, set Connect Timeout=30 (or 60 for HA setups):
Server=sql.host.com;Database=app;User Id=app;Password=...;Connect Timeout=30;
This is the connection timeout. Don't confuse it with CommandTimeout, which is per-query and lives on the SqlCommand object.
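A minimal sketch of where each timeout lives (the connection string, table, and query are illustrative):

```csharp
using Microsoft.Data.SqlClient;

// Connect Timeout lives in the connection string and governs Open().
var connStr = "Server=sql.host.com;Database=app;User Id=app;Password=...;Connect Timeout=30;";
using var connection = new SqlConnection(connStr);
connection.Open();                            // fails after ~30 s if the server is unreachable

// CommandTimeout lives on the SqlCommand and governs query execution.
using var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection);
command.CommandTimeout = 60;                  // this query may run for up to 60 s
var count = (int)command.ExecuteScalar();
```

A query that runs for 45 seconds will never be rescued by raising Connect Timeout, and a server that takes 40 seconds to accept connections will never be rescued by raising CommandTimeout.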
5. SQL Server max connections reached
SQL Server has its own per-instance connection limit (default 32,767, but Express editions cap lower and shared SQL Server tiers often cap dramatically lower). If multiple apps share the instance and one leaks connections, your app pays the price too.
SELECT @@MAX_CONNECTIONS AS max_connections,
COUNT(*) AS current_connections
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
If current_connections is close to max_connections, the instance is full. On Adaptive Web Hosting plans, your SQL Server 2022 instance isn't shared with neighbour tenants in this way — the Developer plan ($9.49) and up each include a real, full SQL Server 2022 instance with the standard ~32k connection ceiling. Neighbour-tenant connection exhaustion isn't a failure mode here.
6. CommandTimeout too short for the workload
The .NET SqlCommand.CommandTimeout defaults to 30 seconds. For most OLTP queries that's overkill. For reporting queries, data migrations, or batch jobs that legitimately take 2-5 minutes, the default times out mid-query and you see Timeout expired.
// Per-query override for known-slow operations
command.CommandTimeout = 300; // 5 minutes
// EF Core 8/10 — per-DbContext
_db.Database.SetCommandTimeout(TimeSpan.FromMinutes(5));
// EF Core 8/10 — global default in OnConfiguring
optionsBuilder.UseSqlServer(connStr, options =>
options.CommandTimeout(60));
Setting CommandTimeout globally to a large value is usually wrong — it lets slow queries silently hold connections. Set the default to 30-60 seconds and override per-call for known-slow operations.
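Dapper callers get the same per-call override through Dapper's optional commandTimeout parameter; a sketch (ReportRow and the query are illustrative):

```csharp
using Dapper;
using Microsoft.Data.SqlClient;

// ReportRow and dbo.MonthlyRollup are illustrative names.
public record ReportRow(DateTime Month, decimal Total);

public async Task<IEnumerable<ReportRow>> GetRollupAsync(string connStr)
{
    await using var connection = new SqlConnection(connStr);
    return await connection.QueryAsync<ReportRow>(
        "SELECT Month, Total FROM dbo.MonthlyRollup",  // known-slow reporting query
        commandTimeout: 300);                          // 5 minutes, for this call only
}
```

This keeps the generous timeout scoped to the one query that needs it, while everything else keeps the tighter default.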
7. Authentication delays
Windows Authentication via Kerberos involves the application server contacting a Domain Controller for a service ticket. If DNS, the KDC, or the SPN configuration is broken, the auth handshake hangs. The connection times out at pre-login. Switch to SQL Authentication for clarity during diagnosis — if SQL Auth works and Windows Auth doesn't, the problem is in Kerberos/AD, not SQL Server.
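The A/B test is cheap because the two modes differ only in the connection string (host and credentials are placeholders):

```
Windows Authentication (depends on DNS, the KDC, and SPNs being healthy):
Server=sql.host.com;Database=app;Integrated Security=true;Encrypt=true;

SQL Authentication (no Kerberos involved):
Server=sql.host.com;Database=app;User Id=app_user;Password=...;Encrypt=true;
```

If the second string connects instantly and the first hangs, the problem is squarely in Kerberos/AD.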
The diagnostic flow we recommend
1. Read the exception message. "Connection pool exhausted" vs "Timeout expired" vs "pre-login handshake" tells you which third of this list to investigate.
2. Run sp_who2 or query sys.dm_exec_sessions during the failure. The count of sessions per app plus their status (running vs sleeping) tells you whether the connections exist and are stuck, or whether they can't even open.
3. Test TCP connectivity from the app server with Test-NetConnection. A failure below TCP means a network problem.
4. Check the top-N expensive queries via sys.dm_exec_query_stats. One bad query holding connections for 30 seconds will exhaust a pool of 100 at modest concurrency.
5. Check connection-string timeouts. Connect Timeout governs the initial connection; CommandTimeout governs the query. They're different settings; misconfiguring one doesn't fix the other.
6. Switch to SQL Auth temporarily if you suspect Kerberos issues. Reverting to Windows Auth after diagnosis is a one-line config change.
Production-ready connection string
For an ASP.NET Core app talking to SQL Server 2022, this is a sane production-ready default:
Server=sql.your.host;
Database=YourApp;
User Id=app_user;
Password=...;
Connect Timeout=30;
Encrypt=true;
TrustServerCertificate=false;
Application Name=YourApp.Web;
MultipleActiveResultSets=False;
Key choices:
- Connect Timeout=30 — survives slow-warming databases and HA failovers
- Encrypt=true — .NET 8/10 defaults to encryption; making it explicit prevents downgrade
- Application Name=YourApp.Web — this string appears in sys.dm_exec_sessions, making it obvious which app is leaking connections
- MultipleActiveResultSets=False — MARS adds overhead; EF Core 8/10 doesn't need it; leave it off unless your code specifically requires it
EF Core 8/10 patterns that prevent these failures
EF Core's DbContext handles connection lifecycle automatically — if you use it correctly, connection-pool issues largely disappear:
// In Program.cs — scoped DbContext, registered correctly
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(connStr, sqlOpt => sqlOpt.CommandTimeout(30)));

// In a controller or service — DbContext is scoped to the request
public class UsersController(AppDbContext db)
{
    public async Task<User?> Get(int id)
    {
        // Connection opened, query runs, connection returned to pool —
        // all handled by EF Core. No leaks possible.
        return await db.Users.FirstOrDefaultAsync(u => u.Id == id);
    }
}
Two patterns to avoid:
- Singleton DbContext — a single DbContext shared across requests serializes all DB access through one connection. Not strictly a connection leak, but it limits throughput to one query at a time.
- Manual db.Database.OpenConnectionAsync() without a matching close — pins the connection to the DbContext for its entire lifetime.
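For code that genuinely is a singleton, such as a hosted background service, one pattern that avoids the shared-DbContext trap is a context factory. A sketch, with CleanupService and the entity names as hypothetical examples:

```csharp
// Program.cs: register a factory instead of (or alongside) the scoped DbContext.
builder.Services.AddDbContextFactory<AppDbContext>(options =>
    options.UseSqlServer(connStr));

// A singleton background service creates a short-lived context per unit of work;
// the connection returns to the pool when the context is disposed.
// CleanupService, Users, and IsActive are illustrative names.
public class CleanupService(IDbContextFactory<AppDbContext> factory) : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            await using var db = await factory.CreateDbContextAsync(ct);
            await db.Users.Where(u => !u.IsActive).ExecuteDeleteAsync(ct);
            await Task.Delay(TimeSpan.FromHours(1), ct);
        }
    }
}
```

Each loop iteration borrows a connection for milliseconds instead of pinning one for the process lifetime.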
How Adaptive Web Hosting's SQL Server addresses these
The most common platform-level causes of SQL Server connection timeouts on shared hosting are noisy neighbour exhaustion (other tenants leaking connections) and SQL Server Express edition limits (a 10 GB per-database cap, roughly 1.4 GB of buffer-pool memory, and at most the lesser of one socket or four cores).
Adaptive Web Hosting's ASP.NET Core plans address both:
- Real Microsoft SQL Server 2022 on every plan — not Express. Full memory and CPU; no 10 GB database cap below normal SQL Server limits
- Predictable per-plan resource allocation — your SQL Server time isn't shared with neighbour tenants in a way that causes connection exhaustion outside your control
- SQL Server Management Studio access — remote SSMS connections work directly, so you can run the diagnostic queries from this article against production
- Plesk for Windows IIS log viewer — surfaces SQL exceptions in your app logs alongside IIS request logs for cross-referencing
Frequently asked questions
How can connection pool exhaustion happen if I use EF Core?
It usually means you have raw SqlConnection code or Dapper calls alongside EF Core that aren't wrapped in using. EF Core itself is fine. Search your codebase for new SqlConnection( and audit each one for a corresponding Dispose or using.
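A quick way to run that audit from a shell. This is a minimal self-contained demo; in a real codebase you'd run the grep from the repository root instead of the throwaway "demo" directory:

```shell
# Minimal demo: create a throwaway .cs file, then run the audit.
# In a real codebase, point the grep at the repository root instead of "demo".
mkdir -p demo
printf 'var conn = new SqlConnection(connStr); // no using\n' > demo/Repo.cs
grep -rn --include='*.cs' 'new SqlConnection(' demo
```

Every hit needs a matching using statement or an explicit Dispose on every code path, including exception paths.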
Should I increase the pool size beyond 100?
Almost always no. Pool size 100 means your app can handle 100 concurrent in-flight queries. If you legitimately need more than that, you have either a query-performance problem (queries should finish in milliseconds, not seconds) or you need a separate database server. Raising the pool size to 500 masks the underlying problem.
What about Always Encrypted columns — do those affect connection timeouts?
Yes — Always Encrypted adds an extra round-trip to the column encryption key store on first connection. If the key store (Azure Key Vault, Windows Cert Store) is slow or unreachable, the initial connection slows accordingly. Cache the column key data with SqlConnection.RegisterColumnEncryptionKeyStoreProviders in Program.cs so the lookup happens once at startup.
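A sketch of that registration, assuming the Microsoft.Data.SqlClient.AlwaysEncrypted.AzureKeyVaultProvider and Azure.Identity NuGet packages:

```csharp
using Azure.Identity;
using Microsoft.Data.SqlClient;
using Microsoft.Data.SqlClient.AlwaysEncrypted.AzureKeyVaultProvider;

// Register the Azure Key Vault key store provider once at startup.
// RegisterColumnEncryptionKeyStoreProviders may only be called once per process;
// after that, decrypted column encryption keys are cached by the provider.
var akvProvider = new SqlColumnEncryptionAzureKeyVaultProvider(new DefaultAzureCredential());
SqlConnection.RegisterColumnEncryptionKeyStoreProviders(
    new Dictionary<string, SqlColumnEncryptionKeyStoreProvider>
    {
        [SqlColumnEncryptionAzureKeyVaultProvider.ProviderName] = akvProvider
    });
```

With the provider registered up front, the key-store round-trip happens at startup rather than on a user-facing request's first connection.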
Does Adaptive Web Hosting limit my database size?
Each plan has a generous storage allocation rather than a strict SQL Server database size cap below normal SQL Server limits. The Developer plan ($9.49) suits databases up to ~30 GB; Business ($17.49) up to ~50 GB; Professional ($27.49) up to ~200 GB. None of these tiers force you onto SQL Server Express — you always get real SQL Server 2022.
Why does my Azure SQL Database hit connection limits even on a small DTU plan?
Azure SQL Database tiers cap concurrency tightly — the Basic tier allows only around 30 concurrent workers, and the low Standard tiers aren't much more generous. If your app is hitting those limits, you're either leaking connections or you need a larger Azure SQL tier. The included SQL Server 2022 instance on Adaptive Web Hosting plans uses the standard SQL Server connection ceiling (~32k) instead.
How do I tell whether a query is slow because of a missing index or because of network latency?
Run the query in SQL Server Management Studio with SET STATISTICS TIME ON and SET STATISTICS IO ON. CPU time + logical reads tell you whether the query plan is the problem. If those are tiny but your app takes seconds to get the result, network or driver overhead is the issue — check connection pool reuse, MARS settings, and TLS handshake performance.
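A minimal sketch of that workflow in SSMS (the query and table are illustrative):

```sql
-- Run in SSMS with the Messages tab open; dbo.Users and the predicate are illustrative.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT Id, Name
FROM dbo.Users
WHERE Email = 'someone@example.com';

-- Messages tab: high 'CPU time' or 'logical reads' -> bad plan or missing index.
-- Tiny numbers here but seconds in the app    -> network, pooling, or driver overhead.
```

The logical-reads figure is the more reliable signal: it is deterministic for a given plan, where CPU time varies with server load.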
Will switching to Dapper fix my connection timeouts?
Only if your timeouts are caused by EF Core change-tracking overhead, which is rare. Dapper is faster per-query than EF Core, but the connection lifecycle and pool behaviour are identical — both libraries return connections to the same pool. If your timeouts come from leaking connections, leak fixes apply regardless of ORM.
Bottom line
SQL Server connection timeouts in ASP.NET apps trace to one of seven causes: connection-pool leaks, long queries holding the pool, network/firewall failures, too-short Connect Timeout, SQL Server max-connections, too-short CommandTimeout, or Kerberos auth hangs. The exception message narrows it from 7 down to 1-2; the diagnostic queries above pinpoint exactly which.
On Adaptive Web Hosting's ASP.NET Core plans, the included SQL Server 2022 (not Express) and per-plan resource isolation eliminate the platform-level causes — the rest is in your application code and connection-string configuration. Every plan includes a 30-day money-back guarantee. View hosting plans or talk to an ASP.NET expert.