Diagnosing High CPU on .NET Apps in IIS

Your ASP.NET Core app worked fine in staging. In production it occasionally pegs CPU at 100%, requests pile up in the IIS queue, and users see timeouts. By the time you RDP in, the worker process has recovered — or worse, IIS recycled it and you've lost the diagnostic state. This guide walks through the eight causes that account for almost every real-world high-CPU incident on Windows-hosted .NET apps, the tools that diagnose each, and the production-ready configuration that prevents them.

The shape of a high-CPU incident

On Windows-hosted .NET apps, "high CPU" almost always means one of three patterns:

Sustained 100% CPU on the w3wp.exe worker process — a hot code path or runaway loop is saturating threads

CPU spikes correlated with specific requests — one endpoint is expensive (slow query, large serialization, regex catastrophe)

Periodic CPU spikes with no correlated traffic — garbage collection, background tasks, or app-pool recycles

The diagnostic approach is the same regardless: capture the state while the spike is happening, identify the responsible call stack, then fix the underlying pattern.

The 8 most common causes

  • Synchronous I/O calls blocking the thread pool

The most common cause we see. A developer calls .Result or .GetAwaiter().GetResult() on an async method instead of awaiting it. Under load, ASP.NET Core's thread pool fills with threads blocked waiting for async I/O that hasn't completed. The runtime injects more threads, those block too, and CPU climbs as the thread pool churns. Eventually you hit thread starvation, requests pile up in the IIS queue, and response times collapse.

// BAD — blocks the thread, exhausts thread pool under load
var user = _userService.GetUserAsync(id).Result;

// BAD — same thing, different syntax
var user = _userService.GetUserAsync(id).GetAwaiter().GetResult();

// GOOD — releases the thread while waiting
var user = await _userService.GetUserAsync(id);

Diagnostic: Run dotnet-counters monitor -n YourApp and watch ThreadPool Queue Length. If it climbs past 100 and stays there under load, you have sync-over-async blocking somewhere.

  • N+1 Entity Framework Core queries

A loop that issues one database query per iteration. EF Core only lazy-loads navigations if you opt in to proxies, so one classic source of N+1 is off by default — but ad-hoc queries inside loops still appear constantly.

// BAD — N+1: one query per Order
foreach (var order in orders)
{
    order.LineItems = _db.LineItems
        .Where(l => l.OrderId == order.Id)
        .ToList();
}

// GOOD — single query with .Include()
var orders = _db.Orders
    .Include(o => o.LineItems)
    .ToList();

For a list of 200 orders, that's 201 round-trips (one for the orders, then one per order) instead of one. SQL Server answers each lookup in microseconds; the network round-trips and EF Core's per-query materialisation overhead dominate, and CPU climbs as the worker repeats the whole query pipeline 200 times.

Diagnostic: Enable EF Core query logging in appsettings.Development.json and watch for repeated identical queries:

"Logging": {

"LogLevel": {

"Microsoft.EntityFrameworkCore.Database.Command": "Information"

}

}

  • JSON serialization on hot paths without caching

System.Text.Json is fast, but calling JsonSerializer.Serialize with a fresh JsonSerializerOptions object on every call rebuilds the serialization metadata each time. On a hot API endpoint handling 10,000 requests per second, that rebuild can dominate CPU.

// BAD — new options on every request, rebuilds metadata each time
var json = JsonSerializer.Serialize(data, new JsonSerializerOptions
{
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase
});

// GOOD — static cached options
private static readonly JsonSerializerOptions _options = new()
{
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase
};

var json = JsonSerializer.Serialize(data, _options);

For ASP.NET Core controllers, builder.Services.AddControllers().AddJsonOptions(...) already caches its options, and minimal APIs use the options registered via ConfigureHttpJsonOptions. The problem appears wherever you call JsonSerializer manually — in middleware, background services, or hand-rolled responses.
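If you need camelCase globally, register it once at startup instead of per call. A minimal sketch, assuming the standard WebApplication builder:

// Controllers: the framework caches these options for you
builder.Services.AddControllers()
    .AddJsonOptions(o =>
        o.JsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase);

// Minimal APIs read their serializer settings from here
builder.Services.ConfigureHttpJsonOptions(o =>
    o.SerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase);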

  • Excessive logging on hot paths

String interpolation inside _logger.LogDebug($"...") is always evaluated, even when the Debug level is disabled. On a hot path that runs 10,000 times per second, formatting strings that nobody reads burns CPU.

// BAD — string interpolated even if Debug logging is off
_logger.LogDebug($"Processed item {item.Id} in {elapsed.TotalMilliseconds}ms");

// GOOD — message template; arguments only formatted if level enabled
_logger.LogDebug("Processed item {ItemId} in {Elapsed}ms",
    item.Id, elapsed.TotalMilliseconds);

Always use the structured-logging message template form. The .NET 8/10 logger pipeline skips formatting entirely when the log level isn't active.
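For the very hottest paths, the LoggerMessage source generator (available since .NET 6) goes a step further and skips even the params array allocation of the template form. A minimal sketch, assuming a static logging class:

public static partial class Log
{
    [LoggerMessage(Level = LogLevel.Debug,
        Message = "Processed item {ItemId} in {Elapsed}ms")]
    public static partial void ProcessedItem(
        ILogger logger, int itemId, double elapsed);
}

// Call site: no allocation, no formatting when Debug is off
Log.ProcessedItem(_logger, item.Id, elapsed.TotalMilliseconds);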

  • Regex catastrophic backtracking

A poorly written regex with nested quantifiers can take exponential time on certain inputs. The classic example: (a+)+$ against the input "aaaaaaaaaaaaaaaaaaaab" can take minutes to fail to match. Under attacker-controlled input (a public form, API endpoint, search box) this is a denial-of-service vector that pegs CPU.

Mitigation: set a process-wide default regex match timeout (this works on all supported .NET versions, not just .NET 7+). It only applies to Regex instances created after it's set, so do it first thing in startup:

// In Program.cs, before any Regex is constructed
AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT",
    TimeSpan.FromMilliseconds(200));

Any regex created without an explicit timeout now throws RegexMatchTimeoutException after 200ms instead of running unbounded.
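You can also set the timeout per instance, and on .NET 7+ opt in to the non-backtracking engine, which guarantees linear-time matching (at the cost of a few unsupported constructs such as backreferences):

// Per-instance timeout, overrides the process-wide default
var re = new Regex(@"(a+)+$", RegexOptions.None,
    TimeSpan.FromMilliseconds(200));

// .NET 7+: linear-time engine, catastrophic backtracking is impossible
var safe = new Regex(@"(a+)+$", RegexOptions.NonBacktracking);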

  • Tight retry loops with no backoff

External API call fails. Your retry policy fires immediately. The remote service is still down, so it fails again. Your code retries again. Now you're hitting the failed service hundreds of times per second per request, burning CPU and bandwidth.

// BAD — no backoff, retries instantly
for (int i = 0; i < 100; i++) {
    var result = await _api.CallAsync();
    if (result.IsSuccess) break;
}

// GOOD — exponential backoff with Polly
var policy = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(5, attempt =>
        TimeSpan.FromSeconds(Math.Pow(2, attempt)));

Use Polly, the Polly v8 resilience pipelines, or the Microsoft.Extensions.Http.Resilience package for HttpClient. Never retry without exponential backoff in production.
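With Polly v8's resilience pipelines, the equivalent policy looks like this (a minimal sketch, assuming the Polly.Core package):

var pipeline = new ResiliencePipelineBuilder()
    .AddRetry(new RetryStrategyOptions
    {
        ShouldHandle = new PredicateBuilder().Handle<HttpRequestException>(),
        MaxRetryAttempts = 5,
        Delay = TimeSpan.FromSeconds(1),
        BackoffType = DelayBackoffType.Exponential
    })
    .Build();

var result = await pipeline.ExecuteAsync(async ct => await _api.CallAsync());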

  • Garbage collection thrashing

Allocation-heavy code paths trigger frequent Gen2 garbage collections. Each Gen2 GC pauses the worker for tens to hundreds of milliseconds. Under load, the worker spends more time GC'ing than serving requests, and CPU appears pegged.

Diagnostic: dotnet-counters monitor -n YourApp shows % Time in GC. Sustained values above 10% mean GC pressure is real. Look at:

String concatenation in loops — use StringBuilder or string.Concat (see the sketch after this list)

LINQ query allocations in hot paths — for-loops are sometimes worth it

Boxing of value types passed as object

Large object allocations (85,000 bytes or more) — these go straight to the Large Object Heap, which is only collected during Gen2
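As flagged in the first item above, the string-concatenation fix, as a minimal sketch (items is a hypothetical collection):

// BAD: each += copies the whole string so far, quadratic work
var csv = "";
foreach (var item in items)
    csv += item.Name + ",";

// GOOD: one growable buffer, one final string
var sb = new StringBuilder();
foreach (var item in items)
    sb.Append(item.Name).Append(',');
var csv2 = sb.ToString();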

  • Application pool with bad recycle settings

If your IIS app pool recycles on configuration changes or has a too-short "Regular time interval" (the default is 1740 minutes = 29 hours), the worker recycles far more often than it should. Each recycle spins up a new worker that JIT-compiles everything and re-warms caches — all of that work is CPU. If recycles are frequent, the worker is permanently in startup mode.

Diagnostic: Event Viewer → Windows Logs → System → filter on Source = "WAS". You'll see every worker recycle event with the reason field.

On Adaptive Web Hosting's plans, IIS app pool recycle settings are tuned for typical .NET workloads — long recycle intervals, no file-change recycling, and the worker has dedicated memory headroom so memory-pressure recycles are rare.

Diagnostic flow we recommend

When you're staring at a high-CPU production incident, work through these in order:

1. Capture a memory dump while the spike is happening. On Windows: procdump -ma w3wp.exe or right-click the process in Task Manager → Create Dump File. Analyse with Visual Studio's dump analysis or WinDbg later.

2. Run dotnet-counters monitor -n YourApp — watch ThreadPool Queue Length, % Time in GC, and Requests/sec. The pattern usually tells you which cause group you're in.

3. Run dotnet-trace collect -n YourApp --duration 00:00:30 during the spike. Open the resulting .nettrace file in PerfView or Visual Studio's profiler to see which methods are burning CPU.

4. Check IIS Failed Request Tracing for requests taking >1s — the same endpoint often appears repeatedly when one slow path is the problem.

5. Check Event Viewer for app pool recycles in the same time window as the spike.

6. Cross-reference with EF Core query logs (if enabled), Application Insights (or equivalent APM), and your structured-logging pipeline.

Production-ready settings for stable CPU

In Program.cs:

// 200ms regex timeout — kills runaway patterns
AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT",
    TimeSpan.FromMilliseconds(200));

// HTTP client with sane defaults + resilience
builder.Services.AddHttpClient<ExternalApi>()
    .AddStandardResilienceHandler(); // exponential backoff included

// Server GC for production (the ASP.NET Core default, but verify)
// In .csproj: <ServerGarbageCollection>true</ServerGarbageCollection>

// Don't materialise full EF Core entities for read-only queries
// Use AsNoTracking() + projection to anonymous types or DTOs
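That last point in code, as a minimal sketch (OrderSummaryDto is a hypothetical read model):

// Read-only list query: no change tracking, only the columns you need
var summaries = await _db.Orders
    .AsNoTracking()
    .Select(o => new OrderSummaryDto
    {
        Id = o.Id,
        Total = o.LineItems.Sum(l => l.Price)
    })
    .ToListAsync();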

In your IIS app pool settings (or Plesk for Windows on Adaptive Web Hosting):

Idle Time-out: 0 (don't auto-suspend — first-request cold-start is painful)

Regular Time Interval (recycle): 0 OR ≥24h (avoid mid-day recycles)

Specific Times: empty (don't recycle at fixed times)

Disable Recycling for Configuration Changes: True where possible (config file edits shouldn't recycle the production worker)

How Adaptive Web Hosting's infrastructure helps

The two most common causes of mystery CPU spikes on shared Windows hosting are neighbour-tenant contention (someone else's site eating your CPU budget) and aggressive app pool recycle settings (defaults tuned for old IIS environments). Adaptive Web Hosting's ASP.NET Core plans address both:

Dedicated IIS Application Pools per site on every plan — your worker process has its own predictable CPU and memory budget; neighbour tenants can't steal your cycles

Production-tuned app pool recycle settings — long regular-time intervals, no file-change recycle, generous memory ceilings for the plan tier

Real SQL Server 2022 — not Express, no DTU caps, full Entity Framework Core 10 support; query performance issues are about your code, not the platform's database edition

Plesk for Windows IIS log viewer — direct access to worker recycle events and IIS request traces without RDP'ing into the box

All current .NET LTS runtimes pre-installed — .NET 8 LTS and .NET 10 LTS side-by-side, so runtime upgrades aren't a hosting concern

Frequently asked questions

How do I tell whether the CPU spike is in my code or in garbage collection?

Run dotnet-counters monitor -n YourApp during the spike. If % Time in GC is above 10% sustained, GC pressure is your problem — look for allocation-heavy patterns. If % Time in GC is low but CPU is high, your code is doing the work — capture a dotnet-trace and look at the call stacks.

My app's CPU is fine until exactly 200 concurrent users, then it pegs. Why 200?

You've hit thread pool exhaustion. The thread pool's minimum worker-thread count defaults to the machine's core count; under heavy sync-over-async blocking (cause #1) you fill those threads, and the runtime injects new ones at a slow rate (roughly one per second). Around 200 concurrent blocked operations, the injection rate can't keep up. Fix the sync-over-async pattern in your code; raising the thread pool minimum is a band-aid that just moves the failure point.
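If you need to buy time mid-incident while the real fix ships, you can raise the floor, but treat it as the band-aid it is:

// Band-aid: pre-provision threads so injection lag doesn't bite.
// The blocked threads are still blocked; fix the sync-over-async code.
ThreadPool.SetMinThreads(workerThreads: 400, completionPortThreads: 400);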

Can I just throw more CPU at the problem and ignore it?

For some workloads, yes — if you're CPU-bound on legitimate work (image processing, ML inference, large data transforms), more cores genuinely help. For thread starvation, regex catastrophic backtracking, or GC thrashing, more CPU just means the failure happens at higher load. Diagnose first.

Does Native AOT help with CPU usage?

Native AOT (available for ASP.NET Core 8+) compiles ahead of time, eliminating JIT compilation overhead and reducing startup time. For long-running production processes, the startup savings are negligible — you start once. Native AOT helps most for serverless / cold-start scenarios. For normal IIS-hosted ASP.NET Core, traditional JIT compilation has more aggressive runtime optimisations and usually wins on steady-state CPU efficiency.

Should I increase the IIS application pool's CPU limit?

Probably not. The CPU limit doesn't make your code faster — it just caps how much CPU IIS lets the worker consume before throttling it. If you're hitting the cap, you have an underlying performance issue (one of causes 1-7). Fix the code; the cap shouldn't matter.

What about Application Insights or Sentry — do those help diagnose CPU?

Yes — APM tools surface slow endpoints, exception bursts, and request-duration distributions that make it obvious which path is hot. They don't directly measure CPU but they identify the suspicious endpoints fast. dotnet-counters and dotnet-trace are the precise CPU tools; APM is the higher-level "where do I start looking" tool.

Will switching from EF Core to Dapper fix high CPU from N+1 queries?

Only if you also fix the N+1 pattern. Dapper is faster per query than EF Core, but 200 Dapper queries are still 200 round-trips. The fix is materialising the data in one or two queries regardless of which ORM you use. Dapper is often the right tool for read-heavy hot paths, but it doesn't fix architectural problems.
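For comparison, the batched shape in Dapper, as a minimal sketch assuming an open SqlConnection conn and the same Order/LineItem entities:

// One query for all line items, grouped in memory: 2 round-trips total
var ids = orders.Select(o => o.Id).ToList();
var lineItems = await conn.QueryAsync<LineItem>(
    "SELECT * FROM LineItems WHERE OrderId IN @Ids",
    new { Ids = ids }); // Dapper expands the list into an IN clause
var byOrder = lineItems.ToLookup(l => l.OrderId);
foreach (var order in orders)
    order.LineItems = byOrder[order.Id].ToList();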

Bottom line

High CPU on Windows-hosted .NET apps almost always traces to one of: thread starvation from sync-over-async, N+1 EF Core queries, regex catastrophic backtracking, GC pressure from allocation-heavy code, runaway retry loops, or app pool recycle thrashing. Each has a specific diagnostic and a specific fix.

On Adaptive Web Hosting's ASP.NET Core plans, dedicated IIS Application Pools, production-tuned recycle settings, and real SQL Server 2022 eliminate the platform-level causes — the rest is your application code, and the diagnostic tools above will find it within minutes once you know what to look for. Every plan includes a 30-day money-back guarantee. View hosting plans or talk to an ASP.NET expert.
