Lighthouse Score 100 and Still Crashes: OOM and Long Sessions

Why a perfect Lighthouse score doesn't prevent out-of-memory tab crashes in long-running SPAs — and how to build apps that survive 8-hour sessions.

Ashish · 12 min read

A Lighthouse score of 100 means the page loads efficiently in a clean, synthetic test environment; it says nothing about what happens after 6 hours of continuous use — a hundred route navigations, thousands of polling cycles, and a React Query cache holding responses that should have been evicted hours ago.

Why lab metrics miss this: Lighthouse runs a single page load on a clean browser instance. It cannot measure heap growth rate, memory fragmentation, or tab survival probability on a 3GB Android device. OOM tab crashes don’t produce JavaScript errors — the renderer process just dies.

What this covers: What OOM crashes actually are and why Chrome’s tab limits differ by device, why long-lived SPAs accumulate memory (route state, unbounded caches, subscription leaks), how to monitor heap growth in production, and strategies that keep apps stable over 8-hour sessions.

[Diagram contrasting Lighthouse lab scores with long-session memory growth and the OOM risk that lab tools do not capture.]

The Lab vs Field Gap

Lighthouse runs a single, synthetic page load on a clean browser instance with simulated throttling. It measures what happens during that load (LCP, CLS, TBT) and produces an overall performance score. For what it measures, it's accurate and useful.

What it doesn’t measure:

Not measured by Lighthouse | Why it matters
Memory growth over time | Long-lived apps accumulate heap
Memory after 100 navigations | SPAs never fully unload between routes
GC pressure under sustained load | Polling loops cause GC pauses over hours
Memory fragmentation | Old generation becomes fragmented; GC is less effective
Tab survival on low-RAM devices | Chrome kills tabs proactively on 2GB-RAM phones

The Core Web Vitals field data in CrUX (Chrome User Experience Report) does capture real users, but it captures the load experience of those users, not their session survival rate. There’s no CrUX metric for “tab was killed by the OS after 4 hours.” That data simply doesn’t exist in any aggregate form you can easily access.

This creates a dangerous blind spot. You can achieve a perfect Lighthouse score and still be shipping software that degrades and crashes for users who use it the way internal tools get used: all day, every day, in the same tab.


What an OOM Tab Crash Actually Is

OOM stands for out of memory. In Chrome, each tab runs in its own renderer process with a memory limit. When the heap grows beyond that limit, Chrome terminates the renderer process and shows the user an “Aw, Snap!” error page.

The limit isn’t a fixed number. Chrome sets it dynamically based on:

  • The total available system RAM
  • How many other tabs are open
  • Chrome’s own heuristics for memory pressure

On a desktop machine with 16GB RAM and only a few tabs open, a single tab can use 2-4GB before Chrome kills it. On a mobile device with 3GB RAM running multiple apps, Chrome may proactively kill background tabs (not even the active one) when the system is under memory pressure.

The performance.memory API (Chrome only, non-standard) exposes some of these limits:

if (performance.memory) {
  console.log({
    // Total heap size V8 has allocated (committed memory)
    totalJSHeapSize: performance.memory.totalJSHeapSize,
    // How much of that heap is actually in use by JS objects
    usedJSHeapSize: performance.memory.usedJSHeapSize,
    // The hard limit: the heap cannot grow beyond this
    jsHeapSizeLimit: performance.memory.jsHeapSizeLimit,
  });
}

On a typical desktop Chrome, jsHeapSizeLimit is around 4GB. On Android Chrome on a 3GB device, it’s often 512MB-1GB. When usedJSHeapSize approaches jsHeapSizeLimit, the tab is on the edge of an OOM crash.
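That “on the edge” judgment can be made explicit with a small helper. This is an illustrative sketch, not a Chrome heuristic: the 60%/80% thresholds are assumptions, and the pure function is separated out so it works with any used/limit pair.

```javascript
// Hypothetical helper: classify heap pressure from used/limit byte counts.
// The 60% / 80% thresholds are illustrative choices, not Chrome's own.
function heapPressure(usedBytes, limitBytes) {
  const ratio = usedBytes / limitBytes;
  if (ratio > 0.8) return "critical"; // an OOM kill is plausible soon
  if (ratio > 0.6) return "elevated";
  return "normal";
}

// In the browser (Chrome only), feed it performance.memory:
if (typeof performance !== "undefined" && performance.memory) {
  const { usedJSHeapSize, jsHeapSizeLimit } = performance.memory;
  console.log(heapPressure(usedJSHeapSize, jsHeapSizeLimit));
}
```

Keeping the classification pure makes it trivial to unit-test and reuse in both the periodic logger and any user-facing warning.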


Mobile RAM: The Real Constraint

The operations team using the dashboard I mentioned was on desktop machines. But the majority of web traffic globally is on mobile, and mobile RAM constraints are severe.

Chrome on Android uses the following rough thresholds for tab killing (these aren’t official numbers; they’re inferred from behavior and reported by developers and researchers):

Device RAM | Approx. tab heap limit | Background tab kill threshold
1 GB | ~150-200MB | ~50MB
2 GB | ~250-350MB | ~100MB
3 GB | ~400-500MB | ~200MB
4 GB+ | ~700MB+ | ~300MB

On a 2GB Android device (still common in many markets), a tab that uses 300MB of heap will be proactively killed by Chrome when the user switches to another app. When they switch back, Chrome reloads the tab from scratch, losing all state. Users experience this as “the page keeps refreshing.”

For apps that need to survive on mobile, 150-200MB is a realistic heap budget to aim for in steady state. This is much tighter than desktop, and it rules out certain architectural decisions like caching every API response in memory indefinitely.
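One way to act on this is to pick the heap budget at runtime from the device's reported RAM. A hedged sketch: `navigator.deviceMemory` is a real but Chrome-only API that reports a coarse value in GB, and the budget numbers below are illustrative choices loosely derived from the thresholds above, not official limits.

```javascript
// Illustrative mapping from device RAM (GB) to a steady-state heap
// budget in MB. The numbers are assumptions, not Chrome's limits.
function heapBudgetMB(deviceMemoryGB) {
  if (deviceMemoryGB <= 1) return 100;
  if (deviceMemoryGB <= 2) return 150;
  if (deviceMemoryGB <= 3) return 200;
  return 400; // desktop-class devices
}

// In the browser, fall back to the conservative 2GB budget when the
// API is unavailable (Firefox, Safari report no deviceMemory):
const budgetMB = heapBudgetMB(
  typeof navigator !== "undefined" && navigator.deviceMemory
    ? navigator.deviceMemory
    : 2
);
```

The budget can then drive cache sizes and warning thresholds instead of hard-coding one number for all devices.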


Why Long Sessions Break Apps That Pass Lighthouse

Accumulated Route State

React Router and Next.js don’t fully unmount pages on navigation in SPAs. The framework manages route transitions, but unless you’re explicitly code-splitting and unloading modules, the JavaScript for visited routes stays loaded. More importantly, React components that were mounted may hold state in closures, context, or stores.

Consider a dashboard with 20 different report views. After visiting all 20, the user has accumulated the component trees, data structures, and side effects of all 20 pages. If any of those pages has a memory leak (even a small one), it compounds with each visit.
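The compounding pattern often looks like a module-level map keyed by route that only ever grows. A minimal sketch of the anti-pattern and a bounded alternative (the store shape and names are hypothetical):

```javascript
// Anti-pattern: every visited report's rows stay in this Map forever.
const reportDataLeaky = new Map();

// Bounded alternative: cap retained reports, evicting the least
// recently written entry. Map preserves insertion order, so the first
// key is always the oldest.
function makeBoundedStore(maxEntries) {
  const store = new Map();
  return {
    set(routeId, data) {
      store.delete(routeId); // delete + re-set moves the key to the end
      store.set(routeId, data);
      if (store.size > maxEntries) {
        const oldest = store.keys().next().value;
        store.delete(oldest);
      }
    },
    get(routeId) {
      return store.get(routeId);
    },
    get size() {
      return store.size;
    },
  };
}
```

With a cap of, say, 5 reports, visiting all 20 views retains only the most recent 5 payloads instead of all of them.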

Cache Without Eviction

React Query and SWR are excellent data fetching libraries. Their defaults, however, are memory-hungry for long-running apps. React Query, by default, keeps cached data in memory for 5 minutes after it’s no longer actively used. In a dashboard that queries 200 unique order IDs over a day of use, that’s 200 cached responses held in memory simultaneously.

import { QueryClient } from "@tanstack/react-query";

// React Query default: caches everything for 5 minutes
// const queryClient = new QueryClient();

// Tuned for long-lived apps
const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 30_000, // data is fresh for 30s
      gcTime: 60_000,    // remove from cache after 1 minute of disuse (was cacheTime in v4)
    },
  },
});

Reducing gcTime from 5 minutes to 1 minute sounds minor. Over 8 hours of active use, it’s the difference between holding thousands of cached responses and holding only the last few minutes of data.
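The back-of-envelope arithmetic behind that claim can be made explicit. This helper is purely illustrative (it is not part of React Query): it estimates how many cached entries stay live given how many unique queries the user touches per minute and how long each entry survives after last use.

```javascript
// Rough estimate of simultaneously-live cache entries:
// (unique queries touched per minute) x (gcTime expressed in minutes).
function retainedEntries(queriesPerMinute, gcTimeMs) {
  return Math.ceil(queriesPerMinute * (gcTimeMs / 60_000));
}

// A dashboard touching 10 unique queries per minute:
const withDefault = retainedEntries(10, 5 * 60_000); // 5-minute gcTime
const withTuned = retainedEntries(10, 60_000);       // 1-minute gcTime
```

Under these assumptions the default holds roughly five times as many live responses as the tuned configuration, and the gap widens as query churn grows.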

Event Listener and Subscription Accumulation

Even small leaks compound over time. An event listener that’s not removed on component unmount is tiny: maybe a few KB of retained closures. After 500 route navigations (common in a full workday), those small leaks add up to hundreds of MB.

I covered this in detail in the React Memory Leaks post, but the time dimension is what turns it from a development curiosity into a production crash issue. In a unit test or Lighthouse run, the component mounts once and unmounts once. In 8 hours of use, it might mount and unmount 300 times.
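A minimal, framework-agnostic sketch of the leak-free pattern: every subscription registers a disposer, and all disposers run on teardown. The `createSubscriptions` name is hypothetical; in React the `disposeAll` call would live in a `useEffect` cleanup.

```javascript
// Collect a disposer for every listener so teardown can't miss one.
// `target` is any EventTarget (DOM node, window, WebSocket, ...).
function createSubscriptions() {
  const disposers = [];
  return {
    listen(target, type, handler) {
      target.addEventListener(type, handler);
      disposers.push(() => target.removeEventListener(type, handler));
    },
    disposeAll() {
      while (disposers.length) disposers.pop()();
    },
  };
}
```

The point is structural: cleanup is registered at the same moment as the subscription, so 300 mount/unmount cycles retain nothing.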


Diagnosing Production OOM

OOM crashes are hard to diagnose because they don’t produce a JavaScript error. The renderer process just dies. There’s no stack trace, no error boundary trigger, no Sentry event.

Field signals to watch for:

  • Session duration distribution: if your analytics shows a spike in session ends at 3-4 hours, users are being kicked out (either by OOM or by giving up on a slow app)
  • Navigation abandonment: if users stop navigating after a certain number of route changes, the app may be degrading
  • performance.memory logging: log usedJSHeapSize periodically (every 5 minutes) to your analytics service. Over time this builds a picture of heap growth rate per user type

// Log memory usage every 5 minutes to your analytics service
function startMemoryMonitoring() {
  if (!performance.memory) return; // Chrome only

  setInterval(() => {
    const { usedJSHeapSize, jsHeapSizeLimit } = performance.memory;
    const usagePercent = (usedJSHeapSize / jsHeapSizeLimit) * 100;

    analytics.track("memory_usage", {
      usedMB: Math.round(usedJSHeapSize / 1024 / 1024),
      limitMB: Math.round(jsHeapSizeLimit / 1024 / 1024),
      usagePercent: Math.round(usagePercent),
    });

    // Warn if approaching limit
    if (usagePercent > 80) {
      console.warn(`Heap at ${usagePercent.toFixed(0)}% of limit`);
    }
  }, 5 * 60 * 1000);
}

The performance.measureUserAgentSpecificMemory() API (Chrome M89+) provides a more precise measurement, including memory in iframes and workers, but only in cross-origin isolated contexts (COOP/COEP headers required):

if (crossOriginIsolated && performance.measureUserAgentSpecificMemory) {
  const result = await performance.measureUserAgentSpecificMemory();
  console.log("Total bytes used:", result.bytes);
  // result.breakdown attributes memory to individual frames and workers
}

As of 2026 this API is still Chrome-only and not widely used, but it’s the right long-term direction for memory monitoring.


Strategies for Long-Lived Apps

Route-Based Code Splitting and Proper Unmounting

React Router v6’s <Outlet> and lazy loading ensure that route components are code-split. But code-splitting only controls the initial load: once a module is loaded, it stays in memory. The more important mechanism is ensuring components fully unmount and release their state.

In practice this means auditing that route components don’t hold global references, that their useEffect cleanups fire correctly, and that stores don’t accumulate data from visited routes indefinitely.

LRU Cache Eviction

For any client-side cache, implement maximum size with LRU (least recently used) eviction:

import { LRUCache } from "lru-cache";

const detailCache = new LRUCache({
  max: 50,                     // keep at most 50 items
  ttl: 1000 * 60 * 5,          // expire items after 5 minutes
  maxSize: 50 * 1024 * 1024,   // cap at 50MB total
  sizeCalculation: (value) => JSON.stringify(value).length,
});

lru-cache is the standard choice in the Node/browser ecosystem: well-maintained, tiny, fast. For React Query, use the gcTime option as shown earlier.

Page Visibility API: Reduce Background Work

When the user switches tabs or minimizes the browser, there’s no reason to keep polling APIs at full speed. The Page Visibility API lets you detect visibility changes:

document.addEventListener("visibilitychange", () => {
  if (document.hidden) {
    // Tab is hidden: reduce or stop background work
    stopPolling();
  } else {
    // Tab is visible again: resume and fetch fresh data
    startPolling();
    queryClient.invalidateQueries();
  }
});

Stopping API polling when the tab is hidden also prevents stale data from accumulating in caches that aren’t being actively evicted.
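One way to implement the `startPolling` / `stopPolling` pair used above is a small controller around `setInterval`; the `pollFn` callback and interval are assumptions. (React Query users polling via `refetchInterval` get part of this for free: by default it does not fire in background tabs unless `refetchIntervalInBackground` is enabled.)

```javascript
// Minimal polling controller: idempotent start/stop around setInterval.
function createPoller(pollFn, intervalMs) {
  let timer = null;
  return {
    start() {
      if (timer !== null) return; // already running, don't double-schedule
      timer = setInterval(pollFn, intervalMs);
    },
    stop() {
      if (timer !== null) {
        clearInterval(timer);
        timer = null;
      }
    },
    get running() {
      return timer !== null;
    },
  };
}
```

Making start idempotent matters here: visibilitychange can fire in quick succession, and a double-scheduled interval is itself a slow leak.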

Idle Reload for Kiosks and Operational Dashboards

For dashboards that need to be 100% reliable over long periods, the nuclear option is a scheduled reload during periods of inactivity. This sounds crude but is genuinely effective for kiosk displays, NOC dashboards, and ops tooling:

function scheduleIdleReload(idleMinutes = 120) {
  let idleTimer;

  function resetTimer() {
    clearTimeout(idleTimer);
    idleTimer = setTimeout(() => {
      // User has been inactive for `idleMinutes`: reload the page
      window.location.reload();
    }, idleMinutes * 60 * 1000);
  }

  // Reset on any user interaction
  ["mousedown", "keydown", "touchstart", "scroll"].forEach((event) => {
    document.addEventListener(event, resetTimer, { passive: true });
  });

  resetTimer();
}

// Reload if idle for 2 hours
scheduleIdleReload(120);

With a 2-hour idle reload, a tab that’s been left open all night reloads when the ops team comes in the morning. Fresh state, no accumulated heap. Combined with proper session restoration (saving the current view to sessionStorage), this is invisible to users.
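The session-restoration half can be sketched as a save/restore pair around sessionStorage. The key name and `view` shape are assumptions; the storage object is a parameter so the logic is testable outside the browser (pass `sessionStorage` in real code).

```javascript
// Persist the current view before a reload; restore it on boot.
const VIEW_KEY = "dashboard:lastView";

function saveView(storage, view) {
  storage.setItem(VIEW_KEY, JSON.stringify(view));
}

function restoreView(storage) {
  const raw = storage.getItem(VIEW_KEY);
  if (!raw) return null;
  try {
    return JSON.parse(raw);
  } catch {
    return null; // corrupted entry: fall back to the default view
  }
}

// In the browser:
// saveView(sessionStorage, { route: "/orders", page: 3 });
// const view = restoreView(sessionStorage);
```

Call `saveView` on every navigation (or in the idle-reload timer just before `location.reload()`), and apply `restoreView`'s result during app startup.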


A Memory Budget for Long-Running SPAs

Based on my experience with internal tools and operations dashboards, here’s a rough budget that keeps long-running apps stable:

Category | Desktop Target | Mobile Target
Initial page load heap | < 50MB | < 30MB
Steady-state heap (after 1 hour) | < 150MB | < 80MB
Maximum acceptable heap | < 400MB | < 150MB
API response cache size | < 50MB | < 20MB

If your app exceeds the “maximum acceptable” column, it’s at elevated risk of OOM on the target device class. Use performance.memory logging in production to measure where your real users actually land.

Lighthouse will never tell you about these numbers. It can’t: it doesn’t run for an hour. But they’re what determines whether your users can actually use your app through a full working day.

Reducing React Query gcTime from 5 minutes to 90 seconds, pausing polling on tab hide, and deploying an idle reload (4-hour threshold) are the three changes that keep most long-lived dashboards stable. Heap stays under 200MB for the full working day. Lighthouse score stays 100. It just isn’t the metric that matters for this class of problem.