
Stop Guessing INP: Fix Interactions Without These Common Errors


Introduction: The Cost of Guessing When Fixing INP

Interaction to Next Paint (INP) measures the latency from a user's interaction (a tap, click, or key press) to the moment the next visual update appears on screen. Unlike First Input Delay (FID), which only captured the input delay of the first interaction, INP considers the longest interaction observed across the entire page visit (ignoring a few outliers on pages with many interactions). Because INP accounts for all types of interactions—key presses, taps, clicks—it paints a far more accurate picture of real-world responsiveness. Yet many development teams still treat INP as a black box, making changes without understanding the root cause. A single poorly handled event handler can push INP past the 200 millisecond "good" threshold, and guessing which handler is to blame wastes time and degrades user experience.

This guide will dissect the most common errors teams make when attempting to improve INP, then provide a structured, evidence-based approach to diagnose and resolve each issue. We will cover everything from instrumentation mistakes to performance anti-patterns, and offer concrete steps you can implement today. By the end, you will know exactly what to look for and how to fix it—no more guessing.

Error #1: Misidentifying the Culprit Interaction

One of the most frequent mistakes is assuming that the first slow interaction is the one you need to fix. Because INP reports the worst interaction, teams often pick a random slow interaction and optimize it, only to find that the overall INP does not improve. The reason is simple: the interaction that triggers the high INP may not be the one you optimized. Many teams also confuse input delay with processing time, leading to wasted effort on the wrong part of the pipeline.

How to Correctly Identify the Worst Interaction

To avoid this error, you must instrument your site with Real User Monitoring (RUM) that captures INP attribution data. The PerformanceObserver API in JavaScript lets you observe 'event' entries from the Event Timing API ('first-input' entries only cover the first interaction, so 'event' is the type that matters for INP). Each entry carries the timestamps needed to break an interaction into its three phases: input delay (from the user action to when the event handler starts, i.e. processingStart - startTime), processing time (time spent in event handlers and callbacks, processingEnd - processingStart), and presentation delay (the remainder of the entry's duration, spent painting the next frame). By logging the worst interaction's type (click, keydown, etc.) and its target element selector, you can zoom in on the exact component or code path that needs attention. For example, if a certain button's click handler consistently causes a processing time of 300 ms, that is your target. Without this data, you are flying blind.
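The attribution capture described above can be sketched with the Event Timing API. This is a minimal example, not a full RUM client; the `report` callback and the 40 ms `durationThreshold` are assumptions to adapt to your own pipeline:

```javascript
// Minimal sketch: track the worst Event Timing entry seen so far.
// The reducer is kept pure so it can be tested outside the browser.
function worstEntry(entries) {
  return entries.reduce(
    (worst, e) => (!worst || e.duration > worst.duration ? e : worst),
    null
  );
}

function observeWorstInteraction(report) {
  const seen = [];
  const po = new PerformanceObserver((list) => {
    seen.push(...list.getEntries());
    const w = worstEntry(seen);
    if (w) {
      report({
        type: w.name, // e.g. 'click', 'keydown'
        target: w.target && w.target.tagName,
        duration: w.duration, // full input-to-next-paint duration
        inputDelay: w.processingStart - w.startTime,
        processingTime: w.processingEnd - w.processingStart,
      });
    }
  });
  // durationThreshold: only report interactions slower than 40 ms
  po.observe({ type: 'event', buffered: true, durationThreshold: 40 });
  return po;
}
```

In a real deployment you would send the reported record to your RUM endpoint rather than keeping it in memory.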

Another key point is to separate interactions by device type. On mobile, the same button may cause a different INP due to slower CPUs or memory constraints. Use RUM that segments by device category. If your worst interaction on desktop is a form submission but on mobile it is a tap on a carousel arrow, you may need two different fixes. Collect at least a few hundred samples to ensure the pattern is stable, then prioritize the interaction that appears most frequently in the top percentile.

Error #2: Only Testing in Lab Conditions

Lab tools like Lighthouse and Chrome DevTools are excellent for identifying performance bottlenecks, but they run on a single machine with consistent network and CPU conditions. Real users have varying device capabilities, network speeds, and interaction patterns. If you optimize solely based on lab data, you may miss interactions that only become slow under real-world conditions—such as when a user has many open tabs or a slow CPU. One project team reported that their lab tests showed a consistent INP of 150 ms, but production RUM data revealed a worst-case INP of 450 ms on mid-range Android devices. The discrepancy was caused by a third-party analytics script that only ran on one page in the lab but was present on all pages in production.

How to Combine Lab and Field Data Effectively

The best practice is to use lab tools for quick iteration and field data for validation. Start by identifying the high-level issue: is it input delay, processing time, or presentation delay? Use Chrome DevTools' Performance panel to record a typical interaction (like clicking a button) and inspect the flame chart. Look for long tasks (over 50 ms) that block the main thread. Then, cross-reference with your RUM data to see if the same interaction appears in the field. If a long task appears in both, you have high confidence that fixing it will improve INP. If the field data shows a different interaction as the worst, go back to the lab and simulate that interaction with throttling (CPU slowdown, network throttling). Many teams skip this step and end up optimizing the wrong thing. Always let field data guide your lab investigations.

Another approach is to use CrUX (Chrome User Experience Report) for aggregate INP data for pages with sufficient traffic. While CrUX does not give per-interaction attribution, it can confirm whether your INP is actually poor for a given URL. If CrUX says your INP is in the "poor" range (>500 ms) for a page, but your lab tests show green scores, you know something is missing in your lab setup—likely a script or interaction pattern you did not simulate.
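Checking CrUX programmatically can be sketched against the public CrUX API. The request and response shapes below follow the API's documented format, but treat the details (metric name, `percentiles.p75` path) as assumptions to verify against the docs; `apiKey` is a placeholder:

```javascript
// Hedged sketch: query the CrUX API for a URL's field INP.
const CRUX_ENDPOINT =
  'https://chromeuserexperiencereport.googleapis.com/v1/records:queryRecord';

function buildCruxQuery(url, formFactor = 'PHONE') {
  return {
    url,
    formFactor, // 'PHONE' | 'DESKTOP' | 'TABLET'
    metrics: ['interaction_to_next_paint'],
  };
}

async function fetchFieldINP(url, apiKey) {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildCruxQuery(url)),
  });
  const data = await res.json();
  // p75 INP in milliseconds, per the documented response shape
  return data.record.metrics.interaction_to_next_paint.percentiles.p75;
}
```

If the returned p75 is poor while your lab run is green, that mismatch is the signal to re-examine your lab setup.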

Error #3: Ignoring Input Delay

Input delay is the time between when the user interacts and when the event handler starts executing. Many developers focus exclusively on processing time (the time spent in event handlers) and neglect the delay caused by a busy main thread. If the browser is busy parsing a large script, processing a layout recalculation, or running a third-party script, the user's click will be queued and delayed. In extreme cases, input delay can exceed 500 ms even if the handler itself executes instantly. This is especially common on pages with heavy JavaScript frameworks that hydrate components after load.

How to Measure and Reduce Input Delay

Use the PerformanceObserver API to observe 'event' entries and compute input delay as processingStart - startTime (the 'first-input' entry gives the same breakdown, but only for the first interaction). Alternatively, in Chrome DevTools, record a user interaction and check the "Input" section in the Performance panel. The input delay is shown as the time from the user action (the vertical red line) to the start of the event handler. To reduce input delay, you must keep the main thread free. This means deferring non-critical scripts, breaking up long tasks using techniques like `requestIdleCallback` or `scheduler.yield`, and avoiding forced synchronous layouts (reading layout properties immediately after writing to the DOM). One effective strategy is to move heavy computations that do not need DOM access into a web worker. While the worker performs the work, the main thread remains responsive to user input.
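Breaking up long tasks can be sketched as follows. This uses `scheduler.yield()` where the browser supports it and falls back to `setTimeout`; the `processInChunks` helper and its chunk size are illustrative, not a standard API:

```javascript
// Sketch: yield control back to the main thread between chunks of work,
// so queued input events can run instead of waiting behind one long task.
function yieldToMain() {
  if (globalThis.scheduler && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processInChunks(items, handle, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      handle(item);
    }
    await yieldToMain(); // pending input runs here
  }
}
```

The trade-off is total throughput: the work finishes slightly later overall, but no single task blocks the main thread long enough to delay an interaction.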

Another common source of interaction overhead is a large number of event listeners on the same or ancestor elements. Every listener that runs for an interaction, during either the capture or the bubble phase, adds to processing time. Consider using event delegation wisely: attach a single listener at a common parent instead of multiple listeners on individual children. However, ensure the delegation logic itself is fast—if it involves complex selector matching or loops, it can add to processing time. Profile your delegation function separately using `console.time` or `performance.mark`.
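A delegation helper along these lines keeps the matching cheap. The `delegate` function and its signature are illustrative, not a library API; it relies on `Element.closest` for matching:

```javascript
// Sketch of event delegation: one listener at a common ancestor,
// dispatching to a handler only when the event target matches a selector.
function delegate(root, selector, type, handler) {
  const listener = (event) => {
    const match = event.target.closest && event.target.closest(selector);
    if (match && root.contains(match)) {
      handler(event, match);
    }
  };
  root.addEventListener(type, listener);
  // return an unsubscribe function for cleanup
  return () => root.removeEventListener(type, listener);
}

// e.g. delegate(document.querySelector('ul'), 'li.item', 'click', onItemClick);
```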

Error #4: Overlooking Processing Time Inside Handlers

Even with zero input delay, a poorly written event handler can cause huge processing time. This is the most common target for optimization, but many teams make the mistake of guessing which part of the handler is slow. They add `console.time` statements haphazardly, or worse, they rewrite the entire handler without isolating the bottleneck. A typical scenario: a click handler that fetches data, processes the response, and updates the DOM. Without profiling, a team might optimize the fetch call when the real culprit is a heavy `Array.map` operation on the response. Or they might optimize the DOM update when the bottleneck is a layout thrashing loop.

How to Profile Event Handlers Accurately

Chrome DevTools' Performance panel is your best friend. Record the interaction, then look at the "Bottom-Up" or "Call Tree" tabs to see which functions consumed the most time. Focus on functions that are part of your own code (not third-party scripts). The panel shows you the exact line numbers and how many times a function was called. If a function is called many times inside a loop, consider batching or throttling. Another technique is the User Timing API: wrap parts of your handler with `performance.mark()` calls and then create a `performance.measure()`. This gives you custom timing labels that appear in the DevTools timeline. For example:

performance.mark('start-process');
// ... process data ...
performance.mark('end-process');
performance.measure('data-processing', 'start-process', 'end-process');

If you see that data-processing takes 150 ms, you know where to focus. Without these markers, you are guessing. Also, be careful with async functions: even if the handler uses `await`, all synchronous work before the first `await` still counts toward processing time. Move heavy synchronous work until after the first paint if possible, or use `setTimeout` to yield to the browser.
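Yielding heavy work until after the next paint can be sketched like this. The names `afterPaint`, `updateUi`, and `heavyWork` are hypothetical; the rAF-plus-setTimeout combination is a common pattern for scheduling a callback just after the frame is painted:

```javascript
// Sketch: paint cheap visual feedback first, then do the expensive work.
function afterPaint() {
  return new Promise((resolve) => {
    if (typeof requestAnimationFrame === 'function') {
      // rAF fires just before paint; the nested setTimeout lands after it
      requestAnimationFrame(() => setTimeout(resolve, 0));
    } else {
      setTimeout(resolve, 0);
    }
  });
}

async function onClick(updateUi, heavyWork) {
  updateUi(); // cheap feedback, included in the next paint
  await afterPaint();
  heavyWork(); // expensive work no longer blocks that paint
}
```

The INP-relevant paint here is the one produced by `updateUi`; `heavyWork` still runs, but outside the measured interaction window.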

Error #5: Neglecting Presentation Delay

Presentation delay is the time from when the event handler finishes to when the next paint actually occurs. This is often overlooked because developers assume that after the handler runs, the browser paints immediately. But the browser may be waiting for a style recalculation or layout update before it can paint. This is especially true if your handler modifies inline styles that trigger layout, or if it adds many DOM nodes that require a layout pass. In some cases, presentation delay can equal or exceed processing time.

How to Minimize Presentation Delay

To reduce presentation delay, avoid making layout-triggering changes in handlers that are time-critical. For example, animating an element's `left` property causes layout on each frame, which is expensive. Use `transform` and `opacity` instead, as they can be handled by the compositor thread without triggering layout. Also, if your handler needs to add multiple DOM elements, use a DocumentFragment to batch the changes, then append the fragment once. This avoids multiple layout passes. Another tip: if you read layout properties (like `offsetHeight`) after writing to the DOM, you force a synchronous layout recalculation. To avoid this, either read before writing, or defer the read to the next frame using `requestAnimationFrame`.
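The DocumentFragment batching described above might look like the following sketch. The `doc` parameter is an illustrative injection point (it defaults to the global `document` in the browser); in real code you would simply use `document` directly:

```javascript
// Sketch: batch DOM insertions with a DocumentFragment so the browser
// performs a single insertion (and a single layout pass) instead of one
// per element.
function appendItems(parent, labels, doc = document) {
  const frag = doc.createDocumentFragment();
  for (const label of labels) {
    const li = doc.createElement('li');
    li.textContent = label;
    frag.appendChild(li);
  }
  parent.appendChild(frag); // one insertion for the whole batch
}

// e.g. appendItems(document.querySelector('ul.results'), suggestions);
```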

In addition, ensure that the browser has enough time to paint. If your handler triggers a style change that requires a repaint, the presentation delay will include that paint time. To minimize paint area, use `will-change` or `contain: layout style paint` on elements that are frequently updated. However, use these properties sparingly, as they can increase memory usage. Finally, check for large images or heavy CSS filters that might cause slow painting. Use Chrome DevTools' Rendering tab to visualize paint rectangles and composite layers. If you see large red rectangles flashing on each interaction, that is a sign of excessive repaint area.

Error #6: Not Accounting for Third-Party Scripts

Third-party scripts (analytics, ads, chat widgets, etc.) are often the hidden cause of high INP. They run on your page's main thread, competing with your own event handlers. Even if your code is perfectly optimized, a third-party script that executes a long task can delay your event handler's start or its completion. Many teams forget to check third-party contributions, or they assume they cannot do anything about it. But there are practical steps you can take.

How to Diagnose and Mitigate Third-Party Impact on INP

First, use Chrome DevTools' "Network" panel to see which third-party scripts are loaded, and then use the "Performance" panel to see if any of them appear as long tasks during or around your targeted interaction. Look for tasks marked with the script's origin. If a third-party script appears as a long task that overlaps with your event processing, that is a direct contributor. One common pattern: an analytics script sends a synchronous XHR on every click, adding 100+ ms to the processing time. The fix is to use the script's asynchronous version or defer the call to after the next paint using `requestIdleCallback`.
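Deferring the analytics call past the interaction can be sketched with `requestIdleCallback`. The `runWhenIdle` helper and `sendAnalytics` are illustrative names, not part of any analytics SDK:

```javascript
// Sketch: defer a non-critical third-party call until the browser is idle,
// falling back to setTimeout where requestIdleCallback is unavailable.
function runWhenIdle(task, timeout = 2000) {
  if (typeof requestIdleCallback === 'function') {
    requestIdleCallback(task, { timeout }); // run eventually, but don't starve it
  } else {
    setTimeout(task, 0);
  }
}

// Usage inside a click handler: capture the payload now, send it later.
function onButtonClick(sendAnalytics) {
  const payload = { event: 'click', ts: Date.now() };
  runWhenIdle(() => sendAnalytics(payload));
  // ...the rest of the handler runs without waiting on analytics
  return payload;
}
```

The key property is that nothing network-related runs synchronously inside the interaction; the beacon fires once the main thread is free.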

If you cannot modify the third-party script's behavior, consider loading it with `async` or `defer`, and ensure it does not block the `DOMContentLoaded` event. Some chat widgets use `document.write`, which blocks parsing; replace those with a modern asynchronous loader. Another approach is to use a service worker to intercept and delay non-critical third-party requests until after user interaction is complete. Lastly, measure the impact: use RUM that captures attribution for third-party scripts (tools built on the `web-vitals` library's attribution build can associate interactions with the scripts involved). If a particular third-party script is consistently associated with high INP on your site, evaluate whether its value justifies the performance cost. In some cases, you can lazy-load the script only on pages where it is needed, reducing its impact on critical pages.

Error #7: Optimizing Without Measuring the Impact

After making a change to reduce INP, many teams simply redeploy and assume the problem is solved. They forget to measure the actual impact in production. Without post-deployment RUM data, you cannot verify that your fix actually improved INP for real users. Sometimes a change that looks good in the lab can actually make things worse in the field—for example, deferring a script might cause a different interaction to become slower because the deferred script now runs during that interaction.

How to Validate Your INP Fixes

Before rolling out a fix, establish a baseline by collecting at least one week of INP data from your RUM provider. Note the 75th percentile and the worst interaction's details. After deploying the fix, collect another week of data and compare. Look for improvement not just in the overall INP metric, but in the specific interaction you targeted. Did its processing time drop? Did the input delay decrease? If the overall INP improved but the targeted interaction did not, you may have accidentally fixed a different bottleneck. Also check for regressions: did any other interaction become slower? This can happen if you increased the priority of one task at the expense of another. Use a tool like the `web-vitals` library that logs all interactions, not just the worst one, so you can see the full distribution.
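Comparing the 75th percentile before and after a fix requires computing a percentile over all logged interaction durations. A nearest-rank sketch is below; note that RUM providers may interpolate or bucket differently, so treat this as one reasonable definition rather than the canonical one:

```javascript
// Sketch: nearest-rank percentile over logged interaction durations (ms).
function percentile(values, p) {
  if (values.length === 0) return undefined;
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Hypothetical week of logged durations for one interaction type:
const durations = [80, 120, 150, 200, 210, 250, 400, 90];
const p75 = percentile(durations, 75); // 210 under nearest-rank
```

Run the same computation on the pre-fix and post-fix datasets and compare the two p75 values, not just single worst-case samples.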

If you are using an A/B testing framework, you can run an experiment where half of the users see the old version and half see the new version. This gives you a clean comparison. However, ensure that the sample size is large enough to detect a statistically significant difference. A rule of thumb: at least 1000 interactions per variant for a metric as variable as INP. Also, control for device type, as a fix might work on desktop but not on mobile. Once you have confirmed improvement, document the change and the measured impact. This creates a knowledge base that helps your team avoid guessing in the future.

Comparing Diagnostic Approaches: DevTools, Lighthouse, and RUM

Each diagnostic tool has its strengths and weaknesses. Choosing the right one for your situation can save time and reduce guesswork. The comparison table near the end of this article compares three common approaches: Chrome DevTools (lab), Lighthouse (lab with scoring), and Web Vitals RUM (field).

Use this comparison to decide when to use each tool. For initial exploration, start with RUM to identify which pages and interactions have the highest INP. Then use DevTools to drill down into the specific long tasks and code paths. Finally, use Lighthouse to validate that your fix does not break other performance metrics. Avoid relying solely on one approach.

Step-by-Step Guide: From High INP to Fixed

Follow these steps to systematically improve your INP without guessing:

  1. Identify the worst interaction using RUM data. Note the page URL, interaction type, target element, and timing breakdown (input delay, processing, presentation).
  2. Replicate the interaction in a lab environment with Chrome DevTools, applying CPU throttling and network throttling to match a typical mid-range device. Record the interaction and inspect the flame chart.
  3. Isolate the bottleneck: Is the input delay high? Look for long tasks before the event handler starts. Is processing time high? Find the function consuming the most time. Is presentation delay high? Check for layout thrashing or repaint.
  4. Apply targeted fix: For long input delay, defer scripts or break up long tasks. For long processing, optimize the handler (e.g., cache results, reduce loops, use web workers). For high presentation delay, batch DOM updates, use compositor-friendly properties, and avoid forced sync layouts.
  5. Test the fix in the lab to ensure it reduces the targeted timing. Then deploy to a small percentage of users (canary release) and monitor RUM data for the next 48 hours.
  6. Measure and verify using the same RUM tool. Compare the new INP distribution, especially for the interaction you fixed. If improvement is confirmed, roll out to all users. If not, go back to step 2.

This process may seem time-consuming, but it eliminates guesswork. Each step builds on data rather than hunches. Over time, you can create a library of common fixes for your site's interaction patterns, speeding up future optimizations.

Real-World Scenarios: What Worked and What Didn't

Scenario A: The Hidden Third-Party Impact

A team noticed that their product listing page had a poor INP (400 ms) on mobile. Lab tests showed no obvious long tasks in their own code. Using RUM, they discovered the worst interaction was a tap on a filter button. The input delay was 250 ms. Investigation revealed that a third-party push notification script was running a long task every 30 seconds, and it happened to coincide with the button tap. The fix: defer the push script to load after the first paint and use `requestIdleCallback` for its periodic updates. INP dropped to 180 ms.

Scenario B: Processing Time from a Misguided Loop

Another team had a search input that showed autocomplete suggestions. Each keystroke triggered an event handler that filtered a large array of products. The processing time was 300 ms on a mid-range phone. Profiling showed that the `Array.filter` inside a loop was O(n*m) because it also rebuilt the DOM from scratch each time. The fix: debounce the input (wait 300 ms after the last keystroke), and use a virtual list to render only visible suggestions. Processing time dropped to 50 ms, and INP fell from 350 ms to 150 ms.
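The debounce from this scenario is only a few lines. The helper below is a standard pattern, not the team's exact code; the 300 ms wait matches the scenario:

```javascript
// Sketch: run the expensive filter only after the user has stopped
// typing for `wait` milliseconds, collapsing a burst of keystrokes
// into a single invocation.
function debounce(fn, wait) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// e.g. input.addEventListener('input', debounce(updateSuggestions, 300));
```

Debouncing alone does not shorten the filter itself; it just runs it far less often, which is why the team paired it with virtualized rendering.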

Scenario C: Presentation Delay from Forced Synchronous Layout

A third team's INP issue turned out to be entirely presentation delay. Their click handler added a CSS class that triggered a transition, and then immediately read `offsetHeight` to check the element's size. This forced a synchronous layout before the next paint. The fix: move the `offsetHeight` read to after the next animation frame. Presentation delay went from 200 ms to 20 ms. This scenario shows that even if your event handler is fast, you can still cause high INP by forcing layout calculations.

Frequently Asked Questions About INP

Q: What is a good INP threshold?

According to Google's guidelines, INP should be under 200 milliseconds for a good user experience. Between 200 and 500 ms is considered "needs improvement," and over 500 ms is poor. These thresholds apply to the 75th percentile of user interactions on a page.
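These thresholds translate directly into a small bucketing helper (the function name is illustrative):

```javascript
// Sketch: classify a p75 INP value using Google's published thresholds.
function classifyINP(p75Ms) {
  if (p75Ms <= 200) return 'good';
  if (p75Ms <= 500) return 'needs improvement';
  return 'poor';
}
```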

Q: Can I use Lighthouse to check INP?

Lighthouse does not directly measure INP because no real user interactions occur during its automated run. Its Total Blocking Time (TBT) metric is the usual lab proxy: the long tasks that inflate TBT are the same ones that delay real interactions. For accurate INP, use real-user monitoring.

Q: How do I debug INP on mobile devices?

Use Chrome DevTools' remote debugging to connect your phone to your desktop. Record interactions on the phone and inspect the flame chart on the desktop. Also, use RUM data segmented by device type to see mobile-specific patterns.

Q: Is INP a complete replacement for FID?

A: Yes. INP replaced FID as a Core Web Vital in March 2024. FID measured only the input delay of the first interaction; INP covers input delay, processing time, and presentation delay for interactions across the whole visit, so it surfaces problems FID could never see.

Comparison of diagnostic approaches:

| Approach | Best For | Limitations | Key Insight |
| --- | --- | --- | --- |
| Chrome DevTools (Performance panel) | Detailed flame chart analysis; identifying the exact line of code causing long tasks | Requires manual interaction; only captures one session; no aggregate data | Shows input delay, processing, and presentation breakdown per interaction |
| Lighthouse (lab) | Quick automated check; gives a score and diagnostic hints | Simulates a single device; may miss real-world interaction patterns; doesn't capture the worst interaction | Provides a list of long tasks and their source URLs |
| Web Vitals RUM (field) | Real-user data; captures the worst interaction; segmentable by device, browser, etc. | Requires instrumentation; aggregated data hides per-interaction detail; lower resolution than DevTools | Shows actual INP distribution and attribution (type, target, phases) |
