Interaction to Next Paint (INP) Optimization Guide

TL;DR: INP measures responsiveness by tracking ALL user interactions (clicks, taps, keypresses) throughout the page lifetime, replacing FID as the Core Web Vitals interactivity metric in March 2024. Target under 200ms. Optimize by breaking up JavaScript tasks longer than 50ms (yield with setTimeout or scheduler.yield()), keeping event handlers under 100ms of execution time, debouncing continuous inputs, preventing unnecessary React re-renders with React.memo/useMemo, and deferring third-party scripts. INP = input delay + processing time + presentation delay.

Interaction to Next Paint (INP) measures page responsiveness by capturing the latency of every user interaction throughout the entire page session. On March 12, 2024, Google replaced First Input Delay (FID) with INP as the official Core Web Vitals interactivity metric, fundamentally changing how interactivity is measured and optimized. Unlike FID, which only measured the first interaction’s input delay, INP evaluates all clicks, taps, and keypresses, measuring the complete interaction lifecycle from user input through event handler execution to visual update. This makes it a far more comprehensive and demanding metric than its predecessor.

According to web.dev (Google’s official performance documentation, updated October 2024), INP should be 200 milliseconds or less to provide a good user experience. Pages scoring 200-500ms need improvement, while anything over 500ms is considered poor. These thresholds represent the worst interaction (or near-worst for high-interaction pages) rather than averages, meaning a single slow interaction can ruin your INP score even if most interactions are fast. The business impact is substantial: research shows poor INP correlates with 20-30% higher bounce rates, while optimizing from “Poor” to “Good” can improve engagement metrics by 10-15% and indirectly boost search rankings through better Page Experience signals.

Executive Summary

For: Frontend developers managing JavaScript performance, React/Next.js engineers optimizing interactivity, performance teams improving Core Web Vitals scores, SEO professionals addressing ranking factors.

Core metric: INP measures complete interaction latency (input delay + processing + rendering) for ALL user interactions throughout page lifetime. Replaced FID March 12, 2024. Target under 200ms for “Good” rating, measured at 75th percentile of real users.

Primary causes: Long JavaScript tasks over 50ms blocking main thread, heavy event handler execution over 100ms, inefficient React re-renders, third-party scripts running during interactions, unoptimized DOM manipulation causing forced layouts, synchronous processing without yielding to browser.

Critical fixes: Break up long tasks by yielding to main thread (setTimeout or scheduler.yield()), keep event handlers under 100ms execution, use React.memo to prevent unnecessary re-renders, debounce input handlers (search, typing), defer third-party script initialization until after first interaction, batch DOM reads/writes to avoid layout thrashing.

Testing: Web Vitals JavaScript library for real user monitoring, Chrome DevTools Performance panel for interaction analysis, Google Search Console Core Web Vitals report for field data, Total Blocking Time (TBT) in Lighthouse as INP proxy during development.

Impact: INP is Google ranking factor affecting mobile and desktop search results since March 2024. Poor INP increases bounce rates (users perceive unresponsive pages), reduces conversions (frustration during checkout/forms), creates competitive disadvantage (competitors with better INP rank higher in ties). Research shows 100ms interaction improvement increases engagement 5-10%.

Effort: Initial audit 4-6 hours (identify long tasks, slow handlers), breaking up long tasks 8-16 hours depending on codebase complexity, event handler optimization 4-8 hours, React optimization 4-8 hours if applicable, third-party script management 2-4 hours, ongoing monitoring via Web Vitals library minimal once configured.

Quick Start: INP Optimization Workflow

When optimizing Interaction to Next Paint:

1. Measure Current INP

   Field data (real users):
   - Google Search Console > Experience > Core Web Vitals
   - Look for INP metric (replaced FID March 2024)
   - Shows mobile and desktop separately
   - Based on 75th percentile of real user interactions
   
   Lab data (synthetic test):
   - Lighthouse shows Total Blocking Time (TBT)
   - TBT correlates with INP but isn't an exact measurement
   - Use as diagnostic proxy during development
   
   Real User Monitoring (install Web Vitals library):
   npm install web-vitals
   
   import {onINP} from 'web-vitals';
   onINP(metric => {
     console.log('INP:', metric.value, 'ms');
     console.log('Rating:', metric.rating); // "good", "needs-improvement", "poor"
   });
   
   Target thresholds:
   - Good: 0-200ms (target for all pages)
   - Needs Improvement: 200-500ms (optimization needed)
   - Poor: Over 500ms (critical priority)
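Those thresholds can be expressed as a small helper for bucketing values in your own reporting (the function name here is illustrative; the rating strings match the ones the web-vitals library reports):

```javascript
// Map an INP value in milliseconds to its Core Web Vitals rating.
// Thresholds match the list above; rating strings mirror the
// "metric.rating" values from the web-vitals library.
function rateINP(valueMs) {
  if (valueMs <= 200) return 'good';
  if (valueMs <= 500) return 'needs-improvement';
  return 'poor';
}

console.log(rateINP(150)); // "good"
console.log(rateINP(350)); // "needs-improvement"
console.log(rateINP(650)); // "poor"
```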

2. Identify Long Tasks (Root Cause)

   Chrome DevTools Performance panel:
   - Open DevTools (F12) > Performance tab
   - Click record button (circle)
   - Interact with page (click buttons, type in inputs)
   - Stop recording after 10-15 seconds
   - Long tasks appear as RED TRIANGLES above timeline
   - Tasks over 50ms block main thread
   - Click triangle to see which JavaScript caused it
   
   JavaScript monitoring in production:
   const observer = new PerformanceObserver((list) => {
     for (const entry of list.getEntries()) {
       if (entry.duration > 50) {
         console.log('Long task detected:', {
           duration: entry.duration,
           startTime: entry.startTime,
           attribution: entry.attribution[0]?.name
         });
       }
     }
   });
   observer.observe({type: 'longtask', buffered: true});
   
   Common long task sources:
   - Heavy event handlers (click, input event processing)
   - Third-party scripts (analytics, ads, widgets)
   - React rendering entire component trees
   - Complex DOM manipulation
   - Parsing/executing large JavaScript bundles
   - Data processing (filtering, sorting large arrays)

3. Break Up Long Tasks (Most Important Fix)

   Problem: Tasks over 50ms block all interactions
   
   Solution: Yield to main thread periodically
   
   BAD (300ms long task blocks everything):
   function processItems(items) {
     items.forEach(item => {
       heavyProcessing(item); // 10ms each × 30 items = 300ms
     });
   }
   
   GOOD (yields to main thread, allows interactions):
   async function processItems(items) {
     for (const item of items) {
       heavyProcessing(item);
       await new Promise(resolve => setTimeout(resolve, 0));
     }
   }
   
   BETTER (scheduler.yield(), where supported; use the polyfill below elsewhere):
   async function processItems(items) {
     for (const item of items) {
       heavyProcessing(item);
       await scheduler.yield();
     }
   }
   
   Cross-browser polyfill:
   const yieldToMain = () => {
     if ('scheduler' in window && 'yield' in scheduler) {
       return scheduler.yield();
     }
     return new Promise(resolve => setTimeout(resolve, 0));
   };
   
   async function processItems(items) {
     for (const item of items) {
       heavyProcessing(item);
       await yieldToMain();
     }
   }
   
   Chunking strategy (yield every few items):
   async function processItems(items) {
     for (let i = 0; i < items.length; i += 5) {
       const chunk = items.slice(i, i + 5);
       chunk.forEach(item => heavyProcessing(item));
       await yieldToMain(); // Yield every 5 items
     }
   }
   
   Time-based yielding (adaptive):
   async function processItems(items) {
     let startTime = performance.now();
     
     for (const item of items) {
       heavyProcessing(item);
       
       // Yield if 45ms elapsed
       if (performance.now() - startTime > 45) {
         await yieldToMain();
         startTime = performance.now();
       }
     }
   }
   
   Rule: Keep individual tasks under 50ms by yielding

4. Optimize Event Handlers (Keep Under 100ms)

   Target: Handlers complete in under 100ms
   
   Provide immediate feedback before heavy work:
   
   BAD (250ms handler, no feedback):
   button.addEventListener('click', () => {
     const data = fetchData(); // 50ms
     const result = heavyCalculation(data); // 150ms
     updateUI(result); // 50ms
     // Total: 250ms, well over the 200ms target; user sees nothing
   });
   
   GOOD (5ms handler, immediate feedback):
   button.addEventListener('click', () => {
     showLoadingState(); // 5ms - instant visual feedback
     
     setTimeout(() => {
       const data = fetchData();
       const result = heavyCalculation(data);
       updateUI(result);
       hideLoadingState();
     }, 0);
     
     // Handler completes in 5ms, user sees immediate response
   });
   
   Debounce continuous input handlers:
   
   const debounce = (fn, delay) => {
     let timeout;
     return (...args) => {
       clearTimeout(timeout);
       timeout = setTimeout(() => fn(...args), delay);
     };
   };
   
   // Search input: wait 300ms after user stops typing
   searchInput.addEventListener('input', debounce((e) => {
     performSearch(e.target.value);
   }, 300));
   
   Without debounce: Handler runs 10+ times for 10-character query
   With debounce: Handler runs once after typing pause
   
   Throttle high-frequency events:
   
   const throttle = (fn, limit) => {
     let inThrottle;
     return (...args) => {
       if (!inThrottle) {
         fn(...args);
         inThrottle = true;
         setTimeout(() => inThrottle = false, limit);
       }
     };
   };
   
   // Scroll handler: max 10 executions per second
   window.addEventListener('scroll', throttle(() => {
     updateScrollIndicator();
   }, 100));

5. Fix DOM Manipulation Issues

   Batch reads and writes (avoid layout thrashing):
   
   BAD (each read after a write forces a synchronous layout):
   element1.style.width = '100px'; // Write
   const h1 = element1.offsetHeight; // Read - forces layout
   element2.style.height = h1 + 'px'; // Write
   const h2 = element2.offsetHeight; // Read - forces layout again
   
   GOOD (reads batched before writes, at most one forced layout):
   const h1 = element1.offsetHeight; // Read first
   const h2 = element2.offsetHeight; // Read first (layout still clean)
   element1.style.width = '100px'; // Then write
   element2.style.height = h1 + 'px'; // Then write
   
   Avoid layout properties in loops:
   
   BAD (layout thrashing):
   items.forEach(item => {
     item.style.left = item.offsetLeft + 10 + 'px'; // Read + write each iteration
   });
   
   GOOD (batch operations):
   const positions = items.map(item => item.offsetLeft); // Read all
   items.forEach((item, i) => {
     item.style.left = positions[i] + 10 + 'px'; // Write all
   });
   
   Properties and methods that force layout (avoid in handlers):
   - offsetHeight, offsetWidth, offsetTop, offsetLeft
   - scrollHeight, scrollWidth
   - clientHeight, clientWidth
   - getComputedStyle()
   - getBoundingClientRect()
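One way to enforce the read-then-write discipline is a tiny batching queue in the spirit of libraries like fastdom. This is a sketch with invented names (scheduleRead, scheduleWrite, flush); a real implementation would flush inside requestAnimationFrame, while here flush() is manual so the example runs anywhere:

```javascript
// Minimal read/write batching queue (fastdom-style sketch).
// All queued reads run before all queued writes, so layout is
// forced at most once per flush instead of once per read.
const readQueue = [];
const writeQueue = [];

function scheduleRead(fn) { readQueue.push(fn); }
function scheduleWrite(fn) { writeQueue.push(fn); }

// A real implementation would call flush() inside
// requestAnimationFrame; manual here for portability.
function flush() {
  readQueue.splice(0).forEach(fn => fn());  // reads first
  writeQueue.splice(0).forEach(fn => fn()); // then writes
}

// Usage, with a plain object standing in for a DOM element:
const el = { offsetHeight: 40, style: {} };
let measured;
scheduleRead(() => { measured = el.offsetHeight; });
scheduleWrite(() => { el.style.height = measured + 10 + 'px'; });
flush();
console.log(el.style.height); // "50px"
```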

6. Optimize React Re-Renders

   Problem: Entire component tree re-renders on state change
   
   Use React.memo to prevent unnecessary re-renders:
   const ExpensiveComponent = React.memo(function ExpensiveComponent(props) {
     // Only re-renders when props actually change
     return <div>{/* Complex rendering logic */}</div>;
   });
   
   Use useMemo for expensive calculations:
   function Component({items}) {
     const processed = useMemo(
       () => expensiveCalculation(items),
       [items] // Only recalculate when items change
     );
     return <List data={processed} />;
   }
   
   Use useCallback for stable event handlers:
   const handleClick = useCallback(() => {
     doSomething(id);
   }, [id]); // Stable reference unless id changes
   
   Use Transitions for non-urgent updates (React 18+):
   import {useTransition} from 'react';
   
   const [isPending, startTransition] = useTransition();
   
   const handleUpdate = (value) => {
     setInputValue(value); // Urgent - immediate
     
     startTransition(() => {
       setResults(search(value)); // Can be interrupted by new input
     });
   };
   
   Lazy load heavy components:
   const HeavyModal = lazy(() => import('./HeavyModal'));
   
   {showModal && (
     <Suspense fallback={<Loading />}>
       <HeavyModal />
     </Suspense>
   )}

7. Delay Third-Party Scripts

   Problem: Third-party scripts run long tasks during interactions
   
   Delay initialization until after page interactive:
   let scriptsLoaded = false;
   
   setTimeout(() => {
     if (!scriptsLoaded) {
       loadGoogleTagManager();
       loadAnalytics();
       loadChatWidget();
       scriptsLoaded = true;
     }
   }, 3000); // Wait 3 seconds
   
   Or wait for first user interaction:
   ['mousedown', 'touchstart', 'keydown'].forEach(event => {
     document.addEventListener(event, () => {
       if (!scriptsLoaded) {
         loadThirdPartyScripts();
         scriptsLoaded = true;
       }
     }, {once: true, passive: true});
   });
   
   Use facades for heavy embeds:
   
   <div class="youtube-facade" data-id="VIDEO_ID">
     <img src="thumbnail.jpg" alt="Video thumbnail">
     <button class="play-button">▶ Play</button>
   </div>
   
   document.querySelectorAll('.youtube-facade').forEach(facade => {
     facade.addEventListener('click', function() {
       const videoId = this.dataset.id;
       const iframe = document.createElement('iframe');
       iframe.src = `https://www.youtube.com/embed/${videoId}?autoplay=1`;
       this.replaceWith(iframe);
     });
   });
   
   Saves 500KB+ JavaScript per YouTube embed

8. Use Web Workers for Heavy Processing

   Offload computation to background thread:
   
   Main thread (stays responsive):
   const worker = new Worker('processor.js');
   
   button.addEventListener('click', () => {
     showLoadingSpinner();
     worker.postMessage({data: largeDataset});
   });
   
   worker.onmessage = (e) => {
     hideLoadingSpinner();
     displayResults(e.data);
   };
   
   Worker thread (processor.js):
   onmessage = (e) => {
     const result = heavyCalculation(e.data.data);
     postMessage(result);
   };
   
   Use Web Workers for:
   - Data processing (filtering, sorting, aggregating)
   - Complex calculations
   - Image manipulation
   - JSON parsing of large responses
   - Any CPU-intensive work

9. Monitor INP with Attribution

   Install Web Vitals library:
   npm install web-vitals
   
   Basic monitoring:
   import {onINP} from 'web-vitals';
   
   onINP(metric => {
     console.log('INP:', metric.value);
     
     // Send to analytics
     gtag('event', 'web_vitals', {
       event_label: 'INP',
       value: Math.round(metric.value),
       metric_rating: metric.rating
     });
   });
   
   Attribution for debugging (identify slow interactions):
   import {onINP} from 'web-vitals/attribution';
   
   onINP(metric => {
     console.log('Slow interaction details:', {
       value: metric.value,
       element: metric.attribution.interactionTarget, // Which element
       type: metric.attribution.interactionType, // click, keypress, etc.
       inputDelay: metric.attribution.inputDelay,
       processingDuration: metric.attribution.processingDuration,
       presentationDelay: metric.attribution.presentationDelay
     });
   });
   
   Attribution reveals:
   - Which specific element caused slow interaction
   - Which phase is bottleneck (input/processing/presentation)
   - When it occurred (during load, after interaction, etc.)

10. Test Interactions in DevTools

    Manual interaction testing:
    - Chrome DevTools > Performance panel
    - Click record
    - Perform interactions (clicks, typing, navigation)
    - Stop recording
    - Look for "Event: click" or "Event: input" in timeline
    - Measure time from event to next paint
    - Should be under 200ms total
    
    Check for long tasks:
    - Red triangles above 50ms indicate blocking
    - Click triangle to see call stack
    - Identify which JavaScript file/function caused it
    
    Test on slow devices:
    - DevTools > Performance > CPU throttling
    - Set to "4x slowdown" or "6x slowdown"
    - Simulates low-end mobile devices
    - INP issues more pronounced on slow hardware
    
    Lighthouse audit:
    - Run Lighthouse in DevTools
    - Check Total Blocking Time (TBT)
    - TBT under 200ms correlates with good INP
    - Use as development proxy (not exact INP)

11. Framework-Specific Quick Fixes

    Next.js:
    - Use dynamic imports for heavy components
    import dynamic from 'next/dynamic';
    const Heavy = dynamic(() => import('./Heavy'), {
      loading: () => <Skeleton />
    });
    
    - Enable React Strict Mode (catches issues)
    - Use Server Components for heavy processing
    
    WordPress:
    - Disable unnecessary plugins
    - Use lightweight theme (GeneratePress, Kadence)
    - Defer jQuery and heavy theme scripts
    - Minimize admin-ajax.php usage during interactions
    
    React apps:
    - Use React DevTools Profiler to find slow renders
    - Implement code splitting (React.lazy)
    - Use Transitions for non-urgent updates
    - Memoize expensive components and calculations
    
    Vue 3:
    - Use async components for heavy features
    - Implement Suspense for loading states
    - Use computed properties for caching
    
    Angular:
    - Lazy load modules
    - Use OnPush change detection strategy
    - Avoid unnecessary zone.js triggers

12. Common Mistakes to Avoid

    DON'T ignore long tasks:
    ❌ "My bundle is small, so INP is fine"
    ✓ Small bundles can still have inefficient handlers
    
    DON'T only optimize first interaction:
    ❌ "First click is fast, that's enough"
    ✓ INP measures ALL interactions, not just first
    
    DON'T defer everything without measurement:
    ❌ Add setTimeout to everything blindly
    ✓ Measure which code actually causes long tasks
    
    DON'T forget mobile devices:
    ❌ Test only on desktop with fast CPU
    ✓ Mobile devices have slower CPUs, test on real devices
    
    DON'T assume framework handles it:
    ❌ "React automatically optimizes"
    ✓ Must use React.memo, useMemo, useCallback explicitly

Priority actions:

  1. Identify long tasks with PerformanceObserver (Chrome DevTools)
  2. Break up tasks over 50ms (yield to main thread)
  3. Optimize event handlers (keep under 100ms)
  4. Monitor with Web Vitals library (real user data)

Understanding INP: What Changed from FID and Why It Matters

Interaction to Next Paint represents a fundamental shift in measuring page responsiveness, replacing First Input Delay on March 12, 2024 after a 10-month transition period announced in May 2023.

What INP measures:

INP captures the complete interaction lifecycle for every user interaction during page lifetime. For each click, tap, or keypress, INP measures three distinct phases:

Input delay – Time from user action until event handler begins executing. This delay occurs when the main thread is busy processing other JavaScript tasks, making the browser unable to immediately respond to user input. The browser queues the interaction until the current task completes.

Processing time – Duration of event handler execution. Includes all synchronous JavaScript code running in response to the interaction: calculating values, manipulating DOM, updating application state, triggering framework re-renders. Often the largest contributor to slow INP.

Presentation delay – Time from handler completion until browser paints next frame showing visual feedback. Encompasses style calculation, layout computation, paint operations, and composite steps needed to render the updated page state.

The sum of these three phases represents a single interaction’s total latency. INP reports the worst (or near-worst) interaction experienced during the entire page session, making it dramatically more comprehensive than FID’s single-interaction snapshot.

Critical differences from First Input Delay:

FID (deprecated March 12, 2024):

  • Measured only the FIRST interaction on page
  • Captured only input delay phase (ignored processing and rendering)
  • Threshold: 100ms for “Good” (much more lenient)
  • Many sites passed FID easily despite poor actual interactivity
  • Missed 99% of user interactions on typical pages

INP (current Core Web Vitals metric):

  • Measures ALL interactions throughout entire page lifetime
  • Captures complete latency (delay + processing + rendering)
  • Threshold: 200ms for “Good” (more demanding, accounts for full cycle)
  • Focuses on worst interactions, not just first or average
  • Reflects real user frustration with unresponsive interfaces

Example demonstrating the fundamental difference:

A page receives 50 user interactions during session:

  • First interaction: 80ms total latency (would pass FID with “Good” 100ms threshold)
  • Interactions 2-48: 50-150ms (fast, responsive)
  • Interaction 49: 650ms (very slow – complex filter calculation blocking main thread)
  • Interaction 50: 180ms (acceptable)

FID score: 80ms (Good) – Only measured first interaction, missed the problem
INP score: 650ms (Poor) – Reports the 98th percentile, which captures the worst 2% = the 650ms interaction

This example illustrates why Google replaced FID. The FID score suggested excellent interactivity while users actually experienced a severely unresponsive interaction that damaged their perception of site quality. FID’s narrow measurement scope masked widespread responsiveness problems.

Why the change matters for web development:

Users don’t just interact once with pages. Modern web applications involve continuous interaction: clicking navigation menus, typing in search boxes, selecting dropdown options, submitting forms, filtering product lists, opening modals. Every interaction must feel instant. A single unresponsive button click creates frustration regardless of how fast the page initially loaded or how responsive other interactions were.

Google’s Chrome User Experience Report (CrUX) data from millions of real users revealed the problem: many sites with good FID scores had terrible real-world interactivity because FID’s limited scope completely missed the interactions where users experienced lag. Sites optimized for FID by ensuring the first button clicked quickly but ignored the performance of subsequent interactions. INP fixes this by measuring what users actually experience throughout their session.

INP calculation methodology:

INP doesn’t report average interaction speed or even the single slowest interaction for all pages. Instead, it uses a percentile-based calculation that adapts to interaction count:

For pages with 10 or fewer interactions: INP equals the longest single interaction duration

For pages with 11-50 interactions: INP equals the 99th percentile (worst 1% of interactions)

For pages with over 50 interactions: INP equals the 98th percentile (worst 2% of interactions)

This percentile-based approach prevents single anomalous interactions from unfairly penalizing sites while still capturing consistently poor experiences. It also prevents sites with long user sessions (SPAs, infinite scroll pages) from accumulating unlimited poor scores.
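The tiers above can be sketched as a function (a simplification for illustration; Chrome computes INP from interaction events rather than a plain array of durations, so treat this as an approximation of the described behavior):

```javascript
// Approximate INP from a list of interaction durations (ms),
// following the percentile tiers described above.
function computeINP(durations) {
  if (durations.length === 0) return undefined;
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  if (durations.length <= 10) return sorted[0]; // longest interaction
  // Skip roughly the worst 1% (11-50 interactions) or 2% (over 50).
  const pct = durations.length <= 50 ? 0.01 : 0.02;
  const skip = Math.floor(durations.length * pct);
  return sorted[skip];
}

// 50 interactions, one very slow: the worst is still reported.
const fiftyInteractions = [650, ...Array(49).fill(100)];
console.log(computeINP(fiftyInteractions)); // 650
```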

Long sessions (no session windows):

Unlike CLS, which groups layout shifts into session windows (maximum 5 seconds, with 1-second gaps between shifts), INP does not use session windows. Every interaction across the entire page lifetime is observed, and the percentile-based calculation above is what handles pages where users remain active for minutes or hours.

Practical example:

E-commerce product page during a 2-minute browsing session has three activity bursts:

  • Burst 1 (0-3 seconds): 3 interactions with latencies 50ms, 30ms, 20ms – worst 50ms
  • Burst 2 (8-10 seconds): 2 interactions with latencies 150ms, 80ms – worst 150ms
  • Burst 3 (15-16 seconds): 1 interaction with latency 50ms

INP = 150ms, the single worst interaction across the whole session (with only 6 interactions, INP is simply the longest one).

Result: “Good” rating (under 200ms threshold).

INP thresholds:

Good: 0-200 milliseconds – Excellent responsiveness. Users perceive instant feedback. Page feels snappy and professional. This is the target for all production sites.

Needs Improvement: 200-500 milliseconds – Acceptable but noticeable lag. Users detect slight delay but page remains usable. Should optimize to reach “Good” for competitive advantage.

Poor: Over 500 milliseconds – Unacceptable responsiveness. Users perceive page as sluggish, broken, or hanging. High bounce rate risk. Critical priority to fix.

Google measures INP at the 75th percentile of page visits (consistent with all Core Web Vitals metrics). This means 75% of real users must experience INP within “Good” threshold for the page to be considered performant by Google’s standards.
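A simplified sketch of that 75th-percentile aggregation over per-visit INP values (field tools like CrUX differ in detail; the nearest-rank method used here is just one convention):

```javascript
// Summarize per-visit INP values (ms) the way field tooling does:
// take the 75th percentile, then compare against the thresholds.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil(sorted.length * p) - 1; // nearest-rank method
  return sorted[Math.max(0, index)];
}

// 8 visits: 6 fast, 2 slow.
const visitINPs = [90, 110, 120, 130, 150, 180, 420, 700];
const p75 = percentile(visitINPs, 0.75);
console.log(p75); // 180 - under 200ms, so the page rates "Good"
```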

Why INP matters for SEO and business:

Direct ranking factor: As part of Core Web Vitals since June 2021 (FID) and now INP since March 2024, this metric affects Google search rankings. The impact is described as “lightweight” compared to content quality and relevance but serves as a tiebreaker when multiple pages have similar content value. In competitive queries with otherwise comparable content, better INP can tip the ranking in your favor.

Mobile-first indexing priority: Google primarily uses mobile page performance for rankings. Mobile devices have slower processors than desktop, making INP optimization more challenging and more important. A page that passes INP on desktop but fails on mobile will see ranking impacts on both mobile and desktop search results.

User experience correlation: Poor INP directly causes measurable business harm:

  • Higher bounce rates (users abandon unresponsive pages within seconds)
  • Lower conversion rates (frustrated users abandon purchases, form submissions)
  • Reduced engagement (users avoid interacting with slow interfaces, reducing page depth)
  • Negative brand perception (sluggish sites perceived as low-quality or outdated)

Business impact data: Research from Google and web performance studies shows:

  • 100ms INP improvement correlates with 5-10% engagement increase (more clicks, longer sessions)
  • E-commerce sites see direct revenue correlation with interaction responsiveness
  • Mobile users particularly sensitive to delays (higher abandonment on poor INP)
  • B2B lead generation forms with poor INP see 15-25% lower completion rates

Real-world scenario:

Online retailer with product filtering feature:

  • Before INP optimization: Filter button interactions took 800ms average. Users clicked filter, waited, often clicked again thinking first click failed, causing confusion. Many users abandoned product search entirely, reverting to site search or leaving.
  • After optimization (breaking up long tasks, optimizing handlers): Filter interactions under 150ms. Immediate visual feedback. Filter usage increased 23%, product views per session increased 15%, and conversion rate on filtered searches improved 8%.

The INP transition from FID fundamentally changes interactivity optimization from “make the first click fast” to “make every interaction instant throughout the entire session,” requiring comprehensive JavaScript performance optimization across all event handlers and background tasks rather than superficial first-load improvements.

Measuring INP: Tools and Techniques

Accurately measuring INP requires understanding field data versus lab data, real user monitoring versus synthetic testing, and the appropriate tools for each context.

Web Vitals JavaScript library (primary measurement):

The official web-vitals library from Google provides the most accurate real user measurement:

import {onINP} from 'web-vitals';

onINP(metric => {
  console.log('INP value:', metric.value, 'ms');
  console.log('Rating:', metric.rating); // "good", "needs-improvement", "poor"
  console.log('Navigation type:', metric.navigationType); // navigate, reload, etc.
  
  // Send to analytics for aggregation
  gtag('event', 'web_vitals', {
    event_category: 'Web Vitals',
    event_label: 'INP',
    value: Math.round(metric.value),
    metric_id: metric.id,
    metric_rating: metric.rating,
    non_interaction: true
  });
});

Installation:

npm install web-vitals

This library measures actual INP experienced by real users in production. It reports the worst (or near-worst) interaction exactly as Google measures it for Core Web Vitals. Data collected matches what appears in Google Search Console and Chrome User Experience Report.

Attribution API for debugging:

When INP score is poor, attribution reveals exactly which interactions are slow and why:

import {onINP} from 'web-vitals/attribution';

onINP(metric => {
  const attribution = metric.attribution;
  
  console.log('INP Details:', {
    // Overall metrics
    totalLatency: metric.value,
    rating: metric.rating,
    
    // Interaction identification
    interactionTarget: attribution.interactionTarget, // DOM element selector
    interactionType: attribution.interactionType, // "pointer", "keyboard"
    interactionTime: attribution.interactionTime, // When it occurred
    
    // Phase breakdown (identifies bottleneck)
    inputDelay: attribution.inputDelay, // Time waiting for handler to start
    processingDuration: attribution.processingDuration, // Handler execution time
    presentationDelay: attribution.presentationDelay, // Rendering time
    
    // Context
    loadState: attribution.loadState // "loading", "dom-interactive", "complete"
  });
  
  // Send detailed data for analysis
  if (metric.value > 200) {
    fetch('/api/slow-interactions', {
      method: 'POST',
      body: JSON.stringify({
        inp: metric.value,
        element: attribution.interactionTarget,
        type: attribution.interactionType,
        inputDelay: attribution.inputDelay,
        processing: attribution.processingDuration,
        presentation: attribution.presentationDelay
      }),
      keepalive: true
    });
  }
});

Attribution reveals the bottleneck phase. Examples:

High input delay (over 100ms): Main thread blocked by long task. Solution: Break up tasks.

High processing duration (over 200ms): Event handler too heavy. Solution: Optimize handler logic, defer work.

High presentation delay (over 100ms): Rendering bottleneck. Solution: Avoid forced layouts, optimize CSS.
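Those rules of thumb can be folded into a small triage helper for a monitoring pipeline (the function name and exact thresholds are illustrative, taken from the guidance above):

```javascript
// Given an attribution phase breakdown for a slow interaction,
// name the dominant phase and a matching optimization strategy.
// Thresholds follow the rules of thumb above.
function diagnoseINP({ inputDelay, processingDuration, presentationDelay }) {
  if (inputDelay > 100) {
    return { phase: 'input-delay', fix: 'break up long tasks' };
  }
  if (processingDuration > 200) {
    return { phase: 'processing', fix: 'optimize or defer handler work' };
  }
  if (presentationDelay > 100) {
    return { phase: 'presentation', fix: 'avoid forced layouts, simplify CSS' };
  }
  return { phase: 'none', fix: 'all phases within budget' };
}

console.log(
  diagnoseINP({ inputDelay: 180, processingDuration: 40, presentationDelay: 20 }).phase
); // "input-delay"
```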

Google Search Console (field data):

Core Web Vitals report provides real user data from Chrome User Experience Report:

  1. Open Google Search Console
  2. Navigate to Experience > Core Web Vitals
  3. View Mobile and Desktop reports separately
  4. INP replaced FID in March 2024

Report structure:

  • Green (Good): INP under 200ms
  • Yellow (Needs Improvement): INP 200-500ms
  • Red (Poor): INP over 500ms

Each status shows:

  • Number of URLs affected
  • Example URLs experiencing issue
  • Impressions impacted
  • Trend over time (improving/worsening)

Data source: Chrome User Experience Report (CrUX) aggregates data from millions of real Chrome users who have opted in to usage statistics. Represents actual user experiences on real devices over real networks.

Update frequency: Daily updates with 28-day rolling window. After fixing INP issues, click “Validate fix” to request recrawl. Full validation takes 28 days as Google accumulates new field data.

Chrome DevTools Performance panel:

Most detailed single-interaction analysis:

Recording interactions:

  1. Open Chrome DevTools (F12)
  2. Navigate to Performance panel
  3. Click record button (circle icon) or Cmd/Ctrl + E
  4. Interact with page: click buttons, type in inputs, use navigation
  5. Stop recording after 10-15 seconds
  6. DevTools captures complete main thread activity

Analyzing results:

Interaction markers: Look for “Event: click”, “Event: input”, “Event: keydown” in timeline

Long tasks: Red triangles appear above main thread for tasks over 50ms. Click triangle to jump to that task in timeline.

Call stack: Select any task to see bottom-up and call tree views showing which functions consumed time.

Screenshots: Enable screenshots (gear icon > Screenshots) to see visual progression during interaction.

Measuring interaction latency:

  1. Find “Event: click” marker in timeline
  2. Find next “Paint” event after that click
  3. Time between = complete INP for that interaction
  4. Should be under 200ms

Identifying bottlenecks:

  • Long yellow/orange blocks = JavaScript execution
  • Purple blocks = Rendering (style recalculation and layout)
  • Green blocks = Painting
  • Look for where most time is spent

Total Blocking Time in Lighthouse (lab proxy):

Lighthouse cannot measure INP directly (it requires real user interaction). Instead it reports Total Blocking Time (TBT), which correlates with INP:

TBT definition: Sum of blocking time for all long tasks (over 50ms) during page load

Calculation: For each task over 50ms, blocking time = duration minus 50ms

  • Task 1: 80ms → blocks 30ms
  • Task 2: 120ms → blocks 70ms
  • Task 3: 60ms → blocks 10ms
  • TBT = 110ms
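The blocking-time arithmetic above can be sketched as a pure function (the name `totalBlockingTime` is illustrative):

```javascript
// Sum of (duration - 50ms) over all tasks longer than 50ms,
// mirroring the worked example above.
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .filter(d => d > 50)
    .reduce((sum, d) => sum + (d - 50), 0);
}

// totalBlockingTime([80, 120, 60]) → 110 (30 + 70 + 10)
```

Tasks of 50ms or less contribute nothing, which is why many short tasks are preferable to one long one.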

TBT thresholds:

  • Good: 0-200ms
  • Needs Improvement: 200-600ms
  • Poor: Over 600ms

Relationship to INP: Lower TBT usually indicates better INP potential. Sites with high TBT (many long tasks during load) typically have poor INP (long tasks block interactions). However, TBT doesn’t capture post-load interaction handlers, so it’s an incomplete proxy.

Use TBT for: Development and pre-production testing when real user data not yet available. Quick regression detection in CI/CD pipelines.

PerformanceObserver for production monitoring:

Monitor long tasks in production to identify patterns:

if (window.PerformanceObserver) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // entry.duration = task length in ms
      // entry.startTime = when task started
      // entry.attribution = what caused it
      
      console.log('Long task:', {
        duration: entry.duration,
        start: entry.startTime,
        attribution: entry.attribution[0]?.name || 'unknown',
        containerType: entry.attribution[0]?.containerType
      });
      
      // Alert on very long tasks
      if (entry.duration > 200) {
        console.warn('CRITICAL: Very long task blocking interactions');
      }
    }
  });
  
  observer.observe({type: 'longtask', buffered: true});
}

Captures all tasks over 50ms as they occur. Use to identify which scripts or handlers create long tasks in production.

Browser support: Chrome 58+, Edge 79+. Not supported in Firefox or Safari (use Web Vitals library for those).

Real User Monitoring (RUM) setup:

Comprehensive production monitoring:

import {onINP, onLCP, onCLS, onFCP, onTTFB} from 'web-vitals';

// Collect all Core Web Vitals
function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    delta: metric.delta,
    id: metric.id,
    navigationType: metric.navigationType,
    page: window.location.pathname,
    userAgent: navigator.userAgent,
    timestamp: Date.now()
  });
  
  // Use sendBeacon for reliability (works even on page unload)
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/api/metrics', body);
  } else {
    fetch('/api/metrics', {
      method: 'POST',
      body,
      keepalive: true
    });
  }
}

onINP(sendToAnalytics);
onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
onFCP(sendToAnalytics);
onTTFB(sendToAnalytics);

Backend stores metrics in database for analysis:

  • 75th percentile calculation (matches Google)
  • Trend analysis over time
  • Segmentation by page, device, country
  • Alerting when metrics degrade
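A backend aggregating these metrics typically reduces them to the 75th percentile. As a hedged sketch (using the simple nearest-rank method; other interpolation choices are equally valid):

```javascript
// Illustrative nearest-rank percentile, as a backend might use to
// reduce collected INP samples to a p75 value like Google reports.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```

For example, with samples [100, 150, 200, 600], the p75 is 200: one slow outlier does not dominate, but a quarter of slow interactions would.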

Testing on slow devices:

Mobile devices have significantly slower processors than desktop, making INP issues more pronounced:

CPU throttling in DevTools:

  1. Open Performance panel
  2. Click gear icon (⚙️)
  3. Select CPU throttling: “4x slowdown” or “6x slowdown”
  4. Record interactions
  5. Simulates low-end mobile device performance

Real device testing (recommended):

Test on actual budget Android devices (under $200):

  • Samsung Galaxy A series
  • Motorola Moto G series
  • Budget devices from Xiaomi, Oppo

These devices represent the majority of mobile web users globally and have CPUs 4-10x slower than flagship phones or desktops. INP issues invisible on fast hardware become obvious on budget devices.

Systematic measurement combining real user monitoring (Web Vitals library, GSC field data) with lab testing (DevTools Performance panel, Lighthouse TBT) provides complete picture of INP performance, enabling data-driven optimization priorities and validating improvements before production deployment.

INP Sub-Components: Input Delay, Processing, and Presentation

Breaking INP into three distinct phases enables targeted diagnosis and optimization by identifying which specific phase causes slow interactions.

Phase 1: Input delay (target: under 50ms):

Input delay measures time from user action (click, tap, keypress) until the event handler begins executing.

What causes input delay:

Main thread blocking: When JavaScript executes continuously for over 50ms (a “long task”), the browser cannot respond to new inputs. User interactions queue up, waiting for the current task to complete. During a 300ms long task, a user click can experience up to 300ms of input delay, depending on when during the task it arrives.

Framework initialization: React, Vue, Angular apps running large initialization tasks after page load create windows where interactions are blocked. Hydration (attaching event listeners to server-rendered HTML) can block for 200-500ms on slow devices.

Third-party scripts: Analytics tracking, ad network scripts, and social media widgets often run synchronous code that blocks input processing. Google Tag Manager initialization, Facebook Pixel loading, or chat widget setup can each create 50-100ms blocking periods.

JavaScript parsing and compilation: When browsers download large JavaScript bundles, parsing and compiling them blocks the main thread before code executes. A 500KB bundle might require 200-400ms parsing on mobile devices.

Measuring input delay:

Web Vitals attribution shows input delay component:

import {onINP} from 'web-vitals/attribution';

onINP(metric => {
  if (metric.attribution.inputDelay > 50) {
    console.warn('High input delay detected:', {
      delay: metric.attribution.inputDelay,
      element: metric.attribution.interactionTarget,
      // Main thread was blocked when user interacted
    });
  }
});

Chrome DevTools Performance timeline shows gaps between user event marker and handler execution start, visualizing input delay duration.

Solutions for high input delay:

  • Break up long tasks (covered in next section)
  • Defer heavy framework initialization until after first paint
  • Delay third-party script loading
  • Code split large bundles
  • Use Web Workers for CPU-intensive background processing

Phase 2: Processing time (target: under 100ms):

Processing time measures event handler execution duration from start to finish. This is often the largest contributor to slow INP.

What causes long processing time:

Heavy DOM manipulation: Reading layout properties (offsetHeight, getBoundingClientRect) forces synchronous layout calculations. Immediately writing styles after reading forces additional layouts. This “layout thrashing” can consume 100-300ms in handlers.

// BAD: Layout thrashing: the read after a write forces a synchronous layout
element.style.width = '200px'; // Write - invalidates layout
const height = element.offsetHeight; // Read - forces layout calculation
element.style.height = height + 'px'; // Write - forces another layout

Complex calculations: Synchronous processing of large datasets, nested loops, complex algorithms, or recursive functions in event handlers blocks the main thread. Filtering 10,000 products, calculating complex totals, or processing form validation rules can take 200-500ms.

Framework re-rendering: React re-rendering 100+ components, Vue reactivity triggering cascading updates, or Angular change detection cycles running across entire component tree. Without memoization, a single state change can trigger 200-400ms of re-rendering work.

Multiple event handlers: When multiple handlers respond to single event (event bubbling, multiple addEventListener calls), their combined execution time counts toward processing duration. Five handlers each taking 40ms = 200ms total processing time.

Synchronous API calls: Blocking operations in handlers like synchronous localStorage access, IndexedDB queries without async, or (rarely) synchronous XMLHttpRequest. Each synchronous operation blocks until complete.

Measuring processing time:

import {onINP} from 'web-vitals/attribution';

onINP(metric => {
  if (metric.attribution.processingDuration > 100) {
    console.warn('Slow event handler:', {
      processing: metric.attribution.processingDuration,
      element: metric.attribution.interactionTarget,
      type: metric.attribution.interactionType,
      // Handler is too heavy, needs optimization
    });
  }
});

DevTools Performance panel shows exact duration of event handler execution under “Event: click” or “Event: input” entries. Expand the event to see call stack and identify which functions consumed time.

Solutions for long processing time:

  • Provide immediate feedback, defer heavy work with setTimeout
  • Optimize handler logic (reduce calculations, simplify operations)
  • Use React.memo, useMemo, useCallback to prevent unnecessary work
  • Batch DOM operations (all reads first, then all writes)
  • Move heavy processing to Web Workers
  • Break processing into chunks with yielding

Phase 3: Presentation delay (target: under 50ms):

Presentation delay measures time from handler completion until the browser paints the next frame showing visual feedback to user.

What causes long presentation delay:

Forced synchronous layouts: After handler modifies DOM, reading layout properties forces immediate layout calculation before next scheduled paint:

// BAD: Forces immediate layout
button.addEventListener('click', () => {
  element.style.width = '200px'; // Modifies DOM
  const height = element.offsetHeight; // Forces layout NOW
  // Browser must calculate layout synchronously
});

Complex CSS selectors: Deeply nested selectors (.parent .child .grandchild .element), universal selectors (*), or pseudo-selectors like :nth-child require expensive style recalculation across large DOM trees.

Large DOM size: Pages with thousands of DOM elements (5,000+ nodes) require longer layout and paint times even for small changes. Each style change potentially affects entire tree.

Heavy paint operations: CSS properties requiring expensive painting: complex box-shadows, multiple gradients, backdrop filters, blend modes. These increase paint time from 5-10ms to 50-100ms.

Layout-triggering animations: Animating properties that affect layout (width, height, top, left, margin, padding) forces layout recalculation every frame. Use transform and opacity instead (composited properties that don’t trigger layout).

Measuring presentation delay:

import {onINP} from 'web-vitals/attribution';

onINP(metric => {
  if (metric.attribution.presentationDelay > 50) {
    console.warn('Rendering bottleneck:', {
      presentationDelay: metric.attribution.presentationDelay,
      // Rendering is the problem, optimize CSS/DOM
    });
  }
});

DevTools Performance shows time between handler completion and “Paint” event in timeline. Purple blocks represent rendering work (style, layout, paint).

Solutions for long presentation delay:

  • Avoid reading layout properties after modifying styles
  • Batch DOM reads before writes (all offsetHeight reads first, then style writes)
  • Simplify CSS selectors
  • Reduce DOM size (virtualize long lists, lazy load off-screen content)
  • Animate only transform and opacity (GPU-accelerated, no layout)
  • Use CSS containment (contain: layout paint) to isolate rendering work

Budget allocation for 200ms target:

To achieve “Good” INP rating (under 200ms total), allocate time budget across phases:

Theoretical equal distribution:

  • Input delay: 67ms (33% of budget)
  • Processing time: 67ms (33% of budget)
  • Presentation delay: 66ms (33% of budget)

Realistic distribution with safety margin:

  • Input delay: 30ms (main thread mostly available)
  • Processing time: 80ms (largest variable, most optimization needed)
  • Presentation delay: 40ms (efficient rendering)
  • Buffer: 50ms (accounts for real-world variance, measurement error)

This buffer accounts for device variability, network conditions, browser differences, and ensures consistent “Good” ratings across 75th percentile of users.
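As a minimal sketch, the headroom against the 200ms target can be computed from a measured interaction's phases (`inpHeadroom` is a hypothetical helper; the phase names match web-vitals attribution):

```javascript
// Hypothetical helper: remaining headroom against the 200ms "Good" target
// for one interaction, using phase names from web-vitals attribution.
function inpHeadroom({inputDelay, processingDuration, presentationDelay}, target = 200) {
  return target - (inputDelay + processingDuration + presentationDelay);
}

// Realistic distribution above: 30 + 80 + 40 leaves 50ms of buffer
// inpHeadroom({inputDelay: 30, processingDuration: 80, presentationDelay: 40}) → 50
```

A negative result means the interaction is already over budget and at least one phase needs targeted optimization.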

Diagnostic workflow:

When INP score is poor, use attribution to identify bottleneck phase and apply targeted fixes:

Example: Poor INP = 450ms

Attribution shows:

Input delay: 180ms (40% of total)
Processing: 220ms (49% of total)
Presentation: 50ms (11% of total)

Diagnosis: Both input delay and processing time are problems. Input delay indicates long task blocking interaction start. Processing time indicates inefficient handler.

Solution priority:

  1. Identify and break up long task causing input delay (use PerformanceObserver)
  2. Optimize event handler to reduce processing time (profile with DevTools)
  3. Presentation delay acceptable (no action needed)

Example 2: Good overall, one phase bad

INP = 190ms (just under 200ms threshold)

Input delay: 20ms (good)
Processing: 150ms (concerning, close to limit)
Presentation: 20ms (good)

Diagnosis: Processing time at upper limit, vulnerable to regression.

Solution: Profile handler in DevTools, optimize calculations, consider deferring non-critical work.

Breaking INP into measurable sub-parts transforms vague “slow interactions” into specific, actionable problems with clear optimization targets, measurable improvements, and predictable outcomes across different device capabilities and network conditions.

Long Tasks: The Root Cause of Poor INP

Long tasks—JavaScript executing continuously for more than 50 milliseconds—represent the primary cause of poor INP scores. Understanding detection, analysis, and breaking strategies is essential for responsive interactions.

Why 50ms matters:

Humans perceive delays over 100ms as noticeable lag. To keep interactions feeling instant, browsers need time budget for complete cycle:

  • Detect user input: 5-10ms
  • Run event handler: 40-50ms
  • Render visual feedback: 10-20ms
  • Total: ~100ms for smooth interaction

Long tasks consuming 50ms+ leave insufficient remaining time, causing perceptible lag. A 300ms long task during user interaction can add up to 300ms of input delay before the handler even starts, easily blowing past INP’s 200ms target.

Detecting long tasks:

PerformanceObserver API (production monitoring):

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('Long task detected:', {
      duration: entry.duration, // Task length in ms
      startTime: entry.startTime, // When it started
      attribution: entry.attribution[0]?.name || 'unknown', // What caused it
      containerType: entry.attribution[0]?.containerType
    });
    
    // Send to analytics for production tracking
    if (entry.duration > 100) {
      gtag('event', 'critical_long_task', {
        duration: Math.round(entry.duration),
        script: entry.attribution[0]?.name || 'unknown'
      });
    }
  }
});

observer.observe({type: 'longtask', buffered: true});

Captures all tasks over 50ms as they occur in production. Reveals which scripts or functions create blocking behavior under real user conditions.

Browser support: Chrome 58+, Edge 79+. Not available in Firefox or Safari (use Web Vitals library for comprehensive monitoring).

Chrome DevTools Performance panel:

Visual identification during development:

  1. Record Performance trace while interacting with page
  2. Long tasks appear as red triangles above main thread timeline
  3. Hover over triangle to see duration
  4. Click triangle to jump to that task
  5. Select task to see call stack (Bottom-Up and Call Tree views)
  6. Identify which functions consumed most time

Red triangles provide instant visual feedback during development: “Does this interaction create long tasks?”

Common long task sources:

1. Heavy event handlers:

// BAD: 300ms long task blocks all interactions
button.addEventListener('click', () => {
  const items = Array(1000).fill(0).map((_, i) => {
    return processComplexCalculation(i); // 0.3ms × 1000 = 300ms
  });
  updateUI(items);
  // Main thread blocked for 300ms
});

A single synchronous loop processing 1000 items creates one uninterruptible task. Any user interactions during this 300ms queue up, experiencing up to 300ms of input delay.

2. Third-party scripts:

Scripts you don’t control often cause long tasks:

  • Google Tag Manager: Container initialization (50-150ms), tag execution on events
  • Google Analytics / GA4: Event tracking processing, session tracking calculations
  • Facebook Pixel: Conversion tracking, event queue processing
  • Ad networks: Ad auction calculations (100-300ms), creative rendering
  • Chat widgets (Intercom, Drift, Zendesk): Widget initialization, message loading, WebSocket connections
  • A/B testing tools (Optimizely, VWO): Variation selection, DOM manipulation for tests

Each runs JavaScript you cannot optimize directly, but you can control when they load.

3. Framework rendering:

React re-rendering 100+ components synchronously:

// BAD: Triggers 200ms re-render of entire component tree
function handleFilterChange(newFilter) {
  setFilter(newFilter); // React re-renders entire product list
  // 200+ components re-render synchronously = 200ms long task
}

Without memoization, single state change cascades through component tree.

4. Data processing:

Synchronous operations on large datasets:

  • Parsing large JSON responses (500KB+ JSON = 100-200ms parse time)
  • Filtering/sorting arrays with thousands of items
  • Complex calculations (totals, aggregations, statistical operations)
  • String manipulation on large text

Breaking up long tasks:

Core strategy: Yield to main thread periodically, allowing browser to process pending user inputs between chunks of work.

setTimeout approach (universal browser support):

// BAD: 300ms uninterruptible long task
function processItems(items) {
  items.forEach(item => {
    heavyProcessing(item); // 10ms per item
  });
  // Blocks for 300ms total
}

// GOOD: Yields every 5 items
async function processItems(items) {
  for (let i = 0; i < items.length; i++) {
    heavyProcessing(items[i]);
    
    // Yield every 5 items
    if (i % 5 === 0) {
      await new Promise(resolve => setTimeout(resolve, 0));
      // Main thread available for ~5ms to process inputs
    }
  }
  // Same total time, but interruptible
}

setTimeout(fn, 0) yields control to browser. Browser processes any pending inputs (clicks, keypresses), renders frames, then resumes your code. Total processing time same or slightly longer, but users can interact during processing.

scheduler.yield() (modern approach – Chrome 129+, Safari 18.2+):

async function processItems(items) {
  for (let i = 0; i < items.length; i++) {
    heavyProcessing(items[i]);
    
    // Yield with scheduler API
    if (i % 5 === 0) {
      await scheduler.yield();
    }
  }
}

Benefits over setTimeout:

  • Browser determines optimal yield timing (smarter scheduling)
  • Prioritizes pending user inputs over background tasks
  • More efficient continuation (less overhead)
  • Better integration with browser’s task scheduler

Cross-browser polyfill:

const yieldToMain = () => {
  if ('scheduler' in window && 'yield' in scheduler) {
    return scheduler.yield(); // Use modern API if available
  }
  return new Promise(resolve => setTimeout(resolve, 0)); // Fallback
};

async function processItems(items) {
  for (let i = 0; i < items.length; i++) {
    heavyProcessing(items[i]);
    if (i % 5 === 0) {
      await yieldToMain();
    }
  }
}

Works across all browsers with graceful degradation.

Determining optimal yield frequency:

Too frequent yielding:

  • Overhead from task switching dominates
  • Overall processing takes longer
  • Example: Yielding after every 1ms operation wastes more time switching than processing

Too infrequent yielding:

  • Long tasks still occur between yields
  • Interactions still blocked
  • Example: Yielding every 100 items with 10ms each = 1000ms tasks

Optimal approach – time-based yielding:

async function processItems(items) {
  let startTime = performance.now();
  
  for (let i = 0; i < items.length; i++) {
    heavyProcessing(items[i]);
    
    // Yield if 45ms elapsed
    if (performance.now() - startTime > 45) {
      await yieldToMain();
      startTime = performance.now(); // Reset timer
    }
  }
}

Adapts to actual processing time rather than arbitrary item counts. Keeps tasks under 50ms regardless of how long each item takes to process.

Rule of thumb: Target 40-45ms chunks. Leaves 5-10ms safety margin below 50ms threshold.
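Assuming per-item costs are known (or measured) up front, the chunking rule can be sketched as a pure function; `chunkByBudget` is an illustrative name, and in real code a yield would happen at each chunk boundary:

```javascript
// Illustrative: group per-item costs (ms) into chunks that each stay
// within the ~45ms budget; a yield would occur between chunks.
function chunkByBudget(costsMs, budgetMs = 45) {
  const chunks = [[]];
  let elapsed = 0;
  for (const cost of costsMs) {
    const current = chunks[chunks.length - 1];
    if (elapsed + cost > budgetMs && current.length > 0) {
      chunks.push([cost]); // budget exceeded: start a new chunk
      elapsed = cost;
    } else {
      current.push(cost);
      elapsed += cost;
    }
  }
  return chunks;
}

// Six 10ms items fit 4 per chunk under a 45ms budget:
// chunkByBudget([10, 10, 10, 10, 10, 10]) → [[10, 10, 10, 10], [10, 10]]
```

In practice per-item cost varies, which is why the time-based yielding loop above (checking performance.now() against the budget) is the more robust form of the same idea.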

requestIdleCallback for background work:

For truly low-priority work that can wait:

function processInBackground(items) {
  let currentIndex = 0;
  
  function processChunk(deadline) {
    // Process until deadline reached or items exhausted
    while (deadline.timeRemaining() > 0 && currentIndex < items.length) {
      heavyProcessing(items[currentIndex]);
      currentIndex++;
    }
    
    // Schedule next chunk if more items remain
    if (currentIndex < items.length) {
      requestIdleCallback(processChunk);
    }
  }
  
  requestIdleCallback(processChunk);
}

requestIdleCallback runs work only when main thread is idle, never blocking interactions. Use for non-urgent processing: analytics aggregation, prefetching, cache warming, lazy initialization of non-critical features.

Not suitable for: Anything user expects immediate response to (search results, form validation, UI updates).

Web Workers for CPU-intensive tasks:

Offload heavy processing entirely to background thread:

Main thread (stays 100% responsive):

const worker = new Worker('processor.js');

button.addEventListener('click', () => {
  showLoadingSpinner(); // Immediate visual feedback
  
  worker.postMessage({
    type: 'process',
    data: largeDataset
  });
});

worker.onmessage = (e) => {
  hideLoadingSpinner();
  displayResults(e.data); // Show results when ready
};

Worker thread (processor.js – runs in background):

onmessage = (e) => {
  if (e.data.type === 'process') {
    // Heavy processing here doesn't block main thread
    const result = heavyCalculation(e.data.data);
    postMessage(result);
  }
};

Benefits:

  • Main thread stays 100% responsive (never blocks)
  • No need to break up tasks or yield
  • Utilizes multiple CPU cores on user’s device
  • Can process for seconds without impacting INP

Limitations:

  • Cannot access DOM (no document, no window)
  • Communication overhead (serialize data to send between threads)
  • Additional complexity (separate file, message passing)
  • Some APIs not available in workers

Use Web Workers for:

  • Data processing (filtering, sorting, aggregating large datasets)
  • Complex calculations (physics simulations, cryptography, compression)
  • Image manipulation (resizing, filtering, format conversion)
  • Parsing large responses (JSON parsing of 1MB+ responses)
  • Any CPU-intensive work that doesn’t need DOM access

Code splitting to reduce initial long tasks:

Large JavaScript bundles create long parsing/compilation tasks. Split code:

Webpack dynamic imports:

// Instead of importing everything upfront
import {HeavyFeature} from './heavy-feature'; // Loads immediately

// Load on demand
button.addEventListener('click', async () => {
  const {HeavyFeature} = await import('./heavy-feature'); // Loads when needed
  HeavyFeature.initialize();
});

Next.js dynamic imports:

import dynamic from 'next/dynamic';

const HeavyComponent = dynamic(() => import('./HeavyComponent'), {
  loading: () => <Spinner />,
  ssr: false // Don't server-render if not needed
});

Smaller initial bundles = fewer/shorter parsing long tasks = better initial responsiveness = room for user interactions.

Measuring improvement:

After implementing yielding strategies, compare before and after:

Before optimization:

  • Long tasks: 5 tasks over 200ms, 12 tasks over 50ms
  • Worst task: 450ms
  • INP: 380ms (Poor)
  • User experience: Buttons feel unresponsive, clicks sometimes ignored

After optimization:

  • Long tasks: 0 tasks over 50ms (all broken up with yielding)
  • All tasks: under 45ms through systematic yielding
  • INP: 165ms (Good)
  • User experience: Instant button response, smooth interactions

Breaking up long tasks through yielding, Web Workers, and code splitting transforms unresponsive pages into instant, satisfying interactions by ensuring the main thread remains available to handle user input at all times, regardless of what background processing occurs.

Optimizing Event Handlers and JavaScript Execution

Event handler efficiency directly determines processing time, typically the largest contributor to poor INP. Optimization requires minimizing execution time and deferring non-critical work.

Keep handlers under 100ms (target: 50-80ms):

Processing time budget for “Good” INP is roughly 100ms (within 200ms total including input delay and presentation). Handlers should complete well under this to account for other phases.

Immediate feedback pattern:

Provide instant visual response before heavy work:

// BAD: No feedback for 300ms
button.addEventListener('click', () => {
  const result = heavyCalculation(); // 280ms
  updateUI(result); // 20ms
  // User sees nothing for 300ms, unsure if click registered
});

// GOOD: Immediate feedback, defer heavy work
button.addEventListener('click', () => {
  showLoadingState(); // 5ms - instant visual feedback (spinner, disabled state)
  
  setTimeout(() => {
    const result = heavyCalculation(); // 280ms (separate task)
    updateUI(result); // 20ms
    hideLoadingState();
  }, 0);
  
  // Handler completes in 5ms
  // User sees immediate response even though processing takes same total time
});

Psychological benefit: User perceives instant response even though actual work takes same duration. Immediate feedback signal (“I heard your click”) dramatically improves perceived responsiveness.

Debouncing for continuous input:

Limit handler execution frequency on events that fire repeatedly:

function debounce(func, wait) {
  let timeout;
  return function executedFunction(...args) {
    clearTimeout(timeout);
    timeout = setTimeout(() => func(...args), wait);
  };
}

// Search as user types
const searchInput = document.querySelector('#search');

searchInput.addEventListener('input', debounce((e) => {
  performSearch(e.target.value); // Only runs after 300ms pause in typing
}, 300));

Without debouncing: Handler runs on every keystroke: six times while typing “laptop” (“l”, “la”, “lap”, “lapt”, “lapto”, “laptop”). Each execution creates a 50-100ms task. Total: 300-600ms blocking.

With debouncing: Handler runs once after the user stops typing for 300ms. One 50-100ms task. Total: 50-100ms blocking.

Common debounce use cases:

  • Search inputs (wait for typing pause before querying)
  • Form validation (validate after editing stops)
  • Window resize handlers (wait for resize complete)
  • Auto-save features (save after editing pause)
  • Any input where only final value matters

Important: Don’t debounce clicks/taps. Those need immediate response. Debounce only continuous events like input, scroll, resize.

Throttling for high-frequency events:

Limit execution rate to maximum frequency:

function throttle(func, limit) {
  let inThrottle;
  return function(...args) {
    if (!inThrottle) {
      func.apply(this, args);
      inThrottle = true;
      setTimeout(() => inThrottle = false, limit);
    }
  };
}

// Scroll handler: limit to 10 executions per second
window.addEventListener('scroll', throttle(() => {
  updateScrollIndicator();
  checkLazyLoadImages();
}, 100)); // 100ms = 10/second max

Without throttling, scroll events fire 60+ times per second during smooth scroll (every frame). With throttling, runs maximum 10 times per second regardless of scroll speed.

Use throttling for: Scroll handlers, mouse move tracking, window resize (if continuous updates needed), real-time data visualization updates.
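The rate math above can be checked with a small model of throttle behavior. This counter (`countThrottledCalls` is an illustrative name) approximates the setTimeout-based lockout in the throttle shown earlier as “at most one call per limit window”:

```javascript
// Illustrative: count how many calls a throttle with the given limit
// lets through for events at the given timestamps (ms).
function countThrottledCalls(eventTimesMs, limitMs) {
  let last = -Infinity;
  let count = 0;
  for (const t of eventTimesMs) {
    if (t - last >= limitMs) {
      count++;
      last = t; // lockout window starts at this call
    }
  }
  return count;
}

// 60 scroll events over one second (one per ~16.7ms) with a 100ms limit
// pass roughly 10 calls, matching the "10/second max" comment above.
```

This is a model, not the throttle itself: the real implementation releases the lockout asynchronously, but the throughput bound is the same.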

Efficient DOM manipulation:

Avoid layout thrashing (interleaving reads and writes):

// BAD: Each read after a write forces a synchronous layout (2 forced layouts)
element1.style.width = '100px'; // Write - invalidates layout
const h1 = element1.offsetHeight; // Read - forces layout calculation
element2.style.height = h1 + 'px'; // Write - invalidates layout
const h2 = element2.offsetHeight; // Read - forces another layout calculation

// GOOD: Batch reads, then batch writes (at most 1 forced layout)
const h1 = element1.offsetHeight; // Read first
const h2 = element2.offsetHeight; // Read first (layout still clean)
element1.style.width = '100px'; // Then write
element2.style.height = h1 + 'px'; // Then write

Layout-triggering properties (avoid in loops and handlers):

  • offsetHeight, offsetWidth, offsetTop, offsetLeft
  • scrollHeight, scrollWidth, scrollTop, scrollLeft
  • clientHeight, clientWidth
  • getComputedStyle() on any element
  • getBoundingClientRect()

Reading these after modifying styles forces synchronous layout calculation. In loops, this creates “layout thrashing” consuming 100-300ms.

Use DocumentFragment for batch insertions:

// BAD: 100 DOM insertions trigger 100 reflows
items.forEach(item => {
  const div = document.createElement('div');
  div.textContent = item.name;
  container.appendChild(div); // Reflow each append
});

// GOOD: Single insertion triggers 1 reflow
const fragment = document.createDocumentFragment();
items.forEach(item => {
  const div = document.createElement('div');
  div.textContent = item.name;
  fragment.appendChild(div); // No reflow (not in DOM yet)
});
container.appendChild(fragment); // Single reflow

React-specific optimizations:

Prevent unnecessary re-renders with React.memo:

// BAD: Component re-renders on every parent render
function ExpensiveList({items}) {
  // Expensive rendering logic
  return <ul>{items.map(item => <li key={item.id}>{item.name}</li>)}</ul>;
}

// GOOD: Memoized component
const ExpensiveList = React.memo(function ExpensiveList({items}) {
  // Only re-renders when items prop actually changes
  return <ul>{items.map(item => <li key={item.id}>{item.name}</li>)}</ul>;
});

React.memo prevents re-renders when props haven’t changed (shallow comparison by default). Dramatically reduces wasted rendering work.

Cache expensive calculations with useMemo:

function ProductList({products, filter}) {
  // BAD: Filters on every render (expensive for 1000+ products)
  // const filtered = products.filter(p => p.category === filter);
  
  // GOOD: Memoized filtering
  const filtered = useMemo(
    () => products.filter(p => p.category === filter),
    [products, filter] // Only recalculate when these change
  );
  
  return <List items={filtered} />;
}

Without useMemo: Filtering runs on every render (parent state change, sibling re-render, etc.). With useMemo: Filtering runs only when products or filter actually change.

Stable event handlers with useCallback:

function ProductCard({id, onDelete}) {
  // BAD: New function every render (causes child re-renders)
  // const handleClick = () => onDelete(id);
  
  // GOOD: Stable function reference
  const handleClick = useCallback(
    () => onDelete(id),
    [id, onDelete] // Only recreate when dependencies change
  );
  
  return <button onClick={handleClick}>Delete</button>;
}

Without useCallback, child components receive new function reference every render, triggering unnecessary re-renders even when memoized.

React Transitions for non-urgent updates (React 18+):

import {useState, useTransition} from 'react';

function SearchResults() {
  const [query, setQuery] = useState('');
  const [results, setResults] = useState([]);
  const [isPending, startTransition] = useTransition();
  
  const handleSearch = (value) => {
    setQuery(value); // Urgent - immediate input update
    
    startTransition(() => {
      // Non-urgent - can be interrupted by new input
      setResults(performSearch(value));
    });
  };
  
  return (
    <div>
      <input 
        value={query} 
        onChange={(e) => handleSearch(e.target.value)} 
      />
      {isPending && <Spinner />}
      <Results items={results} />
    </div>
  );
}

Without transitions: Every keystroke blocks until search completes and results render (200-400ms per keystroke on large datasets).

With transitions: Input stays responsive, search results update when CPU available. New keystrokes interrupt pending result rendering.

Lazy load heavy components:

import {lazy, Suspense, useState} from 'react';

// Load component only when needed
const HeavyModal = lazy(() => import('./HeavyModal'));

function App() {
  const [showModal, setShowModal] = useState(false);
  
  return (
    <div>
      <button onClick={() => setShowModal(true)}>Open Modal</button>
      {showModal && (
        <Suspense fallback={<Loading />}>
          <HeavyModal />
        </Suspense>
      )}
    </div>
  );
}

Heavy components don’t block initial page responsiveness. Load when actually needed.

Vue 3 optimizations:

v-once for static content:

<template>
  <!-- Renders once, never updates -->
  <div v-once>
    <h1>{{ staticTitle }}</h1>
    <p>{{ staticContent }}</p>
  </div>
</template>

Computed properties for caching:

computed: {
  filteredItems() {
    // Cached until dependencies change
    return this.items.filter(i => i.active);
  }
}

Async components:

import {defineAsyncComponent} from 'vue';

const HeavyComponent = defineAsyncComponent(() =>
  import('./HeavyComponent.vue')
);

Angular optimizations:

OnPush change detection:

import {ChangeDetectionStrategy, Component} from '@angular/core';

@Component({
  selector: 'app-component',
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class AppComponent {
  // Only checks when inputs change or events fire
}

Reduces unnecessary change detection cycles across component tree.

Lazy load modules:

const routes: Routes = [
  {
    path: 'heavy',
    loadChildren: () => import('./heavy/heavy.module')
      .then(m => m.HeavyModule)
  }
];

Optimizing event handlers through immediate feedback patterns, debouncing/throttling, efficient DOM manipulation, and framework-specific techniques (React.memo, useMemo, useCallback, Transitions) keeps processing time under the 100ms target, ensuring fast INP scores across all user interactions.

Third-Party Scripts and Framework-Specific Strategies

Third-party scripts and framework overhead are among the biggest causes of poor INP, running long tasks that block user interactions.

Third-party script problems:

Scripts you don’t control often create long tasks during page lifetime:

Google Tag Manager: Container initialization (50-150ms parsing configuration), tag execution on page events, data layer processing for analytics.

Analytics (Google Analytics, Facebook Pixel): Event tracking queues processing (30-80ms), session tracking calculations, conversion tracking scripts.

Chat widgets (Intercom, Drift, Zendesk): Widget initialization (100-200ms), message history loading, WebSocket connection establishment, real-time synchronization.

Ad networks: Ad auction calculations (100-300ms depending on bidders), creative rendering and display, viewability tracking scripts running continuously.

A/B testing tools (Optimizely, VWO): Variation selection algorithms, DOM manipulation to apply test changes, goal tracking and analytics integration.

Each runs JavaScript during page lifetime, creating long tasks that coincide with user interactions, directly increasing INP.

Delayed loading strategy:

Don’t initialize third-party scripts immediately on page load:

<!-- DON'T: Load immediately (blocks early interactions) -->
<script src="https://www.googletagmanager.com/gtag/js?id=GA_ID"></script>

// DO: Delay until page interactive
let scriptsLoaded = false;

setTimeout(() => {
  if (!scriptsLoaded) {
    loadGoogleTagManager();
    loadGoogleAnalytics();
    loadFacebookPixel();
    loadChatWidget();
    scriptsLoaded = true;
  }
}, 3000); // Wait 3 seconds after page load

Benefits: Early interactions (first 3 seconds) experience no third-party interference. Scripts load when user already engaged.

Load on first user interaction:

let scriptsLoaded = false;

const interactionEvents = ['mousedown', 'touchstart', 'keydown'];

interactionEvents.forEach(event => {
  document.addEventListener(event, () => {
    if (!scriptsLoaded) {
      loadThirdPartyScripts();
      scriptsLoaded = true;
    }
  }, {once: true, passive: true});
});

// Fallback: Load after 5 seconds if no interaction
setTimeout(() => {
  if (!scriptsLoaded) {
    loadThirdPartyScripts();
    scriptsLoaded = true; // Prevent double-loading on a later interaction
  }
}, 5000);

Reasoning: User’s first interaction triggers loading. Subsequent interactions happen after scripts initialized. Balances analytics completeness with performance.

Facade pattern for heavy embeds:

Replace heavy widgets with lightweight previews, load real widget on user interaction:

YouTube embed facade:

<div class="youtube-facade" data-video-id="dQw4w9WgXcQ">
  <img src="https://img.youtube.com/vi/dQw4w9WgXcQ/maxresdefault.jpg" alt="Video thumbnail">
  <button class="play-button">▶</button>
</div>

<style>
.youtube-facade {
  position: relative;
  cursor: pointer;
}
.play-button {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
  font-size: 48px;
  background: rgba(0,0,0,0.7);
  color: white;
  border: none;
  border-radius: 50%;
  width: 80px;
  height: 80px;
}
</style>

<script>
document.querySelectorAll('.youtube-facade').forEach(facade => {
  facade.addEventListener('click', function() {
    const videoId = this.dataset.videoId;
    
    // Create real YouTube iframe
    const iframe = document.createElement('iframe');
    iframe.src = `https://www.youtube.com/embed/${videoId}?autoplay=1`;
    iframe.allow = 'autoplay; encrypted-media';
    iframe.width = '560';
    iframe.height = '315';
    iframe.style.border = 'none'; // frameborder attribute is deprecated
    
    // Replace facade with real iframe
    this.replaceWith(iframe);
  });
});
</script>

Saves 500KB+ JavaScript per YouTube embed. No iframe, no YouTube player API, no tracking scripts until user actually wants to watch video.

Twitter/X embed facade:

<div class="twitter-facade" data-tweet-id="123456">
  <img src="tweet-screenshot.jpg" alt="Tweet preview">
  <button>Load Tweet</button>
</div>

<script>
document.querySelectorAll('.twitter-facade button').forEach(button => {
  button.addEventListener('click', function() {
    const facade = this.parentElement;
    const tweetId = facade.dataset.tweetId;
    
    // Load Twitter widget script
    const script = document.createElement('script');
    script.src = 'https://platform.twitter.com/widgets.js';
    script.onload = () => {
      facade.innerHTML = ''; // Remove the preview before injecting the tweet
      window.twttr.widgets.createTweet(tweetId, facade);
    };
    document.body.appendChild(script);
  });
});
</script>

Web Workers for analytics processing:

Offload analytics to background thread so tracking never blocks interactions:

Main thread:

const analyticsWorker = new Worker('analytics-worker.js');

// Send events to worker (non-blocking)
function trackEvent(eventName, eventData) {
  analyticsWorker.postMessage({
    type: 'track',
    event: eventName,
    data: eventData,
    timestamp: Date.now()
  });
}

// Event handlers remain instant
button.addEventListener('click', () => {
  trackEvent('button_click', {id: 'cta-button'});
  // Handler completes immediately, no analytics blocking
});

Worker thread (analytics-worker.js):

let eventQueue = [];

onmessage = (e) => {
  if (e.data.type === 'track') {
    eventQueue.push({
      event: e.data.event,
      data: e.data.data,
      timestamp: e.data.timestamp
    });
    
    // Process queue without blocking main thread
    processQueue();
  }
};

function processQueue() {
  // Heavy analytics processing here
  // Batch events, compress data, send to backend
  // Main thread stays responsive
}

Partytown (third-party scripts in Web Workers):

Library that automatically relocates third-party scripts to Web Workers:

<!-- Include Partytown library -->
<script src="https://cdn.jsdelivr.net/npm/@builder.io/partytown/lib/partytown.js"></script>

<!-- Third-party scripts run in worker, not main thread -->
<script type="text/partytown">
  // Google Tag Manager
  (function(w,d,s,l,i){...})(window,document,'script','dataLayer','GTM-XXXX');
  
  // Google Analytics
  gtag('config', 'GA-XXXX');
  
  // Facebook Pixel
  fbq('init', 'PIXEL_ID');
</script>

Partytown intercepts DOM access from worker scripts, keeping main thread free. Works with most analytics and tracking scripts.

Limitations: Some scripts don’t work in workers (those requiring synchronous DOM access). Test thoroughly.
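Scripts that call main-thread globals (dataLayer.push, fbq, gtag) need those calls forwarded into the worker. Partytown's documented forward config handles this; a sketch, where the exact globals to forward depend on which scripts you run:

```html
<script>
  // Partytown config must run before the partytown.js snippet loads.
  // Calls to these main-thread globals are forwarded into the worker.
  partytown = {
    forward: ['dataLayer.push', 'fbq', 'gtag']
  };
</script>
```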

Next.js optimizations:

Script component with strategy:

import Script from 'next/script';

export default function Page() {
  return (
    <>
      {/* Defer until page interactive */}
      <Script
        src="https://www.googletagmanager.com/gtag/js"
        strategy="lazyOnload"
      />
      
      {/* Or use Partytown (Web Worker) */}
      <Script
        src="https://connect.facebook.net/en_US/sdk.js"
        strategy="worker"
      />
      
      {/* Critical scripts only */}
      <Script
        src="/critical-feature.js"
        strategy="afterInteractive"
      />
    </>
  );
}

Dynamic imports for heavy components:

import dynamic from 'next/dynamic';

const HeavyComponent = dynamic(() => import('./Heavy'), {
  loading: () => <Skeleton />,
  ssr: false // Don't server-render if not needed
});

Server Components (App Router):

// Heavy processing on server, sends HTML to client
async function ServerComponent() {
  const data = await fetchHeavyData(); // Runs on server
  const processed = expensiveCalculation(data); // Runs on server
  
  return <Display data={processed} />; // Client receives HTML
  // No client-side JavaScript for this component
}

Moves processing from client (affects INP) to server (doesn’t affect INP).

Vue 3 strategies:

Async components:

import {defineAsyncComponent} from 'vue';

const HeavyComponent = defineAsyncComponent(() =>
  import('./HeavyComponent.vue')
);

Suspense for loading states:

<Suspense>
  <template #default>
    <HeavyComponent />
  </template>
  <template #fallback>
    <Loading />
  </template>
</Suspense>

Angular strategies:

Lazy load modules:

const routes: Routes = [
  {
    path: 'heavy',
    loadChildren: () => import('./heavy/heavy.module')
      .then(m => m.HeavyModule)
  }
];

OnPush change detection:

import {ChangeDetectionStrategy, Component} from '@angular/core';

@Component({
  selector: 'app-component',
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class AppComponent {}

Reduces unnecessary change detection cycles that can coincide with interactions.

Measuring third-party impact:

Long task attribution:

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const attribution = entry.attribution[0];
    // Long task attribution is frame-level: containerSrc identifies the
    // iframe (e.g. an ad or widget frame) that ran the long task
    const source = attribution?.containerSrc || '';
    
    if (source.includes('google') || 
        source.includes('facebook') ||
        source.includes('twitter')) {
      console.warn('Third-party long task:', {
        source: source,
        duration: entry.duration
      });
    }
  }
});

observer.observe({type: 'longtask', buffered: true});

Chrome DevTools third-party filtering:

Performance panel shows “Third-party” badge on tasks from external scripts. Use filter to isolate third-party impact.

Lighthouse third-party audit:

“Reduce the impact of third-party code” diagnostic shows:

  • Time spent executing third-party scripts
  • Main thread blocking time attributed to third-parties
  • Specific third-party domains causing issues

Aggressively delaying third-party initialization (3-5 seconds or first interaction), using facades for heavy embeds (YouTube, social), offloading analytics to Web Workers, and applying framework-specific code splitting together prevent third-party code from dominating interaction processing time and blocking user responsiveness.

Testing, Monitoring, and Troubleshooting INP

Systematic testing and continuous monitoring ensure INP remains within “Good” threshold as code evolves and new features ship.

Web Vitals library implementation (production monitoring):

Most reliable real user measurement:

import {onINP} from 'web-vitals';

onINP(metric => {
  console.log('INP:', metric.value, 'ms');
  console.log('Rating:', metric.rating); // "good", "needs-improvement", "poor"
  
  // Send to analytics backend
  fetch('/api/metrics', {
    method: 'POST',
    body: JSON.stringify({
      metric: 'INP',
      value: metric.value,
      rating: metric.rating,
      id: metric.id,
      page: window.location.pathname,
      userAgent: navigator.userAgent,
      timestamp: Date.now()
    }),
    keepalive: true // Ensures delivery even on page unload
  });
});

Attribution for production debugging:

When INP scores degrade, attribution reveals specific problem interactions:

import {onINP} from 'web-vitals/attribution';

onINP(metric => {
  // Only log/report slow interactions
  if (metric.value > 200) {
    console.warn('Slow interaction detected:', {
      value: metric.value,
      element: metric.attribution.interactionTarget, // CSS selector
      interactionType: metric.attribution.interactionType,
      inputDelay: metric.attribution.inputDelay,
      processingDuration: metric.attribution.processingDuration,
      presentationDelay: metric.attribution.presentationDelay,
      loadState: metric.attribution.loadState
    });
    
    // Send detailed report for slow interactions
    fetch('/api/slow-interactions', {
      method: 'POST',
      body: JSON.stringify({
        inp: metric.value,
        element: metric.attribution.interactionTarget,
        phase_breakdown: {
          input: metric.attribution.inputDelay,
          processing: metric.attribution.processingDuration,
          presentation: metric.attribution.presentationDelay
        }
      }),
      keepalive: true
    });
  }
});

Backend aggregates this data to identify:

  • Which pages have worst INP
  • Which specific elements/interactions are slow
  • Which phase (input/processing/presentation) is bottleneck
  • Trends over time (improving/degrading)

Google Search Console monitoring:

Core Web Vitals report shows field data from real users:

  1. Open Google Search Console
  2. Navigate to Experience > Core Web Vitals
  3. View Mobile and Desktop reports separately

The report has shown INP (replacing FID) since March 12, 2024.

Report categories:

  • Good (green): URLs with INP under 200ms
  • Needs Improvement (yellow): INP 200-500ms
  • Poor (red): INP over 500ms

Click any category to see:

  • Example URLs experiencing that INP range
  • Number of impressions affected
  • Trend graph (improving/stable/worsening)

After fixing INP issues:

  1. Click “Validate fix” button
  2. Google recrawls URLs over 28 days
  3. Status updates as new field data confirms improvement
  4. Full validation takes 28 days (rolling window of user data)

Chrome DevTools interaction analysis:

Most detailed single-interaction debugging:

Recording interactions:

  1. Open DevTools (F12) > Performance panel
  2. Click record button or press Cmd/Ctrl + E
  3. Interact with page: click buttons, type in inputs, use navigation
  4. Stop recording after capturing problematic interaction
  5. DevTools captures complete main thread activity

Analyzing recorded trace:

Find interaction event: Look for “Event: click”, “Event: input”, or “Event: keydown” markers in timeline

Measure total latency: Time from event marker to next “Paint” event = that interaction’s complete latency (what INP reports if this is the page’s worst interaction)

Identify phases:

  • Gap before handler = input delay (long tasks blocking)
  • Handler duration = processing time (JavaScript execution)
  • Time after handler to paint = presentation delay (rendering)

Long task identification: Red triangles above 50ms indicate blocking tasks. Click triangle to:

  • See task duration
  • View call stack (which functions ran)
  • Examine Bottom-Up view (which functions consumed most time)
  • Identify culprit script/library

Screenshots: Enable screenshots (gear icon > Screenshots) to see visual progression during interaction, confirming when user sees feedback.

Lighthouse Total Blocking Time (TBT) proxy:

Lighthouse can’t measure INP directly (requires real user interaction). TBT provides development proxy:

Run Lighthouse:

  1. Open DevTools > Lighthouse panel
  2. Select “Performance” category
  3. Choose “Mobile” or “Desktop”
  4. Click “Analyze page load”

TBT interpretation:

  • TBT measures sum of blocking time during page load
  • Correlates with INP: low TBT usually means good INP potential
  • Not exact measure: doesn’t capture post-load interaction handlers

TBT targets:

  • Good: 0-200ms
  • Needs Improvement: 200-600ms
  • Poor: Over 600ms

Use TBT for: CI/CD performance budgets, pre-production testing, development regression detection.

Common INP problems and solutions:

Problem 1: High input delay (over 100ms)

Symptoms: Long gap between click and handler start in DevTools timeline.

Diagnosis: Long tasks blocking main thread when user interacted.

Solutions:

  • Identify long tasks with PerformanceObserver
  • Break up tasks over 50ms using setTimeout or scheduler.yield()
  • Defer third-party script initialization
  • Code split large JavaScript bundles
  • Move CPU-intensive work to Web Workers
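The yielding strategy from the second bullet can be sketched as a small helper. A sketch, not the document's exact implementation: `scheduler.yield()` ships in recent Chrome, so feature-detect it and fall back to a `setTimeout` promise; `processItems` and the 45ms slice budget are illustrative.

```javascript
// Yield control back to the browser so pending interactions can run.
// scheduler.yield() continues with higher priority than a plain
// setTimeout; the fallback works everywhere.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise(resolve => setTimeout(resolve, 0));
}

// Illustrative: process a large list in ~45ms slices instead of one
// long task, yielding between slices.
async function processItems(items, handleItem) {
  let sliceStart = performance.now();
  for (const item of items) {
    handleItem(item);
    if (performance.now() - sliceStart > 45) {
      await yieldToMain(); // Browser handles queued input here
      sliceStart = performance.now();
    }
  }
}
```

Each `await yieldToMain()` ends the current task, letting the browser process any queued click or keypress before the next slice starts.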

Problem 2: High processing time (over 150ms)

Symptoms: Web Vitals attribution shows high processingDuration.

Diagnosis: Event handler taking too long to execute.

Solutions:

  • Optimize handler logic (reduce calculations, simplify operations)
  • Provide immediate feedback, defer heavy work with setTimeout
  • Use React.memo/useMemo to prevent unnecessary re-renders
  • Batch DOM operations (reads before writes)
  • Move processing to Web Workers if possible
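The immediate-feedback bullet reduces to a small pattern (a sketch; the callback names and usage below are hypothetical): run the cheap UI update synchronously inside the handler, then push the heavy work into a separate task so the browser can paint the feedback first.

```javascript
// Run cheap visual feedback now; defer heavy work to a later task so
// the feedback paints before the expensive code blocks the thread.
function deferHeavyWork(showFeedback, heavyWork) {
  showFeedback(); // Synchronous: e.g. show a spinner
  return new Promise(resolve => {
    setTimeout(() => resolve(heavyWork()), 0); // Runs after the next paint opportunity
  });
}

// Hypothetical usage:
// button.addEventListener('click', () => {
//   deferHeavyWork(
//     () => { spinner.hidden = false; },
//     () => filterLargeDataset()
//   ).then(() => { spinner.hidden = true; });
// });
```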

Problem 3: High presentation delay (over 100ms)

Symptoms: Long gap between handler completion and paint in DevTools.

Diagnosis: Rendering bottleneck.

Solutions:

  • Avoid forced synchronous layouts (don’t read layout properties after style changes)
  • Batch DOM reads before writes
  • Simplify CSS selectors (avoid deeply nested, universal selectors)
  • Reduce DOM size (virtualize long lists)
  • Animate only transform/opacity (not width/height/top/left)
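The read-before-write bullets can be enforced mechanically with a tiny batching queue (a fastdom-style sketch; the `measure`/`mutate` names are assumed here): every queued read runs before any queued write in each flush, so layout is invalidated at most once per frame.

```javascript
// Minimal read/write batching: reads flush before writes, avoiding
// interleaved read-write-read sequences that force synchronous layout.
const readQueue = [];
const writeQueue = [];
let flushScheduled = false;

function measure(readFn) { readQueue.push(readFn); scheduleFlush(); }
function mutate(writeFn) { writeQueue.push(writeFn); scheduleFlush(); }

function scheduleFlush() {
  if (flushScheduled) return;
  flushScheduled = true;
  // requestAnimationFrame in the browser; setTimeout keeps the sketch
  // runnable outside one.
  const defer = typeof requestAnimationFrame === 'function'
    ? requestAnimationFrame
    : fn => setTimeout(fn, 0);
  defer(() => {
    flushScheduled = false;
    const reads = readQueue.splice(0);
    const writes = writeQueue.splice(0);
    reads.forEach(fn => fn());  // All layout reads first
    writes.forEach(fn => fn()); // Then all style/DOM writes
  });
}
```

Usage: wrap `el.offsetHeight` reads in `measure()` and `el.style` changes in `mutate()`; the queue reorders them even when callers interleave.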

Problem 4: INP spikes on specific interactions

Symptoms: Most interactions fast, one feature consistently slow.

Diagnosis: Specific feature has unoptimized handler.

Solutions:

  • Use Web Vitals attribution to identify exact element
  • Profile that specific interaction in DevTools Performance
  • Optimize that feature’s event handler
  • Consider lazy loading heavy feature until user needs it

Problem 5: Third-party scripts causing poor INP

Symptoms: Long tasks attributed to external scripts (google.com, facebook.com, etc.).

Diagnosis: Analytics, ads, or widgets blocking interactions.

Solutions:

  • Delay third-party initialization 3-5 seconds
  • Use Partytown to run scripts in Web Workers
  • Implement facades for heavy embeds (YouTube, social)
  • Audit necessity of each third-party script (remove unused)

Performance budgets:

Set thresholds and enforce in CI/CD:

// performance-budget.json
{
  "inp": 200,
  "total-blocking-time": 200,
  "max-potential-fid": 130,
  "interactive": 3500
}

Lighthouse CI configuration:

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        'total-blocking-time': ['warn', {maxNumericValue: 200}],
        'max-potential-fid': ['error', {maxNumericValue: 130}],
        'interactive': ['warn', {maxNumericValue: 3500}],
      },
    },
  },
};

Fails build if TBT exceeds budget, preventing INP regressions from reaching production.

Automated alerts:

Set up monitoring alerts when metrics degrade:

// Backend aggregation (pseudo-code)
const dailyINP = calculateP75(last24HoursINPData);

if (dailyINP > 200) {
  sendAlert({
    message: `INP degraded to ${dailyINP}ms (threshold: 200ms)`,
    affectedPages: getWorstPages(last24HoursINPData),
    severity: dailyINP > 500 ? 'critical' : 'warning'
  });
}
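The `calculateP75` call above can be implemented directly. A sketch using the nearest-rank method: sort the sample ascending and take the value at the 75th-percentile rank.

```javascript
// Nearest-rank 75th percentile: the value at position ceil(0.75 * n)
// of the ascending-sorted sample.
function calculateP75(values) {
  if (values.length === 0) return 0;
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil(sorted.length * 0.75);
  return sorted[rank - 1];
}
```

This matches Google's methodology of summarizing field data at the 75th percentile rather than the mean, so one fast session can't mask many slow ones.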

Platform-specific troubleshooting:

WordPress:

Common issues:

  • Heavy plugins (page builders, sliders, contact forms)
  • jQuery and large theme scripts loading synchronously
  • Admin-ajax.php requests during user interactions
  • Unoptimized third-party integrations

Fixes:

  • Disable unnecessary plugins (test INP after disabling each)
  • Switch to lightweight theme (GeneratePress, Kadence, Astra)
  • Defer jQuery and theme scripts: <script src="script.js" defer></script>
  • Use REST API instead of admin-ajax.php
  • Implement caching (WP Rocket, W3 Total Cache)

React/Next.js:

Common issues:

  • Entire component tree re-rendering on state changes
  • Large bundle loaded immediately (no code splitting)
  • Synchronous state updates blocking interactions
  • Heavy components rendering on every interaction

Fixes:

  • Wrap expensive components in React.memo
  • Use useMemo for expensive calculations
  • Implement code splitting (React.lazy, Next.js dynamic imports)
  • Use Transitions for non-urgent updates (React 18+)
  • Monitor with React DevTools Profiler
  • Enable Strict Mode to catch issues during development

Shopify:

Limitations:

  • Limited control over platform JavaScript
  • Apps add scripts you cannot modify
  • Theme JavaScript often unoptimized

Fixes:

  • Minimize number of apps installed (each adds overhead)
  • Choose performant theme (Dawn is fastest official theme)
  • Use Shopify’s built-in lazy loading features
  • Defer app scripts when possible
  • Test INP before/after adding each app

Continuous monitoring workflow:

  1. Install Web Vitals library in production
  2. Send metrics to backend for aggregation
  3. Calculate 75th percentile (matches Google’s methodology)
  4. Track trends (daily/weekly comparison)
  5. Alert on regressions (INP increases beyond threshold)
  6. Investigate attribution data when alerts fire
  7. Validate fixes through field data (GSC, internal monitoring)

Comprehensive testing combining development tools (DevTools, Lighthouse), real user monitoring (Web Vitals library, GSC), and automated performance budgets ensures INP remains optimized while catching regressions before they impact users and search rankings.


INP Optimization Checklist

Measurement:

  • [ ] Install web-vitals library: npm install web-vitals
  • [ ] Implement onINP monitoring in production
  • [ ] Send INP data to analytics backend
  • [ ] Check Google Search Console Core Web Vitals report
  • [ ] Run Lighthouse for TBT baseline
  • [ ] Set up attribution API for debugging slow interactions

Long Task Identification:

  • [ ] Implement PerformanceObserver for long task tracking
  • [ ] Record Performance traces in Chrome DevTools
  • [ ] Identify all tasks over 50ms duration
  • [ ] Note which JavaScript files/functions cause long tasks
  • [ ] Categorize: event handlers, third-party, framework, data processing

Breaking Up Long Tasks:

  • [ ] Implement yieldToMain helper (setTimeout or scheduler.yield polyfill)
  • [ ] Break up tasks over 50ms with yielding strategy
  • [ ] Use time-based yielding (45ms chunks) for adaptive approach
  • [ ] Test that all tasks now under 50ms
  • [ ] Measure improvement in input delay component

Event Handler Optimization:

  • [ ] Target all handlers under 100ms execution
  • [ ] Implement immediate feedback pattern (5ms visual response)
  • [ ] Defer heavy work with setTimeout after showing loading state
  • [ ] Debounce continuous input handlers (search, typing): 300ms delay
  • [ ] Throttle high-frequency events (scroll, resize): 100ms limit
  • [ ] Remove synchronous API calls from handlers

DOM Manipulation:

  • [ ] Batch all DOM reads before writes
  • [ ] Avoid reading layout properties (offsetHeight, etc.) in loops
  • [ ] Use DocumentFragment for multiple insertions
  • [ ] Don’t interleave style changes and property reads
  • [ ] Minimize forced synchronous layouts
  • [ ] Profile presentation delay in DevTools

React Optimization (if applicable):

  • [ ] Wrap expensive components in React.memo
  • [ ] Use useMemo for expensive calculations
  • [ ] Use useCallback for event handlers
  • [ ] Implement code splitting (React.lazy, dynamic imports)
  • [ ] Use Transitions for non-urgent updates (React 18+)
  • [ ] Lazy load heavy components with Suspense
  • [ ] Profile with React DevTools to identify slow renders

Third-Party Scripts:

  • [ ] Audit all third-party scripts (GTM, analytics, ads, widgets)
  • [ ] Delay initialization 3-5 seconds after page load
  • [ ] Or load on first user interaction
  • [ ] Implement facades for YouTube/social embeds
  • [ ] Consider Partytown for running scripts in Web Workers
  • [ ] Remove unnecessary third-party scripts
  • [ ] Self-host critical scripts when possible

Web Workers:

  • [ ] Identify CPU-intensive processing in handlers
  • [ ] Move data processing to Web Workers
  • [ ] Move analytics processing to background threads
  • [ ] Ensure main thread stays responsive during heavy work

Code Splitting:

  • [ ] Split large JavaScript bundles
  • [ ] Implement dynamic imports for heavy features
  • [ ] Reduce initial bundle size
  • [ ] Load features on demand
  • [ ] Measure parsing time reduction

Testing:

  • [ ] Test interactions in Chrome DevTools Performance panel
  • [ ] Record traces with real interactions (clicks, typing)
  • [ ] Verify no tasks over 50ms in traces
  • [ ] Test on slow device (CPU throttling 4x or real budget phone)
  • [ ] Confirm INP under 200ms for all tested interactions
  • [ ] Test on real mobile devices

Field Data Monitoring:

  • [ ] Check Google Search Console Core Web Vitals weekly
  • [ ] Monitor INP trend (improving/stable/worsening)
  • [ ] Identify URLs with poor INP
  • [ ] Set up automated alerts for INP regressions
  • [ ] Track 75th percentile improvements over time

Performance Budgets:

  • [ ] Set INP budget: under 200ms
  • [ ] Set TBT budget: under 200ms (Lighthouse proxy)
  • [ ] Configure Lighthouse CI in build pipeline
  • [ ] Fail builds exceeding budgets
  • [ ] Alert team on performance regressions

Framework-Specific (check applicable):

  • [ ] Next.js: Use dynamic imports, enable Strict Mode
  • [ ] WordPress: Disable unnecessary plugins, use lightweight theme
  • [ ] Shopify: Minimize apps, use Dawn theme
  • [ ] React: Implement Profiler, use memoization
  • [ ] Vue: Use async components, computed properties
  • [ ] Angular: Lazy load modules, use OnPush change detection

Common Mistakes to Avoid:

  • [ ] ❌ Don’t ignore long tasks over 50ms
  • [ ] ❌ Don’t only optimize first interaction (INP measures all)
  • [ ] ❌ Don’t defer work without measurement (identify actual bottlenecks first)
  • [ ] ❌ Don’t test only on fast desktop (test on mobile devices)
  • [ ] ❌ Don’t debounce clicks/taps (only continuous inputs like typing)
  • [ ] ❌ Don’t assume frameworks optimize automatically (must use memo/callback)

Use this checklist during implementation, code reviews, and performance audits to ensure INP remains within “Good” threshold (under 200ms).


Related Core Web Vitals Resources

Complete your Core Web Vitals optimization:

  • Largest Contentful Paint (LCP) Optimization – Master loading performance where images and text blocks often become LCP elements. Learn server optimization (TTFB), resource prioritization (preload, fetchpriority), and image-specific improvements beyond file optimization.
  • Cumulative Layout Shift (CLS) Fix – Prevent visual instability caused by images loading without dimensions, fonts swapping, or dynamic content insertion. Master aspect-ratio implementation, font-display strategies, and reservation techniques for ads/embeds.
  • Core Web Vitals Complete Guide – Strategic overview of how LCP, INP, and CLS work together as ranking signals. Understand measurement methodologies, field vs lab data, and prioritization frameworks when resources are limited.
  • Image Optimization Techniques – Deep dive into formats (WebP, AVIF), responsive images (srcset/sizes), compression quality, and lazy loading strategies that directly impact LCP and indirectly improve INP by reducing main thread work during page load.

Key Takeaways

INP replaced FID on March 12, 2024 as the Core Web Vitals interactivity metric, measuring complete interaction latency (input delay + processing + presentation) for ALL user interactions throughout page lifetime rather than just the first input. This fundamental change requires comprehensive JavaScript performance optimization across every event handler and background task, not superficial first-interaction improvements.

Long tasks over 50ms represent the root cause of poor INP by blocking the main thread and preventing browsers from responding to user inputs. Breaking up long tasks through systematic yielding (setTimeout or scheduler.yield every 40-45ms) transforms uninterruptible 300ms+ blocks into interruptible chunks, allowing browsers to process pending interactions between work segments and dramatically improving input delay.

Event handler optimization keeps processing time under 100ms target through immediate feedback patterns (show loading state in 5ms, defer heavy work), debouncing continuous inputs (search, typing), efficient DOM manipulation (batch reads before writes), and framework-specific memoization (React.memo, useMemo, useCallback) to prevent unnecessary re-renders that consume 200-400ms during interactions.

Third-party scripts frequently cause poor INP by running long tasks during user interactions. Delay loading 3-5 seconds or until first interaction, implement facades for heavy embeds (YouTube, social widgets), and consider Web Workers (Partytown) to move tracking scripts off main thread, preventing analytics from blocking responsiveness.

React optimization through React.memo (prevent unnecessary component re-renders), useMemo (cache expensive calculations), useCallback (stable event handler references), and Transitions (mark non-urgent updates as interruptible) reduces processing time from 200-400ms to under 100ms by eliminating wasted rendering work during interactions.

Real user monitoring through Web Vitals library provides accurate INP measurement from production users, while attribution API reveals exactly which interactions are slow and which phase (input/processing/presentation) needs optimization, enabling data-driven prioritization and validation of improvements through field data rather than lab proxies like Total Blocking Time.

Strategic INP optimization combining long task elimination (yielding), event handler efficiency (immediate feedback, debouncing), framework optimization (memoization, code splitting), and third-party management (delayed loading, Web Workers) achieves consistent “Good” ratings (under 200ms) across all user interactions, directly improving Core Web Vitals scores, search rankings, user engagement, and conversion rates.