Nov 17, 2025

DOM & Event Delegation Explained Simply: A Must-Know JavaScript Interview Topic

When I first started learning JavaScript, one of the most confusing topics was how events actually work inside a browser. Later, during interviews, I realized that the DOM and especially Event Delegation are commonly asked topics.


In this article, I’ll go through the concepts the way I wish someone had explained them to me.


[Image: DOM and Event Delegation in JavaScript]


What is the DOM (Document Object Model)?


The DOM is a structured map that the browser uses to interpret and display your webpage.

When you write HTML, the browser turns it into a tree-like structure where each element becomes a node, like <div>, <button>, <p>, etc.


You can think of it like this:


[Image: DOM tree structure]


With JavaScript, you can manipulate these nodes:

  • Add or remove elements
  • Change text or style
  • Listen to events (click, input, submit, etc.)

Example:


document.getElementById("title").innerText = "Hello Code Vichar!";


What Problem Does Event Delegation Solve?


Imagine you have a list where items are added dynamically:


<ul id="todoList"></ul>
<button id="addItem">Add Item</button>

Every time a new item is added, you want to detect when the user clicks on it.


A beginner might write:


item.addEventListener("click", () => {
  console.log("Item clicked");
});


But this approach has some issues:

  • New items added later won’t have the click listener
  • Adding many listeners can affect performance
  • Managing them all can get messy

This is where Event Delegation makes our lives simple.


What is Event Delegation?


Event Delegation means adding a single event listener to a parent element so it can catch events that originate on its child elements as they bubble up through the DOM.

Instead of adding a listener to every list item, you add one to the list itself.

Because JavaScript events bubble up, you can detect them at the parent level.


Example (Event Delegation):


const list = document.getElementById("todoList");

list.addEventListener("click", (e) => {
  if (e.target && e.target.tagName === 'LI') {
    console.log("Item clicked:", e.target.textContent);
  }
});


Now it doesn’t matter if:

  • Items are static, or
  • Items are added later with JavaScript

Everything works automatically. That’s the power of Event Delegation.


How Does Event Delegation Work Under the Hood?


When the browser dispatches an event, it moves through three phases:

  • Capturing: the event travels from the document root down toward the target element
  • Target: the event fires on the element that was actually interacted with
  • Bubbling: the event travels back up through the target’s ancestors


Event Delegation uses the bubbling phase.


So when you click:


 <li>Item 1</li>


The flow is:


[Image: event delegation flow: the click bubbles from the <li> up through the <ul> to the document]


This means you can detect a click on the list item even if the click started on a nested element.



Example: Dynamic List with Event Delegation (Full Code)



<ul id="todoList"></ul>
<button id="addItem">Add Item</button>

<script>
  const addBtn = document.getElementById("addItem");
  const list = document.getElementById("todoList");

  // Add new item dynamically
  addBtn.addEventListener("click", () => {
    const li = document.createElement("li");
    li.textContent = "New Item " + (list.children.length + 1);
    list.appendChild(li);
  });

  // Event Delegation
  list.addEventListener("click", (e) => {
    if (e.target.tagName === "LI") {
      alert("You clicked: " + e.target.textContent);
    }
  });
</script>



Event Delegation vs Direct Event Binding


Feature                    | Direct Binding | Event Delegation
Works for dynamic elements | No             | Yes
Performance                | Many listeners | One listener
Clean code                 | Hard           | Easy
Memory usage               | High           | Low



Follow-Up Interview Topics


Interviewers often connect Event Delegation with:


1. Event Bubbling & Capturing

Be ready to explain the three phases of events. To learn more, see our post on Event Bubbling & Event Capturing.


2. Stop Propagation

Sometimes you want to stop the event from bubbling:


e.stopPropagation();


3. Event.target vs Event.currentTarget


Property            | Meaning
event.target        | The actual element that was clicked
event.currentTarget | The element the listener is attached to


Example:


list.addEventListener("click", (e) => {
  console.log("target:", e.target); 
  console.log("currentTarget:", e.currentTarget); 
});


4. Delegation Limitations

Some events, such as blur and focus, do not bubble, so event delegation cannot be used with them effectively.
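There is a workaround, though: focusin and focusout are the bubbling counterparts of focus and blur, so delegation still works for focus handling. A minimal sketch (the #signupForm id, the active class, and the isFormField helper are assumptions, not from the original post):

```javascript
// Pure helper used by the delegated handlers below (hypothetical rule).
const isFormField = (tagName) => tagName === "INPUT" || tagName === "TEXTAREA";

// Browser-only wiring: focusin/focusout bubble, so one listener on the
// form reacts to focus changes on every field inside it.
if (typeof document !== "undefined") {
  const form = document.getElementById("signupForm"); // hypothetical id
  form.addEventListener("focusin", (e) => {
    if (isFormField(e.target.tagName)) e.target.classList.add("active");
  });
  form.addEventListener("focusout", (e) => {
    if (isFormField(e.target.tagName)) e.target.classList.remove("active");
  });
}
```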


Best Practices for Event Delegation


  • Always check event.target before applying logic
  • Use closest() to match nested elements
  • Don’t overuse delegation; use it only for repeating or dynamic elements
  • Avoid heavy logic inside the delegated listener
  • Use class names for target matching


Example using closest():


list.addEventListener("click", (e) => {
  const li = e.target.closest("li");
  if (li) {
    console.log("Clicked item:", li.textContent);
  }
});
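For the class-name matching tip, Element.matches() pairs naturally with delegation. A sketch (the .delete-btn class and the list markup are assumptions):

```javascript
// Equivalent to e.target.matches(".delete-btn"), kept as a pure function
// so the matching rule is testable without a DOM.
const isDeleteButton = (className) =>
  typeof className === "string" && className.split(/\s+/).includes("delete-btn");

// Browser-only wiring: one listener handles every current and future button.
if (typeof document !== "undefined") {
  const list = document.getElementById("todoList");
  list.addEventListener("click", (e) => {
    // matches() checks the clicked element against a CSS selector
    if (e.target.matches(".delete-btn")) {
      e.target.closest("li").remove(); // remove the whole item
    }
  });
}
```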


Conclusion


Event Delegation is one of those smart techniques that makes your frontend cleaner, faster, and more scalable.

Whenever I work with lists, menus, tables, or any dynamic content, I always prefer Event Delegation over attaching multiple listeners.


If you’re preparing for interviews, trust me, this is a topic that comes up often.

Understanding it deeply will give you confidence and help you explain it clearly during any JavaScript or frontend interview.

Nov 16, 2025

C Programming for Embedded Systems and IoT: Why It Matters & How to Get Started

In today’s connected world, tiny computers are embedded in almost everything, from thermostats and medical devices to smart traffic lights and industrial machinery. These computers, known as embedded systems, are frequently powered by the C programming language. For anyone building IoT solutions or custom electronics, understanding why C is the right tool, and how to optimise its use, matters a great deal.


What Makes Embedded and IoT Different?


Embedded systems are computers designed to do a small, specific job. IoT (Internet of Things) means many of these devices are hooked up to the internet or a network, enabling remote monitoring and control.

Unlike general-purpose computers: 

  • Embedded devices have tight limits on memory and processing power. 
  • Power usage often must be minimised (think of battery-operated sensors).
  • They interact directly with hardware components such as motors, LEDs, or sensors.
  • Software must be robust: if a microcontroller in a car fails, it could be catastrophic.


Internet of Things: Device and Cloud Connectivity

[Image: IoT device and cloud connectivity]

Embedded System Architecture


[Image: embedded system architecture]


Why C? A Developer’s Rationale


Ask a room full of firmware developers what language they use for embedded or IoT, and most will point to C. Here’s why:

  • Close to the metal: C lets you access hardware registers, set pin voltages, and manage memory directly.
  • Predictable performance: Code behaves as you expect, with little hidden overhead.
  • Portability: Once you master C for one chip, moving to another family is much easier.
  • Tool support: Nearly every microcontroller maker offers strong C compilers, libraries, and debugging tools.
  • Community wisdom: Decades of documentation, examples, and forum advice exist for C-based embedded projects. 


Real-World Example: Why Not Use Python or JavaScript?


Languages like Python are ideal for prototyping or running on beefy computers. But most microcontrollers lack the resources for a full Python runtime. With C, you can write the critical “bare metal” code needed for direct control and tiny footprints.


Getting Started: The Basics


Let’s see what a simple embedded C program looks like for blinking an LED, a common first test on any board. 



#define LED_PIN 0x01            // Bit mask for pin 0
volatile unsigned char *port = (unsigned char *)0x5000; // Address of port register
void delay() {
  for (volatile int i = 0; i < 50000; ++i); // Crude delay loop
}
int main() {
  while (1) {
    *port |= LED_PIN;    // Turn LED on
    delay();
    *port &= ~LED_PIN;   // Turn LED off
    delay();
  }
}


Note how embedded C frequently uses tight loops like this with direct register access. You "poke" bits at memory addresses wired to hardware; there is no operating system to do it for you.


Hardware Management: A Detailed Examination


C is particularly well-suited for working with hardware via interrupts, timers, and memory-mapped registers.


Here’s what that actually means:

  • Memory-mapped I/O: Hardware like GPIO ports appear as memory locations. You set or clear bits to turn things on/off.
  • Interrupts: You can attach small functions (Interrupt Service Routines) that run instantly when a button is pressed or a sensor triggers, overriding the main code flow. 


Interrupt Example

Suppose you want to react immediately to a sensor trigger: 


void __interrupt() my_ISR(void) {
  // Code here runs whenever the hardware fires an interrupt
  // e.g., read sensor, update variable, toggle actuator
}


Why does this matter?


No other general-purpose language offers such direct control, which is crucial for real-time responses.


Key Advantages in IoT Projects


As IoT scales up, so do developer concerns:

  • Power Efficiency: C’s minimal overhead means you can write code that puts chips into deep sleep, only waking up for needed work. This makes it possible for sensors to run for years on a coin cell.
  • Compact Footprint: Well-crafted C code often fits in less than 32 KB, even with networking support.
  • Custom Protocols and Drivers: Need custom serial protocols or to talk to exotic sensors? Writing drivers from scratch in C is straightforward (if sometimes challenging).
  • Bare-metal reliability: Reliability cannot be compromised when devices control real-world processes. C minimises the abstraction layers that might conceal defects or unexpected behaviour.

Modern IoT Workflows and Ecosystem


A typical developer workflow:

  • Select hardware (e.g., an STM32, AVR, or ESP32 board).
  • Configure the tool chain (compiler, debugger).
  • Write C code using the vendor’s libraries and hardware abstraction layers.
  • Use open-source stacks (like lwIP for networking or FreeRTOS for multitasking), which are almost always written in C.
  • Test and debug directly on hardware, often using breakpoints, logic analysers, and serial output.
  • Iterate and optimise for speed, power, and safety. 

[Image: modern IoT workflows and ecosystem]

Practical Use Cases for C in IoT


  • Wearables: Custom firmware for step counting, heart rate, and Bluetooth communication.
  • Smart meters: Code that samples energy usage at precise intervals and transmits to a base station.
  • Environmental monitors: Sensor code for air quality or weather stations—must wake, sample, transmit, and get enough rest.
  • Industrial controls: Devices that must respond to sensor changes in milliseconds while adhering to strict safety regulations.
  • Automotive systems: Everything from airbag sensors to dashboard gauges. 


Power Management: C Unlocks Ultimate Control


On a typical IoT node, power is king. Consider this C workflow for reducing consumption: 


// Pseudocode for sleeping until an event
setup_interrupt_on_pin(SENSOR_PIN);
enter_sleep_mode(); // CPU goes idle
// Wakes up when the sensor triggers an interrupt
read_sensor_data();

Putting the chip to sleep and waking it only on interrupts is a core IoT pattern, and C gives you precise control over exactly how it happens.



Best Practices for Embedded C Developers


  • Never assume memory or clock speed is unlimited. Always check your usage.
  • Avoid dynamic memory (malloc/free), which can fragment RAM and is rarely needed.
  • Use descriptive naming: “magic numbers” confuse future maintainers.
  • Test for edge cases, not just typical flows: What happens if the sensor reads 0? Or never responds?
  • Review hardware errata: Chip manufacturers publish quirks—real-world dependencies might force workarounds in your code. 

Conclusion


C is not just an "old school" skill; it is essential for building dependable, scalable electronics. Whether you're developing wearable prototypes or deploying massive sensor networks, writing efficient, hardware-aware C lets you maximise the power and performance of your devices, unlocks creativity at the lowest levels, and produces long-lasting solutions.


For any developer wanting to move beyond high-level scripting and make things that really work, there’s no substitute for mastering C in the world of embedded and IoT. 


Oct 28, 2025

React Infinite Scroll Made Easy: Best Practices and Optimizations

Infinite scrolling in React is a modern UI technique that lets users browse large sets of data by simply scrolling, as fresh content loads automatically when they reach the end of the list. This approach is everywhere: from your social media feed to product catalogues and streaming platforms. 


By removing the need for page clicks, infinite scroll helps build seamless, immersive, and highly engaging user experiences, especially on mobile devices, where natural scrolling is king. But behind this magic, React developers need to carefully handle data fetching, rendering performance, loading indicators, and accessibility to get it right.


[Image: React infinite scroll]


What is Infinite Scrolling and How Does It Work?


Infinite scrolling is a web technique where content loads continuously as the user scrolls down. Instead of clicking "Next" or moving through pages (pagination), new items appear automatically at the page’s bottom. This seamless experience hooks users into exploring more content, keeps engagement high, and is ideal for social feeds, galleries, and discovery-driven apps.


Infinite Scroll vs Pagination


Feature      | Infinite Scroll                   | Pagination
User flow    | Scroll to reveal new content      | Navigate between pages
Best for     | Infinite feeds, discovery         | Search, structured data
SEO          | Harder to crawl                   | Better page indexing
User control | No way to jump to end or bookmark | Easy navigation and bookmarks
Load time    | Risk of slower performance        | Predictable page loading


Infinite scrolling boosts engagement, but it can suffer from SEO and navigation issues. Pagination is best for content-heavy or search-centric sites.


Implementing Infinite Scroll in React 


Using Intersection Observer API


The Intersection Observer API tracks when an element (e.g., a loader) enters the viewport, triggering new data loads:


import { useRef, useEffect } from "react";

function InfiniteScrollList({ fetchMore }) {
  const loaderRef = useRef();

  useEffect(() => {
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) fetchMore();
    });
    if (loaderRef.current) observer.observe(loaderRef.current);
    return () => observer.disconnect();
  }, [fetchMore]);

  return (
    <div>
      {/* ...list items... */}
      <div ref={loaderRef}>Loading...</div>
    </div>
  );
}

This is efficient, less error-prone than scroll listeners, and great for both window and div scroll containers.


Using 'react-infinite-scroll-component' Library


react-infinite-scroll-component lets you drop in infinite scrolling with minimal effort:


import InfiniteScroll from 'react-infinite-scroll-component';

<InfiniteScroll
  dataLength={items.length}
  next={fetchMore}
  hasMore={hasMore}
  loader={<h4>Loading...</h4>}
>
  {items.map(item => <Item key={item.id} {...item} />)}
</InfiniteScroll>

It handles pagination, loading states, and bottom triggers for you.


Custom Infinite Scroll with useEffect, useRef, and useState


This approach allows you to create your own infinite scroll logic without relying on libraries. It focuses on the window scroll position and loads new data when the user scrolls near the bottom.

Step-by-step Example:


import React, { useEffect, useState, useRef } from "react";
import axios from "axios";

function InfiniteScrollList() {
  const [items, setItems] = useState([]);
  const [page, setPage] = useState(1);
  const [loading, setLoading] = useState(false);
  const [hasMore, setHasMore] = useState(true);

  const loadMoreRef = useRef();

  // Fetch data function
  const fetchItems = async () => {
    setLoading(true);
    try {
      const res = await axios.get(`/api/items?page=${page}`);
      setItems((prev) => [...prev, ...res.data.items]);
      setHasMore(res.data.hasMore); // Assume API tells if there's more
    } catch (err) {
      // Handle errors
    }
    setLoading(false);
  };

  // Observe the loader at bottom of list
  useEffect(() => {
    if (loading || !hasMore) return;
    const observer = new window.IntersectionObserver(
      (entries) => {
        if (entries[0].isIntersecting) {
          setPage((prev) => prev + 1);
        }
      },
      { threshold: 1 }
    );
    if (loadMoreRef.current) observer.observe(loadMoreRef.current);
    return () => observer.disconnect();
  }, [loading, hasMore]);

  // Fetch next page whenever page state changes
  useEffect(() => {
    if (hasMore) fetchItems();
  }, [page]);

  return (
    <div>
      {items.map((item) => (
        <div key={item.id}>{item.title}</div>
      ))}
      {loading && <div>Loading...</div>}
      <div ref={loadMoreRef} style={{ height: 1 }}></div>
    </div>
  );
}

export default InfiniteScrollList;


  • useState tracks items, loading, page, and hasMore for pagination.
  • useRef targets a “loader” div at the end of the list.
  • useEffect sets up an Intersection Observer to detect when the loader is visible, then increments the page.
  • The second useEffect fetches new data when the page updates.
  • Always clean up observers to avoid memory leaks.

This approach works great for any scrollable container and keeps your app fast and memory-safe.
You can further debounce or throttle fetches for performance, and you can add error/retry UI as needed!


Infinite Scrolling with Virtualized Lists ('react-window', 'react-virtualized')


When your app needs to render thousands (or even millions) of items (think chat threads, product grids, or feeds), a standard infinite scroll can choke memory and performance. Virtualization solves this by rendering only the items actually visible on screen, replacing off-screen content with lightweight placeholders.


Example: Basic Virtualized List with react-window


import React from "react";
import { FixedSizeList as List } from "react-window";

const Row = ({ index, style }) => (
  <div style={style}>
    Row {index}
  </div>
);

const MyList = ({ data }) => (
  <List
    height={400}
    itemCount={data.length}
    itemSize={35}
    width={"100%"}
  >
    {Row}
  </List>
);


  • Only the visible rows are mounted in the DOM.
  • This keeps your app fast even with huge datasets.

Variable Height with 'react-window'


If your items have different heights, use VariableSizeList:


import { VariableSizeList as List } from "react-window";

const getItemSize = index => data[index].height;

<List
  height={500}
  itemCount={data.length}
  itemSize={getItemSize}
  width={400}
>
  {Row}
</List>

  • Dynamically calculates each row’s height.

Infinite Scroll With 'react-virtualized'



import { List } from "react-virtualized";

const rowRenderer = ({ index, key, style }) => (
  <div key={key} style={style}>
    Row {index}
  </div>
);

<List
  width={300}
  height={300}
  rowCount={1000}
  rowHeight={20}
  rowRenderer={rowRenderer}
/>

  • react-virtualized also supports tables and grids, with more built-in features for complex UIs.

Why Use Virtualized Lists?


  • Renders hundreds or thousands of rows with minimal DOM and memory cost.
  • Keeps infinite scroll fast, even as data grows.
  • Pairs perfectly with lazy loading: as you fetch new data, append it to the existing data and let virtualisation handle the heavy lifting.

Fetching Paginated API Data with Infinite Scroll


Most APIs return paginated results (page/offset). At each scroll, fetch the next "page" and append:


async function fetchMore() {
  const res = await fetch(`/api/feed?page=${nextPage}`);
  const data = await res.json();
  setItems(items => [...items, ...data.items]);
}

Track nextPage or a cursor for incremental loads.



Handling API Errors and Retries


If fetching fails, show an error UI and allow the user to retry:


if (error) return <button onClick={retryFunc}>Retry</button>;

Debounce or throttle fetch calls to avoid overload.


Performance Optimization (Debouncing & Throttling)


Avoid making too many requests on fast scrolls:
  • Debounce: Wait for scrolling to “pause” before firing a fetch
  • Throttle: Limit fetch calls to once every N ms

import _ from "lodash"; // lodash debounce
const debouncedFetch = _.debounce(fetchMore, 500);

Improves speed and reduces server load.
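If you would rather not pull in lodash, a minimal throttle can be hand-rolled. A sketch (the 500 ms window is an arbitrary choice, and trailing calls are simply dropped rather than queued):

```javascript
// Throttle: run fn at most once every `ms` milliseconds; calls that
// arrive inside the window are ignored.
function throttle(fn, ms) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= ms) {
      last = now;
      fn(...args);
    }
  };
}

// Usage sketch: const throttledFetch = throttle(fetchMore, 500);
```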


Loading States & Skeleton UI


Show a loader or skeleton while fetching:


{isLoading && <SkeletonLoader />}

Use libraries like react-loading-skeleton for a polished appearance and great UX.



Infinite Scroll in Div vs Window


You can trigger infinite scrolling inside any scrollable container, not just the window, by observing each container’s scroll position or attaching IntersectionObservers.
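With the Intersection Observer approach, switching from the window to a container is just the `root` option. A sketch (the element ids, the 200px rootMargin, and the fetchMore callback are assumptions):

```javascript
// Build observer options: root = null observes against the window
// viewport; a scrollable element scopes it to that container instead.
const makeObserverOptions = (rootEl) => ({
  root: rootEl,
  rootMargin: "0px 0px 200px 0px", // start fetching 200px before the end
  threshold: 0,
});

// Browser-only wiring (hypothetical ids and fetchMore callback).
if (typeof document !== "undefined") {
  const container = document.getElementById("scrollBox");
  const loader = document.getElementById("loader");
  new IntersectionObserver(
    ([entry]) => { if (entry.isIntersecting) fetchMore(); },
    makeObserverOptions(container)
  ).observe(loader);
}
```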



Accessibility Considerations


  • Announce new content for screen readers (aria-live)
  • Always provide alternate navigation for long feeds, such as jump points or a “Back to Top” button
  • Ensure keyboard navigation works in virtual lists
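For the aria-live point, a small status region that announces appended items might look like this (markup sketch; the id and class names are assumptions):

```html
<!-- Screen readers announce text inserted here without moving focus -->
<div aria-live="polite" class="sr-only" id="feed-status"></div>

<!-- After a fetch completes, update it from JS, e.g.:
     document.getElementById("feed-status").textContent =
       "10 more items loaded"; -->
```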


React Infinite Scroll Best Practices


  • Never re-render the whole list; append new items to existing state
  • Clean up observers in useEffect return
  • Use virtualization for large datasets
  • Debounce/throttle API calls for performance
  • Handle errors gracefully
  • Avoid memory leaks by disconnecting observers


Summary


Infinite scrolling in React offers a smooth alternative to traditional pagination, keeping users engaged with continuous, dynamic content loads. By leveraging APIs like Intersection Observer, specialised libraries, and smart state management, you can tackle huge datasets efficiently while controlling memory use and minimising re-renders.

However, infinite scroll comes with UX and accessibility challenges, such as SEO, navigation, and ensuring screen reader support, that you must address to create a great product.

Mastering these techniques lets you build intuitive, high-performance interfaces that users will love, whether for social feeds or large e-commerce catalogues.


Oct 25, 2025

TypeScript any vs unknown: Key Differences and Best Practices

Working with dynamic or external data in TypeScript can be tricky, especially when you’re not sure what shape that data will take. This is where the any and unknown types come in, each handling uncertainty in a different way. While any acts as a “turn off type safety” switch, letting you do anything without checks, unknown forces you to validate and narrow types before using them.


Understanding the key differences between these two special types will help you write much safer, more predictable, and maintainable TypeScript code, especially when handling API responses or user input.


[Image: TypeScript any vs unknown]


What is 'any' in TypeScript?


The any type in TypeScript is an “escape hatch” that turns off type checking for a variable. Once you use any, you can assign any value and perform any action on that value, and the compiler won’t catch errors, even deeply unsafe actions. It’s like telling TypeScript, “Trust me, I know what I’m doing!”.

Example:


let value: any = "Hello";
value = 42;
value = { greet: "Hi" };
console.log(value.nonExistentMethod()); // No error at compile time, will crash at runtime!


What is 'unknown' in TypeScript?


The unknown type is a safer alternative to any. Like any, it allows assignment of any value but you can’t perform arbitrary operations on it until you explicitly check or assert its type. TypeScript forces you to prove what the value is before using it.


let result: unknown = "Hello";
if (typeof result === "string") {
  console.log(result.toUpperCase()); // ✅ Safe, type checked
}

Trying to use unknown directly triggers a compile-time error!


Type Narrowing for 'unknown' (typeof, instanceof, custom guards)


Before using an unknown value, narrow its type:

if (typeof input === "string") { /* ... */ }
if (input instanceof Array) { /* ... */ }
function isUser(val: unknown): val is User { return typeof val === "object" && val !== null && "email" in val; }


How TypeScript Treats Type Checking for 'any' vs 'unknown'


  • any: No type checks; compiler lets anything through.
  • unknown: The compiler requires type narrowing or assertion before use, enforcing safety.

Why 'unknown' is Safer than 'any'


'unknown' forces you to check or narrow the type before every operation, reducing runtime bugs. By using 'unknown', you keep TypeScript’s guarantees, making your code easier to debug, refactor, and scale safely.


When to Use 'unknown' Instead of 'any'


Use unknown for external data with an unpredictable shape: API responses, user input, deserialized JSON. That way you’re required to validate before using it.

Example:


function handleApiResponse(data: unknown) {
  if (typeof data === "object" && data !== null && "success" in data) {
    // Safe to proceed!
  }
}


When 'any' Is Necessary (Limiting Scope)


Sometimes any is necessary: for rapid prototyping, for working with poorly typed external libraries, or in tests. Always restrict its use to small scopes, and migrate to unknown or concrete types quickly.


Common Mistakes Developers Make with 'any'


  • Overusing any leads to silent bugs and runtime crashes.
  • Assigning any too early disables TypeScript for the rest of the code.
  • Using any for APIs, user data, or third-party packages instead of validating types.
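To see how an early any disables checking downstream, compare it with an unknown-first parse. A sketch (the expected { name } shape is an assumption):

```typescript
// With `any`, a typo like payload.nmae compiles fine and crashes at
// runtime. Typing the parse result as `unknown` forces a check first.
function readName(json: string): string | null {
  const parsed: unknown = JSON.parse(json);
  if (typeof parsed === "object" && parsed !== null && "name" in parsed) {
    return String((parsed as { name: unknown }).name);
  }
  return null; // shape didn't match; the caller handles the miss
}
```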

Real-World Use Cases for 'unknown'


Here are some real-world use cases for unknown:

1- Handling API Response Data with 'unknown'


Always type your API responses with unknown first, and check their shape:


function getData(): Promise<unknown> { /* ... */ }
getData().then(response => {
  if (typeof response === "object" && response !== null) { /*...*/ }
});


2- Working with Dynamic User Input Safely Using 'unknown'


If you get dynamic input from forms, treat it as unknown:


function processInput(input: unknown) {
  if (typeof input === "string") { /* ... */ }
}


3- Validating Unknown JSON Data



function validateJSON(val: unknown): val is User {
  return typeof val === "object" && val !== null && "email" in val;
}


Migrating from 'any' to 'unknown': Step-by-Step


  • Change your variable’s type from any to unknown.
  • Add type checks wherever you use that variable.
  • Use custom type guards as needed.
  • Refactor dependent code to ensure strict safety.
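Applied to a single function, the steps above might look like this (the Cart shape and the isCart guard are hypothetical examples, not from any particular codebase):

```typescript
// Step 1: the parameter's type changes from `any` to `unknown`.
// Steps 2-3: a custom type guard proves the shape before use.
interface Cart {
  items: unknown[];
}

function isCart(val: unknown): val is Cart {
  return (
    typeof val === "object" &&
    val !== null &&
    Array.isArray((val as { items?: unknown }).items)
  );
}

// Step 4: dependent code now compiles only after the guard has run.
function itemCount(cart: unknown): number {
  if (!isCart(cart)) throw new Error("not a cart");
  return cart.items.length;
}
```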

TypeScript Best Practices: Avoid 'any', Embrace 'unknown'


  • Default to unknown for untyped data.
  • Only use any for prototyping, legacy code, or unavoidable hacks.
  • Always narrow unknown to concrete types before use.
  • Leverage custom guards for advanced validation.

Summary


The any type gives you total freedom but sacrifices all safety, making TypeScript act like plain JavaScript, with all its potential runtime surprises. In contrast, unknown is a safer alternative, allowing you the flexibility to receive any value while ensuring you still leverage TypeScript’s powerful compile-time checks.

By embracing unknown, you push yourself and your team to narrow types and validate assumptions, leading to fewer bugs and more robust applications. Whenever you’re dealing with unpredictable or external data, reach for unknown over any; your future code (and coworkers) will thank you.

Oct 24, 2025

Mastering TypeScript Generics: Reusable, Type-Safe, and Scalable Code

If you’ve ever wanted to write reusable TypeScript code that’s both flexible and type-safe, generics are your best friend. Generics allow you to define functions, interfaces, and classes that work with any data type, while still enforcing strict type checking. Imagine writing a utility or hook just once and having TypeScript guarantee it works for strings, numbers, objects, or even complex structures, with no loss of type safety. 


Whether you’re building simple libraries or scalable app architectures, understanding generics will help you avoid repetitive code, catch more bugs at compile time, and keep your codebase clean and robust.


[Image: TypeScript generics]


What are Generics in TypeScript?


Generics are a powerful TypeScript feature that lets you create reusable, type-safe code without specifying concrete types up front. They act as placeholders (like <T>) for types that are filled in later, so your functions, classes, and interfaces can work with any data type and still maintain full type safety.


Example (generic function):


function identity<T>(value: T): T {
  return value;
}

const num = identity<number>(42);    // num is type number
const str = identity<string>("Hello"); // str is type string


Why Do We Use Generics in TypeScript?


Generics make code more flexible and reusable, while preserving static type checking. Instead of duplicating similar logic for multiple types, you can write one generic function/class/interface that works with them all.


  • Prevent runtime type errors by catching mistakes at compile-time
  • Write utility code once; reuse for different data structures
  • Reduce code duplication


How Generics Improve Type Safety and Reusability


With generics, TypeScript tracks what type is used, “locking in” the type at the usage site but leaving implementation flexible. This lets you catch type mismatches early and build solid, reusable utilities.


Example (interface):


interface Pair<T, K> {
  first: T;
  second: K;
}
const pair: Pair<string, number> = { first: "one", second: 2 };


When to Use Generics in Functions, Interfaces, and Classes


  • Functions: For utilities that operate on any type
  • Interfaces/Types: For data structures that hold variable types
  • Classes: For things like collections, repositories, or state containers


Example (class):


class Box<T> {
  constructor(private value: T) {}
  getValue(): T { return this.value; }
}
const numberBox = new Box<number>(42);


Difference Between Generics and any Type


Generics (<T>)                        | any
Placeholder for a specific type       | Accepts any type, loses safety
Checked by TypeScript at compile time | Not checked (no autocomplete)
Preserves type information            | Results in type loss, potential errors

  • Use generics for type safety and flexibility; avoid any unless you need to bypass checks deliberately.


Generic React Components and Props (<T> with React.FC)


TypeScript generics help make truly flexible React components:


type ListProps<T> = { items: T[]; renderItem: (item: T) => JSX.Element };

function List<T>({ items, renderItem }: ListProps<T>) {
  return <ul>{items.map(renderItem)}</ul>;
}


Creating Reusable Utility Hooks with Generics (e.g., useFetch<T>)


Generics are perfect for scalable React hooks:


import { useState, useEffect } from "react";

function useFetch<T>(url: string): T | null {
  const [data, setData] = useState<T | null>(null); // sketch: no error handling
  useEffect(() => { fetch(url).then((res) => res.json()).then(setData); }, [url]);
  return data;
}

const user = useFetch<User>("/api/user"); // infers User type
const posts = useFetch<Post[]>("/api/posts"); // infers Post[]


Implementing Repository Patterns with Generics (Repository<T>)


You can keep your data access code DRY with generic repositories:


class Repository<T> {
  private items: T[] = [];
  add(item: T) { this.items.push(item); }
  findAll(): T[] { return this.items; }
}

const userRepo = new Repository<User>();
const postRepo = new Repository<Post>();
userRepo.add({ id: 1, name: "Alice" });


Debugging Common TypeScript Generic Errors


  • Implicit any: TypeScript may infer "any" if the generic type can’t be determined. Always provide explicit types if possible.
  • Type constraints: Use "extends" for generic bounds: function logLength<T extends { length: number }>(val: T) {...}
  • Mismatched types: Check your assignments carefully; type mismatches show up at compile-time.
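The `extends` bound from the list above in full (logLength is the example name used there):

```typescript
// Constraining T with `extends` fixes the "property 'length' does not
// exist on type 'T'" error: T must now have a numeric length.
function logLength<T extends { length: number }>(val: T): number {
  return val.length;
}

logLength("hello");   // ok: strings have a length
logLength([1, 2, 3]); // ok: arrays have a length
// logLength(42);     // compile-time error: number has no length
```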

How Generics Are Used in Popular Libraries

  • React Query: types queries with generics: useQuery<T>()
  • Redux Toolkit: infers slice state with generics
  • Prisma: returns typed results for any model defined in the schema
  • Axios: passes response types with generics: axios.get<T>()
Generics power much of TypeScript’s ecosystem, enabling autocomplete, error checking, and more.

Performance and Maintainability Benefits of Generics


Performance: TypeScript generics don’t impact runtime speed; they only exist at compile time for safety.
Maintainability: Generics drastically reduce code duplication, and generic logic is clearer and easier to update. When you change an underlying type or data model, the compiler flags every affected usage across the codebase.


Summary


Generics in TypeScript make it possible to write one set of logic that adapts to any type, ensuring both reusability and strong type safety. They’re used everywhere: in array utilities, React hooks, repositories, and popular libraries like Axios and React Query.


By mastering generics, you not only prevent subtle type errors but also boost maintainability and performance across your projects. Whether you’re handling simple lists or crafting advanced patterns, generics give your code the flexibility and reliability it needs to scale confidently.

Oct 21, 2025

React 19.2 Features Explained: Activity API, useEffectEvent, and More

Hey developers, React 19.2 is here, and it’s not just another minor patch. This update focuses on making your apps faster, smarter, and easier to manage. It brings new tools like the <Activity /> API, useEffectEvent, and cacheSignal, all aimed at improving performance and simplifying your workflow.


If you’ve ever struggled with retaining state between hidden components, dealing with useEffect dependency chaos, or optimizing server rendering, React 19.2 has your back. You can now pause UIs without losing data, handle effects more efficiently, and stream server-rendered content faster.


In short, this version makes React feel more reactive, smart enough to pause background work, resume instantly, and reduce unnecessary renders, all while keeping your code cleaner and simpler to maintain.




What’s New in React 19.2


1. The <Activity /> API


The new <Activity /> component lets you hide parts of the UI without destroying their state. In earlier versions, hiding a component often meant unmounting it, losing any temporary values or scroll positions.


The <Activity /> component changes that: it pauses rendering and effects when hidden, but keeps everything in memory. When visible again, it resumes seamlessly.


Example:


import { Activity } from 'react';

function Dashboard({ isHidden }) {
  return (
    <Activity mode={isHidden ? 'hidden' : 'visible'}>
      <UserProfile />
    </Activity>
  );
}


That means less jank when switching between views, for example between tabs, modals, or routes.


2. The useEffectEvent Hook


A major pain point in React has been stale closures in effects when your effect captures outdated values of props or state.


useEffectEvent finally fixes this. It lets you define stable event handlers inside effects that always have access to the latest state and props without repeatedly re-running the effect.


Before (without useEffectEvent):


useEffect(() => {
  const handle = () => console.log(count); // may log old count
  window.addEventListener('click', handle);
  return () => window.removeEventListener('click', handle);
}, [count]); // keeps re-attaching!


After (React 19.2):


const onClick = useEffectEvent(() => {
  console.log(count); // always latest count
});

useEffect(() => {
  window.addEventListener('click', onClick);
  return () => window.removeEventListener('click', onClick);
}, []); // cleaner and stable


You now get both optimal performance and correct behaviour: no stale-state bugs, no redundant effect restarts.


3. cacheSignal and Smart State Caching


React 19.2 introduces cacheSignal, a new primitive for caching state and resources efficiently. It helps avoid repeated data fetching or computation on re-renders, particularly when using Suspense or Server Components.


This is especially powerful when combined with Server-Side Rendering (SSR) or Web Streams, enabling partial rehydration and faster page transitions.
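As a hedged sketch of how this might look, based on the React documentation's pairing of cacheSignal with cache in Server Components (the endpoint URL is hypothetical):

```javascript
import { cache, cacheSignal } from "react";

// Sketch only: this runs inside a React Server Components environment.
// cacheSignal() returns an AbortSignal that fires when the render's
// cache lifetime ends, so abandoned fetches are cancelled automatically.
const getUser = cache(async (id) => {
  const res = await fetch(`https://example.com/api/users/${id}`, {
    signal: cacheSignal(),
  });
  return res.json();
});
```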


4. Better SSR and Streaming


React 19.2 continues improving SSR (Server-Side Rendering) with web streams, allowing sections of the UI to progressively load without waiting for all data to finish.


Benefits:

  • Faster first paint time
  • Smoother hydration
  • Less JavaScript bloat
Large apps can now stream HTML to the browser in chunks, and React hydrates these chunks incrementally, a massive boost for user perception and SEO.


5. Batched Suspense Boundaries


Another subtle but impactful update: React now batches Suspense boundaries. When multiple components suspend (wait for async data), React can handle them as a group to minimize unnecessary re-renders, improving both memory usage and UI consistency.


This also helps with "pre-warming" neighbouring Suspense trees: React can prepare them in the background before showing the fallback, making the loading experience even smoother.


Problems React 19.2 Solves


Before this release, developers frequently struggled with:
  • Lost UI state when hiding components
  • Re-renders due to stale closures in useEffect
  • Lagging SSR pipelines in large applications
  • Over-fetching or repeated async triggers
React 19.2 addresses all of these with better built-in memory handling, composable suspense, and intelligent caching mechanisms.


How React 19.2 Improves Developer Experience (DX)


These updates focus on giving developers:
  • Simpler effect management via useEffectEvent
  • Smoother UI transitions via the <Activity /> API
  • Better state retention when toggling between screens
  • Optimized streaming SSR for content-heavy apps
  • Improved debugging tools via enhanced DevTools integration
Overall, you spend less time writing workaround logic and more time building features that feel instantly responsive.


Migration and Upgrade Tips


The best news? React 19.2 has no major breaking changes.


Still, here’s what to keep in mind:

  • Update to the latest ESLint plugin for React Hooks (v6.1.1) to support useEffectEvent.
  • Refactor long-lived useEffect dependencies using the new pattern.
  • Test UI components wrapped in <Activity /> for compatibility with older routers or layout systems.
  • Upgrade React DevTools to get the new panels that visualise Suspense and Activity states.

You can install React 19.2 easily via:

npm install react@19.2 react-dom@19.2

Then re-run your app; no further breaking changes are expected.


Future Implications: Why React 19.2 Is a Game-Changer


React 19.2 feels like the foundation for a more self-optimizing, async-friendly future.


Here’s why:

  • The <Activity /> component and Suspense improvements mark a shift toward preserving background work, not discarding it.
  • useEffectEvent modernises React’s approach to effects, linking state updates, transitions, and event handling in a cleaner flow.
  • With cacheSignal, startTransition improvements, and streaming SSR, React is now blending client and server logic more naturally.
This means: fewer third-party hacks, smoother DX, and more performance “by default.”


Summary


React 19.2 sharpens the tools developers rely on daily. The <Activity /> API keeps hidden UI elements alive without reloading their state, improving navigation speed. useEffectEvent finally fixes stale closures, making effects cleaner and safer. And cacheSignal brings better caching for async data, especially useful with server components.


On top of that, React enhances SSR with Web Streams, batches Suspense boundaries, and adds partial pre-rendering for faster initial loads.


Upgrading is smooth, with no major breaking changes. Just install the latest version and you’re ready to use the new APIs.


Overall, React 19.2 feels like a maturity milestone. It balances performance, simplicity, and scalability, giving developers more control while asking for less manual optimization. This release shows React’s commitment to being both developer-friendly and production-ready for the next generation of web apps.


Oct 20, 2025

React Code Splitting Made Easy: Lazy Loading & Suspense Guide

As your React applications grow, so does the bundle size, and that means slower loading times for users. Route-based code splitting solves this by loading only the JavaScript required for the page a user is viewing. Instead of sending every component in one huge file, React can intelligently load code “on demand,” giving your app blazing-fast speed and a better user experience.


This guide will walk through what route-based code splitting is, how to implement it with React.lazy and Suspense, why it improves performance, and practical examples that beginners can easily follow.




What is Route-Based Code Splitting?


In simple terms, Code Splitting breaks your React app into smaller chunks instead of one big bundle. Route-based code splitting splits your code by routes, meaning only the code needed for that specific route is loaded when a user navigates to it.


Problem (Without Code Splitting)


When a React app grows, all your pages and dependencies are bundled into one large file by Webpack. As a result, even visiting the homepage downloads the entire app — including code for other unused pages like login, profile, and dashboard.


Solution


With route-based code splitting, the homepage only fetches the code it needs. When users navigate to another route, React dynamically loads that route’s code using React.lazy and Suspense.


How to Implement Route-Based Code Splitting (React Router v6)


You can achieve route-based code splitting easily using two features:


  • React.lazy() to lazy-load a component
  • React.Suspense to show fallback UI while the component loads


Step-by-Step Example


import React, { lazy, Suspense } from "react";
import { BrowserRouter as Router, Routes, Route } from "react-router-dom";

// Lazy load components
const Home = lazy(() => import("./pages/Home"));
const About = lazy(() => import("./pages/About"));
const Dashboard = lazy(() => import("./pages/Dashboard"));

function App() {
  return (
    <Router>
      <Suspense fallback={<div>Loading...</div>}>
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/about" element={<About />} />
          <Route path="/dashboard" element={<Dashboard />} />
        </Routes>
      </Suspense>
    </Router>
  );
}

export default App;

Here:

  • React.lazy() ensures that each route component is loaded only when needed.
  • Suspense displays a fallback (like a spinner) while the lazy-loaded component is fetched.


React Lazy and Suspense Explained


React.lazy()


React.lazy() allows you to load components dynamically with import(). When React encounters a lazy component, it only fetches its code bundle when rendered for the first time.


const Profile = React.lazy(() => import('./Profile'));


React.Suspense


Suspense wraps lazy components and provides a fallback UI, often a loading spinner or skeleton, while the component is fetched.


<Suspense fallback={<div>Loading profile...</div>}>
  <Profile />
</Suspense>


Adding a Fallback UI with Suspense


For a better user experience during loading, replace plain text with a spinner or skeleton loader.


import React from "react";

function Loader() {
  return <div className="spinner">Loading...</div>;
}

<Suspense fallback={<Loader />}>
  <Dashboard />
</Suspense>

You can use libraries like react-spinners or MUI Skeleton for styling the fallback easily.


Lazy Loading Components Conditionally


You can also load components conditionally, not just based on routes.

const Chart = lazy(() => import("./Chart"));

{showChart && (
  <Suspense fallback={<div>Loading Chart...</div>}>
    <Chart />
  </Suspense>
)}

This ensures heavy components like charts, modals, or editors only load when users actually need them.


Preloading and Prefetching Routes for Faster UX


You can preload routes before users navigate to them using dynamic imports.

import("./pages/Dashboard");

Preloading commonly accessed pages improves perceived responsiveness in SPAs.
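For instance, a route chunk can be warmed up when the user hovers a link, on the assumption that a hover usually precedes a click. The element id here is hypothetical, and the DOM wiring is guarded so the snippet is inert outside a browser:

```javascript
// Sketch: start downloading the Dashboard chunk on hover.
// Repeated calls are cheap because the module cache dedupes import().
const preloadDashboard = () => import("./pages/Dashboard");

if (typeof document !== "undefined") {
  document
    .getElementById("dashboard-link")
    ?.addEventListener("mouseenter", preloadDashboard, { once: true });
}
```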


Error Boundaries for Lazy-Loaded Routes


React.Suspense doesn’t handle load errors, so always pair it with error boundaries to display friendly fallback screens instead of breaking your app.

class ErrorBoundary extends React.Component {
  state = { hasError: false };
  static getDerivedStateFromError() {
    return { hasError: true };
  }
  render() {
    return this.state.hasError ? <h3>Something went wrong!</h3> : this.props.children;
  }
}

Wrap your routes:

<ErrorBoundary>
  <Suspense fallback={<Loader />}>
    <Routes>{/* routes here */}</Routes>
  </Suspense>
</ErrorBoundary>


Code Splitting with Webpack and React Router


Webpack natively supports code splitting via dynamic import(). When you use React.lazy, Webpack automatically generates chunks based on dynamic imports.

import("./Dashboard").then(module => {
  const Dashboard = module.default;
});


You can view generated chunks in your project’s build/static/js folder after running npm run build.
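Webpack also supports "magic comments" inside dynamic imports; for example, webpackChunkName gives the emitted chunk a readable file name instead of a numeric id. A small sketch, assuming a ./pages/Dashboard module:

```javascript
// Webpack reads the comment at build time; the emitted chunk is named
// dashboard.[contenthash].js rather than something like 3.[hash].js.
const loadDashboard = () =>
  import(/* webpackChunkName: "dashboard" */ "./pages/Dashboard");
```

The same loader function can be passed straight to React.lazy(loadDashboard), so naming the chunk costs nothing at the call site.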


How Code Splitting Improves Performance


  • Reduces initial load time: Only essential JavaScript for the current view loads.
  • Optimizes network usage: Large bundles aren’t downloaded all at once.
  • Improves SEO and Core Web Vitals: Faster render time lowers page bounce rate.
  • Enhances perceived performance: Fallbacks and skeletons maintain fluid UX.
Code splitting ensures your React app stays lightweight and responsive even as it scales.

Best Practices for Route-Based Code Splitting


  • Split code only where necessary — don’t overuse lazy loading.
  • Provide meaningful and responsive fallback UIs.
  • Avoid lazy loading very small components; the overhead may not be worth it.
  • Preload frequently used pages for smooth transitions.
  • Monitor bundle sizes using tools like Webpack Bundle Analyzer.
  • Combine route-based and component-based splitting for best results.

Summary


Route-based code splitting is one of the easiest and most effective performance optimizations in React. By splitting large bundles, loading only what’s necessary, and pairing this with React.lazy and Suspense, your application will load faster and feel smoother.

Start simple: lazy-load your main routes first, add proper fallback UIs, and consider preloading critical pages. This small optimization can make a huge difference in the perceived performance of your React application and in user satisfaction.