HTML-in-Canvas: Render Real DOM Elements Inside Canvas with the New drawElement API

The web platform just gained a superpower. The WICG HTML-in-Canvas proposal — now behind a flag in Chromium — lets you render real, interactive HTML elements directly into a `<canvas>`. Forms, text, dashboards, entire UI widgets — drawn into 2D or WebGL contexts while keeping full DOM accessibility.
No more html2canvas hacks. No more foreignObject SVG workarounds. Native, first-class HTML rendering inside canvas.
In this tutorial, you'll build 4 demos that showcase what this API makes possible — and learn the architecture behind it.
Prerequisites
- Node.js 18+
- Next.js 14+ (App Router)
- Chrome Canary with `chrome://flags/#canvas-draw-element` enabled
- Basic Canvas API knowledge
The Problem This Solves
Until now, rendering HTML content inside a canvas required painful workarounds:
| Approach | Limitation |
|---|---|
| html2canvas | Re-renders DOM as bitmap — no interactivity, inaccurate rendering |
| foreignObject in SVG | Tainted canvas, CORS issues, no WebGL support |
| Manual Canvas drawing | You lose all of CSS, accessibility, i18n, text layout |
| WebGL text | Requires font atlases, no complex layouts, massive overhead |
The HTML-in-Canvas API solves all of these by letting the browser's own rendering engine draw DOM elements into a canvas context.
The API: Three Primitives
1. layoutsubtree Attribute
```html
<canvas layoutsubtree width="800" height="600">
  <!-- These children are REAL DOM elements -->
  <div class="card">
    <h2>I'm a real heading</h2>
    <button>I'm a real button</button>
  </div>
</canvas>
```

The `layoutsubtree` attribute tells the browser: "Lay out my children as normal DOM elements, but also make them available for drawing into the canvas." Children get:
- A stacking context
- Containing block behavior
- Paint containment
- Full hit testing and accessibility
2. drawElement() for 2D Canvas
```js
const ctx = canvas.getContext('2d');
const transform = ctx.drawElement(childElement, x, y, { width, height });
childElement.style.transform = transform;
```

This draws the child element — including all its CSS styles, pseudo-elements, shadows, and layout — into the canvas at position (x, y). The returned transform aligns the DOM position with the drawn position so events work correctly.
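The proposal leaves the exact transform value to the engine, but conceptually it is a translation from where the element was laid out to where it was drawn. A hypothetical illustration of that idea (`alignmentTransform` is not part of the API, just a sketch of the geometry):

```typescript
// Hypothetical: the CSS translation that moves an element laid out at
// (laidOutX, laidOutY) so it sits over the drawn position (drawnX, drawnY),
// which is what keeps hit testing aligned with the pixels.
function alignmentTransform(
  laidOutX: number, laidOutY: number,
  drawnX: number, drawnY: number
): string {
  return `translate(${drawnX - laidOutX}px, ${drawnY - laidOutY}px)`;
}

// alignmentTransform(0, 0, 200, 100) → "translate(200px, 100px)"
```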
3. texElement2D() for WebGL
```js
const gl = canvas.getContext('webgl2');
gl.texElement2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, childElement);
```

This uploads the element's rendering as a WebGL texture — enabling 3D transforms, lighting, post-processing, and shader effects on real HTML content.
Setup
```bash
npx create-next-app@latest html-in-canvas-demo --typescript --tailwind --app
cd html-in-canvas-demo
```

Demo 1: Drawing HTML Into Canvas (Basic)
The simplest use case — take an HTML element and draw it into a 2D canvas.
```tsx
// app/basic/page.tsx
'use client';

import { useEffect, useRef } from 'react';

export default function BasicDemo() {
  const canvasRef = useRef<HTMLCanvasElement>(null);
  const cardRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const canvas = canvasRef.current;
    const card = cardRef.current;
    if (!canvas || !card) return;

    // Cast to any: the experimental drawElement API has no TS types yet
    const ctx = canvas.getContext('2d') as any;
    canvas.width = 800;
    canvas.height = 500;

    // The paint event fires when children change
    const onPaint = () => {
      ctx.clearRect(0, 0, 800, 500);

      // Draw dark background
      ctx.fillStyle = '#0f172a';
      ctx.fillRect(0, 0, 800, 500);

      // Draw the HTML card element at position (200, 100)
      // drawElement renders the FULL styled DOM subtree
      const transform = ctx.drawElement(card, 200, 100, {
        width: 400,
        height: 300,
      });

      // Apply transform so mouse events hit the right spot
      card.style.transform = transform;
    };
    canvas.addEventListener('paint', onPaint);
    return () => canvas.removeEventListener('paint', onPaint);
  }, []);

  return (
    <canvas ref={canvasRef} layoutsubtree width={800} height={500}>
      {/* This is a REAL DOM element inside the canvas */}
      <div
        ref={cardRef}
        className="rounded-xl bg-slate-800 p-8 text-white shadow-2xl"
      >
        <h2 className="text-2xl font-bold mb-4">Hello from the DOM</h2>
        <p className="text-slate-300 mb-6">
          This card is a real HTML element — with full CSS styling,
          accessibility, and event handling — drawn into a canvas.
        </p>
        <button
          className="rounded-lg bg-sky-500 px-6 py-2 font-semibold
                     hover:bg-sky-400 transition-colors"
          onClick={() => alert('Button clicked! Events work.')}
        >
          Click Me
        </button>
      </div>
    </canvas>
  );
}
```

What's happening:
- The `layoutsubtree` attribute tells Chrome to lay out the `<div>` as a normal DOM element
- `ctx.drawElement()` renders it into the canvas at `(200, 100)` with the specified dimensions
- The returned `transform` is applied back to the element so click events map to the correct position
- The `paint` event fires whenever the child element changes — no manual polling needed
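The (200, 100) position used above is just the arithmetic for centering a 400×300 element in an 800×500 canvas. If you draw several elements, a small pure helper keeps that math testable (a hypothetical helper, not part of the API):

```typescript
// Computes the top-left position that centers an element in the canvas.
function centerPosition(
  canvasW: number, canvasH: number,
  elemW: number, elemH: number
): { x: number; y: number } {
  return {
    x: Math.round((canvasW - elemW) / 2),
    y: Math.round((canvasH - elemH) / 2),
  };
}

// centerPosition(800, 500, 400, 300) → { x: 200, y: 100 },
// exactly the coordinates passed to drawElement in the demo
```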
Demo 2: Interactive Form in Canvas
Real forms — with focus states, keyboard navigation, and validation — inside a canvas.
```tsx
// app/form/page.tsx
'use client';

import { useEffect, useRef, useState } from 'react';

export default function FormDemo() {
  const canvasRef = useRef<HTMLCanvasElement>(null);
  const formRef = useRef<HTMLFormElement>(null);
  const [submitted, setSubmitted] = useState(false);

  useEffect(() => {
    const canvas = canvasRef.current;
    const form = formRef.current;
    if (!canvas || !form) return;

    // Cast to any: the experimental drawElement API has no TS types yet
    const ctx = canvas.getContext('2d') as any;
    canvas.width = 900;
    canvas.height = 600;

    const onPaint = () => {
      ctx.clearRect(0, 0, 900, 600);

      // Background gradient
      const grad = ctx.createLinearGradient(0, 0, 900, 600);
      grad.addColorStop(0, '#0f172a');
      grad.addColorStop(1, '#1e1b4b');
      ctx.fillStyle = grad;
      ctx.fillRect(0, 0, 900, 600);

      // Decorative circles
      ctx.beginPath();
      ctx.arc(700, 100, 200, 0, Math.PI * 2);
      ctx.fillStyle = '#38bdf820';
      ctx.fill();
      ctx.beginPath();
      ctx.arc(200, 500, 150, 0, Math.PI * 2);
      ctx.fillStyle = '#a78bfa15';
      ctx.fill();

      // Draw the form at center
      const transform = ctx.drawElement(form, 250, 80, {
        width: 400,
        height: 440,
      });
      form.style.transform = transform;
    };
    canvas.addEventListener('paint', onPaint);
    return () => canvas.removeEventListener('paint', onPaint);
  }, []);

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    setSubmitted(true);
    setTimeout(() => setSubmitted(false), 2000);
  };

  return (
    <canvas ref={canvasRef} layoutsubtree width={900} height={600}>
      <form
        ref={formRef}
        onSubmit={handleSubmit}
        className="rounded-2xl bg-slate-800/90 backdrop-blur p-8 shadow-2xl
                   border border-slate-700"
      >
        <h2 className="text-xl font-bold text-white mb-6">Contact Us</h2>
        <label className="block mb-4">
          <span className="text-sm text-slate-400">Name</span>
          <input
            type="text"
            required
            className="mt-1 w-full rounded-lg bg-slate-900 border border-slate-600
                       px-4 py-2.5 text-white focus:border-sky-400
                       focus:ring-2 focus:ring-sky-400/30 outline-none"
            placeholder="Your name"
          />
        </label>
        <label className="block mb-4">
          <span className="text-sm text-slate-400">Email</span>
          <input
            type="email"
            required
            className="mt-1 w-full rounded-lg bg-slate-900 border border-slate-600
                       px-4 py-2.5 text-white focus:border-sky-400
                       focus:ring-2 focus:ring-sky-400/30 outline-none"
            placeholder="you@example.com"
          />
        </label>
        <label className="block mb-6">
          <span className="text-sm text-slate-400">Message</span>
          <textarea
            rows={4}
            className="mt-1 w-full rounded-lg bg-slate-900 border border-slate-600
                       px-4 py-2.5 text-white focus:border-sky-400
                       focus:ring-2 focus:ring-sky-400/30 outline-none resize-none"
            placeholder="What can we help with?"
          />
        </label>
        <button
          type="submit"
          className={`w-full rounded-lg py-3 font-semibold transition-all ${
            submitted
              ? 'bg-green-500 text-white'
              : 'bg-sky-500 text-white hover:bg-sky-400'
          }`}
        >
          {submitted ? 'Sent!' : 'Send Message'}
        </button>
      </form>
    </canvas>
  );
}
```

Why this matters: The form is a real DOM element. Screen readers see it. Keyboard Tab works. Browser autofill works. Password managers work. All rendered into a canvas with custom decorative backgrounds that would be impossible with pure DOM.
Demo 3: Dashboard Widgets in Canvas
Compose multiple HTML widgets into a single canvas — with GPU-accelerated compositing.
```tsx
// app/dashboard/page.tsx
'use client';

import { useEffect, useRef } from 'react';

function StatCard({ title, value, change, color }: {
  title: string; value: string; change: string; color: string;
}) {
  return (
    <div className="rounded-xl bg-slate-800 p-5 border border-slate-700">
      <p className="text-sm text-slate-400">{title}</p>
      <p className="text-2xl font-bold text-white mt-1">{value}</p>
      <p className="text-sm font-medium mt-2" style={{ color }}>
        {change}
      </p>
    </div>
  );
}

export default function DashboardDemo() {
  const canvasRef = useRef<HTMLCanvasElement>(null);
  const card1Ref = useRef<HTMLDivElement>(null);
  const card2Ref = useRef<HTMLDivElement>(null);
  const card3Ref = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const canvas = canvasRef.current;
    if (!canvas) return;

    // Cast to any: the experimental drawElement API has no TS types yet
    const ctx = canvas.getContext('2d') as any;
    canvas.width = 900;
    canvas.height = 500;
    const cards = [card1Ref, card2Ref, card3Ref];

    const onPaint = () => {
      ctx.clearRect(0, 0, 900, 500);
      ctx.fillStyle = '#0f172a';
      ctx.fillRect(0, 0, 900, 500);

      // Title drawn with Canvas API
      ctx.fillStyle = '#f8fafc';
      ctx.font = 'bold 24px Inter, system-ui, sans-serif';
      ctx.fillText('Analytics Dashboard', 30, 40);

      // Draw each stat card at different positions
      const positions = [
        { x: 30, y: 70 },
        { x: 310, y: 70 },
        { x: 590, y: 70 },
      ];
      cards.forEach((ref, i) => {
        if (ref.current) {
          const t = ctx.drawElement(ref.current, positions[i].x, positions[i].y, {
            width: 250,
            height: 120,
          });
          ref.current.style.transform = t;
        }
      });

      // Draw a chart with Canvas API below the HTML cards
      drawChart(ctx, 30, 220, 840, 250);
    };
    canvas.addEventListener('paint', onPaint);
    return () => canvas.removeEventListener('paint', onPaint);
  }, []);

  return (
    <canvas ref={canvasRef} layoutsubtree width={900} height={500}>
      <div ref={card1Ref}>
        <StatCard title="Revenue" value="$12,847" change="+12.5%" color="#4ade80" />
      </div>
      <div ref={card2Ref}>
        <StatCard title="Users" value="3,429" change="+8.2%" color="#38bdf8" />
      </div>
      <div ref={card3Ref}>
        <StatCard title="Orders" value="842" change="+23.1%" color="#a78bfa" />
      </div>
    </canvas>
  );
}

function drawChart(
  ctx: CanvasRenderingContext2D,
  x: number, y: number, w: number, h: number
) {
  // Chart background
  ctx.fillStyle = '#1e293b';
  ctx.beginPath();
  ctx.roundRect(x, y, w, h, 12);
  ctx.fill();

  // Chart line
  const points = [40, 65, 45, 80, 60, 90, 75, 95, 70, 100, 85, 110];
  ctx.beginPath();
  ctx.strokeStyle = '#38bdf8';
  ctx.lineWidth = 2.5;
  points.forEach((p, i) => {
    const px = x + 30 + (i / (points.length - 1)) * (w - 60);
    const py = y + h - 30 - (p / 120) * (h - 60);
    if (i === 0) ctx.moveTo(px, py);
    else ctx.lineTo(px, py);
  });
  ctx.stroke();

  // Gradient fill under the line (the last point sits at x + w - 30)
  const lastPx = x + w - 30;
  ctx.lineTo(lastPx, y + h - 30);
  ctx.lineTo(x + 30, y + h - 30);
  ctx.closePath();
  const grad = ctx.createLinearGradient(0, y, 0, y + h);
  grad.addColorStop(0, '#38bdf830');
  grad.addColorStop(1, '#38bdf805');
  ctx.fillStyle = grad;
  ctx.fill();

  ctx.fillStyle = '#f8fafc';
  ctx.font = 'bold 14px Inter, system-ui';
  ctx.fillText('Revenue Over Time', x + 16, y + 28);
}
```

The power here: HTML stat cards give you full CSS styling, hover states, and accessibility — while the chart below is drawn with the Canvas API for pixel-perfect rendering. The paint event keeps everything in sync. One rendering surface, best of both worlds.
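The pixel-mapping arithmetic inside drawChart (30px padding, values scaled against a maximum of 120) is easy to get wrong. Pulling it into a pure helper makes it unit-testable away from any canvas; `maxValue` and `pad` are parameters I've assumed here, matching the constants in the chart code:

```typescript
// Maps a data value at position `index` (of `count` points) into canvas
// pixel coordinates, using the same formula as drawChart above.
function chartPoint(
  value: number, index: number, count: number,
  x: number, y: number, w: number, h: number,
  maxValue = 120, pad = 30
): { px: number; py: number } {
  const px = x + pad + (index / (count - 1)) * (w - 2 * pad);
  const py = y + h - pad - (value / maxValue) * (h - 2 * pad);
  return { px, py };
}

// The last point of a 12-point series lands flush against the right padding:
// chartPoint(120, 11, 12, 30, 220, 840, 250) → { px: 840, py: 250 }
```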
Demo 4: HTML as WebGL Texture (3D Card)
The most exciting primitive — upload an HTML element as a WebGL texture and apply 3D transforms to it.
```tsx
// app/3d-card/page.tsx
'use client';

import { useEffect, useRef } from 'react';

export default function ThreeDCardDemo() {
  const canvasRef = useRef<HTMLCanvasElement>(null);
  const profileRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const canvas = canvasRef.current;
    const profile = profileRef.current;
    if (!canvas || !profile) return;

    // Cast to any: the experimental texElement2D API has no TS types yet
    const gl = canvas.getContext('webgl2') as any;
    canvas.width = 800;
    canvas.height = 600;

    // Create texture from HTML element
    const texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);

    // Upload HTML element as texture
    // texElement2D renders the element's full styled output
    gl.texElement2D(
      gl.TEXTURE_2D,    // target
      0,                // level
      gl.RGBA,          // internal format
      gl.RGBA,          // format
      gl.UNSIGNED_BYTE, // type
      profile           // the HTML element
    );

    // Set texture parameters
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

    // Now use this texture in your WebGL scene
    // Apply perspective transforms, lighting, reflections...
    // The HTML content becomes a first-class 3D object
    renderScene(gl, texture);
  }, []);

  return (
    <canvas ref={canvasRef} layoutsubtree width={800} height={600}>
      <div
        ref={profileRef}
        className="w-72 rounded-2xl bg-gradient-to-b from-slate-800 to-slate-900
                   p-8 text-center border border-slate-700 shadow-2xl"
      >
        <div className="w-20 h-20 mx-auto rounded-full bg-gradient-to-br
                        from-sky-400 to-violet-500 flex items-center
                        justify-center text-3xl font-bold text-white mb-4">
          N
        </div>
        <h3 className="text-xl font-bold text-white">Noqta Agent</h3>
        <p className="text-sky-400 text-sm mt-1">AI Development Agency</p>
        <div className="flex justify-around mt-6 text-center">
          <div>
            <p className="text-lg font-bold text-white">127</p>
            <p className="text-xs text-slate-400">Projects</p>
          </div>
          <div>
            <p className="text-lg font-bold text-white">48</p>
            <p className="text-xs text-slate-400">Clients</p>
          </div>
          <div>
            <p className="text-lg font-bold text-white">2.4k</p>
            <p className="text-xs text-slate-400">Stars</p>
          </div>
        </div>
        <button className="mt-6 w-full rounded-lg bg-gradient-to-r
                           from-sky-500 to-violet-500 py-2.5 font-semibold
                           text-white hover:opacity-90 transition">
          View Profile
        </button>
      </div>
    </canvas>
  );
}

function renderScene(gl: WebGL2RenderingContext, texture: WebGLTexture | null) {
  // Vertex shader for a rotating card
  const vsSource = `#version 300 es
    in vec4 aPosition;
    in vec2 aTexCoord;
    uniform mat4 uProjection;
    uniform mat4 uModelView;
    out vec2 vTexCoord;
    void main() {
      gl_Position = uProjection * uModelView * aPosition;
      vTexCoord = aTexCoord;
    }
  `;

  // Fragment shader — applies the HTML texture
  const fsSource = `#version 300 es
    precision highp float;
    in vec2 vTexCoord;
    uniform sampler2D uTexture;
    out vec4 fragColor;
    void main() {
      fragColor = texture(uTexture, vTexCoord);
    }
  `;

  // ... standard WebGL setup: compile shaders, create program,
  // set up card geometry, animate rotation with requestAnimationFrame
  // The key insight: the texture IS the HTML element's rendered output
}
```

What texElement2D enables:
- 3D product cards with real HTML content
- Perspective text effects with full CSS typography
- Shader-based transitions between HTML views
- Mixed reality UIs with HTML in WebXR scenes
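The renderScene sketch in Demo 4 still needs a uModelView matrix for the rotating card. If you don't want a math library for one matrix, a minimal column-major builder (rotate about Y, then push the card back along Z) is enough; this is a sketch of one way to build it, not part of the API:

```typescript
// Column-major 4x4 suitable for gl.uniformMatrix4fv: rotation about the
// Y axis by `angle` radians, followed by a translation of `tz` along Z.
function rotateYTranslateZ(angle: number, tz: number): Float32Array {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  return new Float32Array([
     c, 0, -s, 0,  // column 0
     0, 1,  0, 0,  // column 1
     s, 0,  c, 0,  // column 2
     0, 0, tz, 1,  // column 3 (translation)
  ]);
}

// Per animation frame, with t in radians and the card pushed back 3 units:
// gl.uniformMatrix4fv(uModelViewLoc, false, rotateYTranslateZ(t, -3));
```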
How the paint Event Works
The paint event is the synchronization mechanism. It fires when any canvas child changes — re-renders, animations, style changes.
```js
canvas.addEventListener('paint', () => {
  // A snapshot of all children's rendering is captured
  // BEFORE this event fires. When you call drawElement()
  // here, it uses that snapshot — no extra render pass.
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.drawElement(myElement, 0, 0);
});
```

This means:
- No double rendering — the browser captures children's pixels in the same compositing pass
- No manual `requestAnimationFrame` — the paint event fires at the right time
- Automatic batching — multiple child changes trigger a single paint event
Browser Support (April 2026)
| Browser | Status |
|---|---|
| Chrome Canary | Behind #canvas-draw-element flag |
| Chrome Stable | Expected Q3 2026 |
| Firefox | Under consideration |
| Safari | No signal yet |
For production use today, you'll need a progressive enhancement strategy:
```tsx
function CanvasWithFallback({ children }: { children: React.ReactNode }) {
  // Guard the constructor we actually probe, so this is safe during SSR
  const supportsDrawElement =
    typeof CanvasRenderingContext2D !== 'undefined' &&
    'drawElement' in CanvasRenderingContext2D.prototype;

  if (!supportsDrawElement) {
    // Render as normal DOM
    return <div className="canvas-fallback">{children}</div>;
  }

  return (
    <canvas layoutsubtree width={800} height={600}>
      {children}
    </canvas>
  );
}
```

What You've Built
| Demo | API Used | Capability |
|---|---|---|
| HTML in Canvas | drawElement() | Render any styled HTML into 2D canvas |
| Interactive Form | drawElement() + events | Full form with focus, validation, a11y |
| Dashboard | drawElement() + Canvas API | Mix HTML widgets with canvas-drawn charts |
| 3D Profile Card | texElement2D() | HTML content as WebGL texture with 3D transforms |
Why HTML-in-Canvas Changes Everything
Before this API, canvas and DOM were two separate worlds. You either:
- Used DOM and gave up canvas performance/effects
- Used canvas and gave up accessibility, i18n, text layout, events
- Used `html2canvas` and got a broken screenshot
HTML-in-Canvas bridges the gap. The browser's rendering engine draws DOM elements into canvas — with full fidelity, full interactivity, and GPU-accelerated compositing.
This enables a new class of web applications:
- Creative tools (Figma/Canva-like) with real text editing inside canvas
- Game UIs with accessible HTML menus overlaid on WebGL scenes
- Data visualization with HTML tooltips and labels inside canvas charts
- Presentation tools with 3D transitions between HTML slides
Building a canvas-heavy application? Our agents ship production Next.js with advanced Canvas, WebGL, and creative coding — $45/hr, human-in-the-loop. Book a free call
What to Build Next
- Figma-like editor with real text editing inside canvas objects
- Photo editor with HTML control panels rendered into the canvas workspace
- 3D portfolio with HTML profile cards floating in a WebGL scene
- Interactive data dashboard mixing D3 charts with HTML stat widgets
The HTML-in-Canvas API is the missing bridge between DOM and canvas. The days of choosing one or the other are over.
Need help building a canvas-powered product? From creative tools to data dashboards, our AI agents handle the complex Canvas/WebGL code while you focus on the product. Talk to an agent
Related Reading
- Build Stunning Text Effects with Pretext and Next.js
- WICG HTML-in-Canvas Proposal
- Chromium Status: HTML-in-Canvas
Canvas gave us pixels. DOM gave us structure. HTML-in-Canvas gives us both — at the same time, in the same element.