Posts in the DOM Category

Canvas-ing the Web

Published 9 hours past

Over the years, I’ve created an experiment or two that drew stuff to a <canvas> element: a wave function collapse experiment here, a crystallizing palette there.  After a while, I found a way to wire up a button so that clicking it would save the canvas’s contents to my computer as a PNG file.  Pretty cool, I thought.  Can I do the same thing with HTML+CSS structures?

An abstract image somewhat resembling a flower, rendered in dusky purples, greens, and similar colors.
First I generated it on a canvas, then I clicked a button to save it.

Turns out, no.  I could use, and often have used, Firefox’s “Screenshot node” menu entry in the web inspector, or the :screenshot command in Firefox’s console, but I couldn’t do it with an in-page button.  Because HTML nodes don’t go in <canvas>, you see, let alone styled and scripted ones.

Or they didn’t, until just recently, when Chrome shipped a flag-gated preview of the HTML-in-canvas API.  How it works is, you add a layoutsubtree attribute to a <canvas> element, and then you can put whatever HTML you want in there, with whatever CSS and JS you would normally apply to it.  Add a couple of magic JScantations, and what the browser would normally have painted to the page is painted to the canvas instead, at whatever speed the browser can manage (usually 60 frames per second or more, because web browsers are high-end first-person scrollers).
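In skeletal form, the pattern looks something like the following.  Fair warning: this is a sketch of a flag-gated preview, so every name here, especially the drawElement() call, should be treated as provisional rather than settled API.

```javascript
// The HTML side (shown as a comment so this file stays runnable):
//
// <canvas id="c" layoutsubtree width="640" height="360">
//   <div id="content">Any HTML, with any CSS and JS you like</div>
// </canvas>

// A tiny frame-loop helper: calls draw once per animation frame, forever.
// Split out as a plain function so the scheduling logic is visible.
function makeRenderLoop(draw, raf) {
	function tick() {
		draw();
		raf(tick);
	}
	return tick;
}

// Hypothetical wiring in a supporting browser:
// const canvas = document.getElementById("c");
// const ctx = canvas.getContext("2d");
// const el = document.getElementById("content");
// makeRenderLoop(() => {
// 	ctx.clearRect(0, 0, canvas.width, canvas.height);
// 	ctx.drawElement(el, 0, 0); // paint the live subtree into the canvas
// }, requestAnimationFrame)();
```

The commented-out wiring is the part that depends on the preview; the loop itself is just ordinary requestAnimationFrame scheduling.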

If you want to try all this out for yourself, I commend you to Amit Sheen’s “The Web Is Fun Again” over at the Frontend Masters blog, where he details how to get yourself set up for the wackiness this makes possible, and then shows some experiments.  Water ripples over your pages, lens distortions that follow the mouse pointer, chromatic aberrations!

Which, I admit, all sound really off-putting to the “I just want to use the web” folks among us.  What possible utility is there in having an input form that, say, makes ripples spread out from every character you type?  Or having dropdown menus fall to the bottom of the page, but still actually work?  Probably not a lot, unless you’re an expensive design studio working on a brag page.

But remember, this is how any new graphic advancement goes: we, by which I mean the collective web industry, start by doing really outré and eye-catching stuff that we later have cause to regret.  Remember parallax scrolling effects?  The early days of CSS animation?  Drop shadows?  There will be an initial period of excess, and then it will all settle down.

I’ve already skipped straight to the settle down part, though.

See, when I asked myself if I could render HTML+CSS on a <canvas> and then save the image to my computer, it wasn’t just me doing that “push at the limits of web features” thing I do sometimes.  I had an actual, practical use case in mind: I wanted to save social media banners and thumbnails from a browser-based tool I built for my work at Igalia, just by clicking or otherwise triggering a button.

If you’re subscribed to our YouTube channel, you’ve seen these thumbnails; ditto if you’re following us on Mastodon or Bluesky.  To produce those, I have an in-browser thing I built out of custom elements.  It’s where the super-slider pattern developed (though they have a different name in the tool).  I’m not going to link to the tool because it’s on our intranet and very few of you have a login, so here’s a screenshot of it in all its dweeb-designed semi-glory.

The banner-making tool being discussed, showing a number of panels with range slider inputs to set things like font size for various pieces of the banner.  There are also color inputs to change the coloration of both foreground and background elements, and a couple of places to drag and drop background or highlight images.
The banner maker, with a recent thumbnail already loaded in.

The text bits in the banner are all contenteditable HTML elements, and the various themes are managed with various blocks of CSS.  (And yeah, those range inputs are all “super sliders”.)  The point of all this being, I built it so that anyone at work could use it to make banners whenever they needed, without having to wait on me to do so.

What I’ve always wanted, in order to make things easy for anyone who isn’t me, is a “click this button to save the banner as an image” feature.  Anyone at Igalia could easily learn (if they didn’t already know) the web-inspector-or-console stuff I was using, of course, but it just felt so janky.  A touch embarrassing, if I’m being honest.

Well, now I have what I wanted.  In any browser that supports HTML-in-canvas, there is a button labeled “Download banner image”.  Right now, that’s recent Chrome with the proper developer flag enabled.  For all other browsers, there’s no button, and you just use the same web inspector screenshot tricks we’ve always relied on.
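Here is a sketch of how that button might get wired up; it is not the tool’s actual code.  The capability check is an assumption on my part: I probe the 2D context prototype for a drawElement() method, which matches the flag-gated preview at the time of writing but could easily change.  canvas.toBlob() and the temporary-anchor download trick, at least, are long-standard APIs.

```javascript
// Feature-detect HTML-in-canvas support (assumed method name).
function supportsHtmlInCanvas(ctxProto) {
	return typeof (ctxProto && ctxProto.drawElement) === "function";
}

// Turn a human-readable title into a reasonable download filename.
function pngFilename(title) {
	return title.trim().toLowerCase().replace(/\s+/g, "-") + ".png";
}

// Hypothetical wiring, in the browser:
// if (supportsHtmlInCanvas(CanvasRenderingContext2D.prototype)) {
// 	const button = document.createElement("button");
// 	button.textContent = "Download banner image";
// 	button.addEventListener("click", () => {
// 		const canvas = document.querySelector("#youtube_talks canvas");
// 		canvas.toBlob((blob) => {
// 			const a = document.createElement("a");
// 			a.href = URL.createObjectURL(blob);
// 			a.download = pngFilename("banner image");
// 			a.click();
// 			URL.revokeObjectURL(a.href);
// 		}, "image/png");
// 	});
// 	document.querySelector("#youtube_talks").append(button);
// }
```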

Making this happen wasn’t as easy as maybe that sounded, though.  I hit a couple of snags along the way, one of which was quite frustrating.  Those are what I actually brought you here to talk about.

The first snag was that I had to get the thumbnail preview into a <canvas> element without blowing the call stack.  To explain that, let me show you a rough skeleton of the tool’s markup.

<section id="youtube_talks">
	<thumb-panel class="text"> … </thumb-panel>
	<thumb-panel class="colors"> … </thumb-panel>
	<thumb-panel class="highlightImage"> … </thumb-panel>
	<thumb-panel class="backgroundImage"> … </thumb-panel>
	<thumb-panel class="icons"> … </thumb-panel>
	<thumb-panel class="scaler"> … </thumb-panel>
	<thumb-panel class="loader"> … </thumb-panel>
	<thumb-preview> … </thumb-preview>
</section>

As you can read, it’s basically all custom elements, each with their own connectedCallback() function to do whatever scripting magic needs to be done when the browser first encounters them.  To wrap that last element, the <thumb-preview>, inside a <canvas>, I needed to create a new canvas element, shift the preview element into the new canvas, and then insert the preview-bearing canvas, ending up with this structure.

<section id="youtube_talks">
	<thumb-panel class="text"> … </thumb-panel>
	<thumb-panel class="colors"> … </thumb-panel>
	<thumb-panel class="highlightImage"> … </thumb-panel>
	<thumb-panel class="backgroundImage"> … </thumb-panel>
	<thumb-panel class="icons"> … </thumb-panel>
	<thumb-panel class="scaler"> … </thumb-panel>
	<thumb-panel class="loader"> … </thumb-panel>
	<canvas layoutsubtree>
		<thumb-preview> … </thumb-preview>
	</canvas>
</section>

Thus, when the <thumb-preview> was loaded in, I had its connectedCallback() run a check to see if HTML-in-canvas is supported.  In situations where it is supported, I did what was needed to get to the above result.

At which point, since the <thumb-preview> is a custom element that was being placed into the DOM, it fired its connectedCallback(), thus starting the process again, creating a canvas and inserting the <thumb-preview> into the new canvas, which started the process again, recursing toward infinity.  Within milliseconds, the call stack limit was exceeded.

So… that wasn’t going to work.

I thought for a moment that I could avoid this by setting a flag variable to true and then checking for its existence in order to skip the whole canvas-creation-preview-insertion part, but I couldn’t figure out how to make that actually work.  Then I thought maybe I could sidestep the whole imbroglio using connectedMoveCallback(), but this wasn’t a move, it was a (re-)creation.

That callback was the route to fixing this problem, though.  You see, there is a way to move elements from one part of the DOM to another: Element.moveBefore().  There’s no moveAfter() or moveInto(), sadly, just “move this node to the spot right before some other node”.

Here’s how I made use of that feature:

let canvas = document.createElement('canvas');
canvas.setAttribute('layoutsubtree','');
canvas.setAttribute('width','1280');
canvas.setAttribute('height','720');

this.closest('section').appendChild(canvas);

let beacon = document.createElement('span');
canvas.appendChild(beacon);
canvas.moveBefore(this,beacon);
beacon.remove();

Yep.  I created a canvas, stuck the canvas into the closest ancestor section, created a span, stuck the span into the canvas, moved the preview element to right before the span, and then deleted the span.  (There may well be a better way to do this, one that my DuckDucking failed to turn up.  If so, please comment below!)

Oh, and here’s what gets executed when the preview is moved, instead of append-created:

connectedMoveCallback() {
	return;
}

Heckuva way to run a railroad.

At that point, I had the canvas where I wanted it and the preview where I wanted it, and the call stack remained un-blown.  Huzzah!  I then recited the magic JScantations to make the canvas actually render its subtree (see the “Web is Fun Again” article I linked earlier for details on this), and hey presto, DOM was being rendered into a canvas!  Then, when I clicked the button, the canvas was rendered as a PNG and my browser downloaded that PNG!  I had what I wanted!

Almost.

Because the second snag, you see, is that canvases have an explicit size.  They’re in effect required to, because otherwise they fall back to the default of 300×150 pixels, which is almost never what you want.  So if you want to see anything useful, you need to give them some dimensions.  I did that, as the code before showed, making the canvas 1280×720 (YouTube’s recommended thumbnail size) through setAttribute() methods.

The problem is, the default scale factor on the thumbnail preview is 0.75, which translates to 960×540.  Thus, when I clicked the image capture button, my browser downloaded a 1280×720 image with the thumbnail in the top left, and transparency below and to its right.

The previously-seen banner, which was rendered at 0.75 scale in an un-resized canvas, as shown in the macOS image editor Acorn.

“Just resize the canvas, ya dork!” you might say.  I certainly did (say that, I mean).  But if I set it to 960 wide and 540 tall, then when the scale was increased to 1, I got a 1280×720 DOM node cropped to its top left 960×540.  I needed to dynamically resize the canvas element to have its size match the size of the thumb-preview.
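To make the mismatch concrete, here’s the arithmetic as a tiny helper.  The function name and the wiring below are mine, not the tool’s actual code:

```javascript
// Compute the canvas dimensions needed to match a scaled 1280×720 preview.
function scaledSize(width, height, scale) {
	return {
		width: Math.round(width * scale),
		height: Math.round(height * scale),
	};
}

// Hypothetical use whenever the scale control changes:
// const { width, height } = scaledSize(1280, 720, Number(scaleInput.value));
// canvas.setAttribute("width", width);
// canvas.setAttribute("height", height);
```

At scale 0.75 that yields exactly the 960×540 canvas the preview needs; at scale 1, the full 1280×720.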

And this is where I ran headfirst into several brick walls, because forcing a canvas element to resize in all the situations you want it to, including when it’s spawned, is not nearly as easy as you’d think.  It wasn’t for me, anyway.  I bulled my way through to a solution, eventually, painfully, but I got there.

(As I write this, I’m wondering if I should have also created a <div>, appended the canvas to that, and then used CSS to change the div’s size while the canvas was set to have 100% height and width.  Or maybe have the DOM subtree pinned to 1280×720 and use CSS scale to change the canvas size visually.  Or perhaps some kind of ResizeObserver shenanigans.  Or probably just pass some parameters to the HTML-in-canvas drawElementImage method.  Hmmm.)
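Of those after-the-fact ideas, the ResizeObserver route might look roughly like this.  To be clear, this is untested conjecture rather than something I ran against the tool, and only the entry-unpacking helper is plain enough to stand on its own:

```javascript
// Pull the observed box size out of a ResizeObserverEntry-shaped object.
function entrySize(entry) {
	return {
		width: entry.contentRect.width,
		height: entry.contentRect.height,
	};
}

// Hypothetical wiring in the browser:
// const observer = new ResizeObserver((entries) => {
// 	for (const entry of entries) {
// 		const { width, height } = entrySize(entry);
// 		canvas.setAttribute("width", width);
// 		canvas.setAttribute("height", height);
// 	}
// });
// observer.observe(document.querySelector("thumb-preview"));
```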

Regardless of whether I overlooked a less frustrating way to do what I wanted, this does still point to a fundamental tension in the HTML-in-canvas approach: sizing.

Canvases do not, as a rule, grow or shrink to fit their contents.  DOM elements, as a rule, very much do, unless you force them not to.  HTML-in-canvas is taking a very fluid, flexible, mostly unbounded layout paradigm and rasterizing it, or at least some of it, into a very bounded window of a given size.  Sixty times (or more) every second, the browser is taking a screenshot the size of the canvas’s content box and pasting said screenshot into that content box.  You can do fun stuff to it along the way, with filters or shaders or canvas draw calls or whatever you can code up, so that each one of those screenshots gets jazzed up in some fashion, but at base, it’s still fundamentally screenshot, paste, screenshot, paste, over and over.

For use cases like mine, this isn’t really a big problem.  I am, in the end, trying to get a screenshot of a static part of the page.  HTML-in-canvas is very good for that.  It could completely revolutionize the browser-based slideshow genre.  The Reveal.js plugin landscape alone could be a sight to behold.

But in the general cases — the kinds of things we mostly do most every day — I don’t think this is likely to catch on.  We might develop some patterns to make it easier, some interesting hacks to overcome the mismatch, but I don’t think that will significantly move the needle.  On the other hand, if canvases can be made as flexible and content-wrapping as a bog-standard <div>, then I would expect to see a lot more usage.

Although if that can be done, then we wouldn’t really need to stay chained to HTML-in-canvas.  Instead, we could define a syntax to mark standard HTML elements as more visually manipulable, via an HTML attribute or CSS property or DOM method or all three.

We’ve gotten close to that before: CSS Houdini and Microsoft’s original filter property, to pick two examples.  We could try again.  Maybe the HTML-in-canvas period is how we figure out what that simpler syntax should look like, by figuring out what it should make possible, and what it should make easy.

I’d be okay with that.  How about you?


Many thanks to my colleagues Brian Kardell and Stephen Chenney for their early review and feedback on this post.


Targeting by Reference in the Shadow DOM

Published 4 months, 1 week past

I’ve long made it clear that I don’t particularly care for the whole Shadow DOM thing.  I believe I understand the problems it tries to solve, and I fully acknowledge that those are problems worth solving.  There are just a bunch of things about it that don’t feel right to me, like how it can break accessibility in a number of ways.

One of those things is how it breaks stuff like the commandFor attribute on <button>s, or the popoverTarget attribute, or a variety of ARIA attributes such as aria-labelledby.  This happens because a Shadow DOMmed component creates a whole separate node tree, which creates a barrier (for a lot of things, to be clear; this is just one class of them).

At least, that’s been the case.  There’s now a proposal to fix that, and prototype implementations in both Chrome and Safari!  In Chrome, it’s covered by the Experimental Web Platform features flag in chrome://flags.  In Safari, you open the Develop > Feature Flags… dialog, search for “referenceTarget”, and enable both flags.

(Disclosure: My employer, Igalia, with support from NLnet, did the WebKit implementation, and also a Gecko implementation that’s being reviewed as I write this.)

If you’re familiar with Shadow DOMming, you know that there are attributes for the <template> element like shadowRootClonable that set how the Shadow DOM for that particular component can be used.  The proposal at hand is for a shadowRootReferenceTarget attribute, which is a string used to identify an element within the Shadowed DOM tree that should be the actual target of any references.  This is backed by a ShadowRoot.referenceTarget API feature.

Take this simple setup as a quick example.

<label for="consent">I agree to join your marketing email list for some reason</label>
<sp-checkbox id="consent">
	<template>
		<input id="setting" type="checkbox" aria-checked="mixed">
		<span id="box"></span>
	</template>
</sp-checkbox>

Assume there’s some JavaScript to make that stuff inside the Shadow DOM work as intended.  (No, nothing this simple should really be a web component, but let’s assume that someone has created a whole multi-faceted component system for handling rich user interactions or whatever, and someone else has to use it for job-related reasons, and this is one small use of that system.)

The problem is, the <label> element’s for is pointing at consent, which is the ID of the component.  The actual thing that should be targeted is the <input> element with the ID of setting.  We can’t just change the markup to <label for="setting"> because that <input> is trapped in the Shadow tree, where none in the Light beyond may call for it.  So it just plain old doesn’t work.

Under the Reference Target proposal, one way to fix this would look something like this in HTML:

<label for="consent">I agree to join your marketing email list for some reason</label>
<sp-checkbox id="consent">
	<template shadowRootReferenceTarget="setting">
		<input id="setting" type="checkbox" aria-checked="mixed">
		<span id="box"></span>
	</template>
</sp-checkbox>

With this markup in place, if someone clicks/taps/otherwise activates the label, it points to the ID consent.  That Shadowed component takes that reference and redirects it to an effective target: the reference target identified in its shadowRootReferenceTarget attribute.

You could also set up the reference with JavaScript instead of an HTML template:

<label for="consent">I agree to join your marketing email list for some reason</label>
<sp-checkbox id="consent"></sp-checkbox>

class SpecialCheckbox extends HTMLElement {
	checked = "mixed";
	constructor() {
		super();
		this.shadowRoot_ = this.attachShadow({
			mode: "open",
			referenceTarget: "setting"
		});

		// lines of code to Make It Go
	}
}

Either way, the effective target is the <input> with the ID of setting.

This can be used in any situation where one element targets another, not just with for.  The form and list attributes on inputs would benefit from this.  So, too, would the relatively new popoverTarget and commandFor button attributes.  And all of the ARIA targeting attributes, like aria-controls and aria-errormessage and aria-owns as well.

If reference targets are something you think would be useful in your own work, please give it a try in Chrome or Safari or both, to see if your use cases are being met.  If not, you can leave feedback on issue 1120 to share any problems you run into.  If we’re going to have a Shadow DOM, the least we can do is make it as accessible and useful as possible.


Blinded By the Light DOM

Published 2 years, 5 months past

For a while now, Web Components (which I’m not going to capitalize again, you’re welcome) have been one of those things that pop up in the general web conversation, seem intriguing, and then fade into the background again.

I freely admit a lot of this experience is due to me, who is not all that thrilled with the Shadow DOM in general and all the shenanigans required to cross from the Light Side to the Dark Side in particular.  I like the Light DOM.  It’s designed to work together pretty well.  This whole high-fantasy-flavored Shadowlands of the DOM thing just doesn’t sit right with me.

If they do for you, that’s great!  Rock on with your bad self.  I say all this mostly to set the stage for why I only recently had a breakthrough using web components, and now I quite like them.  But not the shadow kind.  I’m talking about Fully Light-DOM Components here.

It started with a one-two punch: first, I read Jim Nielsen’s “Using Web Components on My Icon Galleries Websites”, which I didn’t really get the first few times I read it, but I could tell there was something new (to me) there.  Very shortly thereafter, I saw Dave Rupert’s <fit-vids> CodePen, and that’s when the Light DOM Bulb went off in my head.  You just take some normal HTML markup, wrap it with a custom element, and then write some JS to add capabilities which you can then style with regular CSS!  Everything’s of the Light Side of the Web.  No need to pierce the Vale of Shadows or whatever.

Kindly permit me to illustrate at great length and in some depth, using a thing I created while developing a tool for internal use at Igalia as the basis.  Suppose you have some range inputs, just some happy little slider controls on your page, ready to change some values, like this:

<label for="title-size">Title font size</label>
<input id="title-size" type="range" min="0.5" max="4" step="0.1" value="2" />

The idea here is that you use the slider to change the font size of an element of some kind.  Using HTML’s built-in attributes for range inputs, I set a minimum, maximum, and initial value, the step size permitted for value changes, and an ID so a <label> can be associated with it.  Dirt-standard HTML stuff, in other words.  Given that this markup exists in the page, then, it needs to be hooked up to the thing it’s supposed to change.

In Ye Olden Days, you’d need to write a function to go through the entire DOM looking for these controls (maybe you’d add a specific class to the ones you need to find), figure out how to associate them with the element they’re supposed to affect (a title, in this case), add listeners, and so on.  It might go something like:

let sliders = document.querySelectorAll('input[id]');
for (let i = 0; i < sliders.length; i++) {
	let slider = sliders[i];
	// …add event listeners
	// …target element to control
	// …set behaviors, maybe call external functions
	// …etc., etc., etc.
}

Then you’d have to stuff all that into a window.onload handler or otherwise defer the script until the document is finished loading.

To be clear, you can absolutely still do it that way.  Sometimes, it’s even the most sensible choice!  But fully-light-DOM components can make a lot of this easier, more reusable, and robust.  We can add some custom elements to the page and use those as a foundation for scripting advanced behavior.

Now, if you’re like me (and I know I am), you might think of converting everything into a completely bespoke element and then forcing all the things you want to do with it into its attributes, like this:

<super-slider type="range" min="0.5" max="4" step="0.1" value="2"
	          unit="em" target=".preview h1">
Title font size
</super-slider>

Don’t do this.  If you do, then you end up having to reconstruct the HTML you want to exist out of the data you stuck on the custom element.  As in, you have to read off the type, min, max, step, and value attributes of the <super-slider> element, then create an <input> element and add the attributes and their values you just read off <super-slider>, create a <label> and insert the <super-slider>’s text content into the label’s text content, and why?  Why did I do this to myse —  uh, I mean, why do this to yourself?

Do this instead:

<super-slider unit="em" target=".preview h1">
	<label for="title-size">Title font size</label>
	<input id="title-size" type="range" min="0.5" max="4" step="0.1" value="2" />
</super-slider>

This is the pattern I got from <fit-vids>, and the moment that really broke down the barrier I’d had to understanding what makes web components so valuable.  By taking this approach, you get everything HTML gives you with the <label> and <input> elements for free, and you can add things on top of it.  It’s pure progressive enhancement.

To figure out how all this goes together, I found MDN’s page “Using custom elements” really quite valuable.  That’s where I internalized the reality that instead of having to scrape the DOM for custom elements and then run through a loop, I could extend HTML itself:

class superSlider extends HTMLElement {
	connectedCallback() {
		//
		// the magic happens here!
		//
	}
}

customElements.define("super-slider",superSlider);

What that last line does is tell the browser, “any <super-slider> element is of the superSlider JavaScript class”.  Which means, any time the browser sees <super-slider>, it does the stuff that’s defined by class superSlider in the script.  Which is the thing in the previous code block!  So let’s talk about how it works, with concrete examples.

It’s the class structure that holds the real power.  Inside there, connectedCallback() is invoked whenever a <super-slider> is connected; that is, whenever one is encountered in the page by the browser as it parses the markup, or when one is added to the page later on.  It’s an auto-startup callback.  (What’s a callback? I’ve never truly understood that, but it turns out I don’t have to!)  So in there, I write something like:

connectedCallback() {
	let targetEl = document.querySelector(this.getAttribute('target'));
	let unit = this.getAttribute('unit');
	let slider = this.querySelector('input[type="range"]');
}

So far, all I’ve done here is:

  • Used the value of the target attribute on <super-slider> to find the element that the range slider should affect, via a CSS-esque query.
  • Used the unit attribute’s value to know which CSS unit I’ll be using later in the code.
  • Grabbed the range input itself by running a querySelector() within the <super-slider> element.

With all those things defined, I can add an event listener to the range input:

slider.addEventListener("input",(e) => {
	let value = slider.value + unit;
	targetEl.style.setProperty('font-size',value);
});

…and really, that’s it.  Put all together:

class superSlider extends HTMLElement {
	connectedCallback() {
		let targetEl = document.querySelector(this.getAttribute('target'));
		let unit = this.getAttribute('unit');
		let slider = this.querySelector('input[type="range"]');
		slider.addEventListener("input",(e) => {
			targetEl.style.setProperty('font-size',slider.value + unit);
		});
	}
}

customElements.define("super-slider",superSlider);

You can see it in action with this CodePen.

See the Pen WebCOLD 01 by Eric A. Meyer (@meyerweb) on CodePen: https://codepen.io/meyerweb/pen/oNmXJRX

As I said earlier, you can get to essentially the same result by running document.querySelectorAll('super-slider') and then looping through the collection to find all the bits and bobs and add the event listeners and so on.  In a sense, that’s what I’ve done above, except I didn’t have to do the scraping and looping and waiting until the document has loaded  —  using web components abstracts all of that away.  I’m also registering all the components with the browser via customElements.define(), so there’s that too.  Overall, somehow, it just feels cleaner.

One thing that sets customElements.define() apart from the collect-and-loop-after-page-load approach is that custom elements fire all that connection callback code on themselves whenever they’re added to the document, all nice and encapsulated.  Imagine for a moment an application where custom elements are added well after page load, perhaps as the result of user input.  No problem!  There isn’t the need to repeat the collect-and-loop code, which would likely have to have special handling to figure out which are the new elements and which already existed.  It’s incredibly handy and much easier to work with.

But that’s not all!  Suppose we want to add a “reset” button  —  a control that lets you set the slider back to its starting value.  Adding some code to the connectedCallback() can make that happen.  There’s probably a bunch of different ways to do this, so what follows likely isn’t the most clever or re-usable way.  It is, instead, the way that made sense to me at the time.

let reset = slider.getAttribute('value');
let resetter = document.createElement('button');
resetter.textContent = '↺';
resetter.setAttribute('title', reset + unit);
resetter.addEventListener("click",(e) => {
	slider.value = reset;
	slider.dispatchEvent(
	    new MouseEvent('input', {view: window, bubbles: false})
	);
});
slider.after(resetter);

With that code added into the connection callback, a button gets added right after the slider, and it shows a little circle-arrow to convey the concept of resetting.  You could just as easily make its text “Reset”.  When said button is clicked or keyboard-activated ("click" handles both, it seems), the slider is reset to the stored initial value, and then an input event is fired at the slider so the target element’s style will also be updated.  This is probably an ugly, ugly way to do this!  I did it anyway.

See the Pen WebCOLD 02 by Eric A. Meyer (@meyerweb) on CodePen: https://codepen.io/meyerweb/pen/jOdPdyQ

Okay, so now that I can reset the value, maybe I’d also like to see what the value is, at any given moment in time?  Say, by inserting a classed <span> right after the label and making its text content show the current combination of value and unit?

let label = this.querySelector('label');
let readout = document.createElement('span');
readout.classList.add('readout');
readout.textContent = slider.value + unit;
label.after(readout);

Plus, I’ll need to add the same text content update thing to the slider’s handling of input events:

slider.addEventListener("input", (e) => {
	targetEl.style.setProperty("font-size", slider.value + unit);
	readout.textContent = slider.value + unit;
});

I imagine I could have made this readout-updating thing a little more generic (more DRY, if you like) by creating some kind of getter/setter things on the JS class, which is totally possible to do, but that felt like a little much for this particular situation.  Or I could have broken the readout update into its own function, either within the class or external to it, and passed in the readout and slider and reset value and unit to cause the update.  That seems awfully clumsy, though.  Maybe figuring out how to make the span a thing that observes slider changes and updates automatically?  I dunno, just writing the same thing in two places seemed a lot easier, so that’s how I did it.
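For the curious, the getter/setter version I decided against might have been shaped something like this.  A sketch only, with names I just made up; it is not the code in the component:

```javascript
// Keep the value-plus-unit formatting in one place, behind a setter.
class SliderReadout {
	constructor(unit, onChange) {
		this.unit = unit;
		this.onChange = onChange; // e.g. update targetEl and the readout span
		this._value = null;
	}
	get value() {
		return this._value;
	}
	set value(v) {
		this._value = v;
		this.onChange(v + this.unit); // one formatted string, used everywhere
	}
}

// Hypothetical use inside connectedCallback():
// const state = new SliderReadout("em", (text) => {
// 	targetEl.style.setProperty("font-size", text);
// 	readout.textContent = text;
// });
// slider.addEventListener("input", () => { state.value = slider.value; });
```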

So, at this point, here’s the entirety of the script, with a CodePen example of the same thing immediately after.

class superSlider extends HTMLElement {
	connectedCallback() {
		let targetEl = document.querySelector(this.getAttribute("target"));
		let unit = this.getAttribute("unit");

		let slider = this.querySelector('input[type="range"]');
		slider.addEventListener("input", (e) => {
			targetEl.style.setProperty("font-size", slider.value + unit);
			readout.textContent = slider.value + unit;
		});

		let reset = slider.getAttribute("value");
		let resetter = document.createElement("button");
		resetter.textContent = "↺";
		resetter.setAttribute("title", reset + unit);
		resetter.addEventListener("click", (e) => {
			slider.value = reset;
			slider.dispatchEvent(
				new MouseEvent("input", { view: window, bubbles: false })
			);
		});
		slider.after(resetter);

		let label = this.querySelector("label");
		let readout = document.createElement("span");
		readout.classList.add("readout");
		readout.textContent = slider.value + unit;
		label.after(readout);
	}
}

customElements.define("super-slider", superSlider);

See the Pen WebCOLD 03 by Eric A. Meyer (@meyerweb) on CodePen: https://codepen.io/meyerweb/pen/NWoGbWX

Anything you can imagine JS would let you do to the HTML and CSS, you can do in here.  Add a class to the slider when it has a value other than its default value so you can style the reset button to fade in or be given a red outline, for example.

Or maybe do what I did, and add some structural-fix-up code.  For example, suppose I were to write:

<super-slider unit="em" target=".preview h2">
	<label>Subtitle font size</label>
	<input type="range" min="0.5" max="2.5" step="0.1" value="1.5" />
</super-slider>

In that bit of markup, I left off the id on the <input> and the for on the <label>, which means they have no structural association with each other.  (You should never do this, but sometimes it happens.)  To handle this sort of failing, I threw some code into the connection callback to detect and fix those kinds of authoring errors, because why not?  It goes a little something like this:

if (!label.getAttribute('for') && slider.getAttribute('id')) {
	label.setAttribute('for',slider.getAttribute('id'));
}
if (label.getAttribute('for') && !slider.getAttribute('id')) {
	slider.setAttribute('id',label.getAttribute('for'));
}
if (!label.getAttribute('for') && !slider.getAttribute('id')) {
	// replace every run of whitespace, not just the first space,
	// so a multi-word label still yields a valid id
	let connector = label.textContent.replace(/\s+/g,'_');
	label.setAttribute('for',connector);
	slider.setAttribute('id',connector);
}

Once more, this is probably the ugliest way to do this in JS, but also again, it works.  Now I’m making sure labels and inputs are associated even when the author forgot to explicitly define the association, which I count as a win.  If I were feeling particularly spicy, I’d have the code pop an alert chastising me for screwing up, so that I’d fix it instead of being a lazy author.

It also occurs to me, as I review this for publication, that I didn’t try to do anything in situations where both the for and id attributes are present, but their values don’t match.  That feels like something I should auto-fix, since I can’t imagine a scenario where they would need to intentionally be different.  It’s possible my imagination is lacking, of course.
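
A sketch of what that auto-fix might look like (a hypothetical helper of mine, not code from the tool); here I arbitrarily decide the slider’s id wins and the label’s for gets rewritten to match:

```javascript
// Hypothetical mismatch fix, not in the original script: when both
// attributes exist but disagree, rewrite the label's for attribute to
// match the slider's id.
function reconcileForId(label, slider) {
	let forVal = label.getAttribute("for");
	let idVal = slider.getAttribute("id");
	if (forVal && idVal && forVal !== idVal) {
		label.setAttribute("for", idVal);
	}
}
```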

So now, here’s all just-over-40 lines of the script that makes all this work, followed by a CodePen demonstrating it.

class superSlider extends HTMLElement {
	connectedCallback() {
		let targetEl = document.querySelector(this.getAttribute("target"));
		let unit = this.getAttribute("unit");

		let slider = this.querySelector('input[type="range"]');
		slider.addEventListener("input", (e) => {
			targetEl.style.setProperty("font-size", slider.value + unit);
			readout.textContent = slider.value + unit;
		});

		let reset = slider.getAttribute("value");
		let resetter = document.createElement("button");
		resetter.textContent = "↺";
		resetter.setAttribute("title", reset + unit);
		resetter.addEventListener("click", (e) => {
			slider.value = reset;
			slider.dispatchEvent(
				new MouseEvent("input", { view: window, bubbles: false })
			);
		});
		slider.after(resetter);

		let label = this.querySelector("label");
		let readout = document.createElement("span");
		readout.classList.add("readout");
		readout.textContent = slider.value + unit;
		label.after(readout);

		if (!label.getAttribute("for") && slider.getAttribute("id")) {
			label.setAttribute("for", slider.getAttribute("id"));
		}
		if (label.getAttribute("for") && !slider.getAttribute("id")) {
			slider.setAttribute("id", label.getAttribute("for"));
		}
		if (!label.getAttribute("for") && !slider.getAttribute("id")) {
			// replace every run of whitespace so multi-word labels yield a valid id
			let connector = label.textContent.replace(/\s+/g, "_");
			label.setAttribute("for", connector);
			slider.setAttribute("id", connector);
		}
	}
}

customElements.define("super-slider", superSlider);
See the Pen “WebCOLD 04” by Eric A. Meyer (@meyerweb) on CodePen: https://codepen.io/meyerweb/pen/PoVPbzK

There are doubtless cleaner/more elegant/more clever ways to do pretty much everything I did above, considering I’m not much better than an experienced amateur when it comes to JavaScript.  Focus less on the specifics of what I wrote, and more on the overall concepts at play.

I will say that I ended up using this custom element to affect more than just font sizes.  In some places I wanted to alter margins; in others, the hue angle of colors.  There are a couple of ways to do this.  The first is what I did, which is to use a bunch of CSS variables and change their values.  So the markup and relevant bits of the JS looked more like this:

<super-slider unit="em" variable="titleSize">
	<label for="title-size">Title font size</label>
	<input id="title-size" type="range" min="0.5" max="4" step="0.1" value="2" />
</super-slider>

let cssvar = this.getAttribute("variable");
let section = this.closest('section');

slider.addEventListener("input", (e) => {
	section.style.setProperty(`--${cssvar}`, slider.value + unit);
	readout.textContent = slider.value + unit;
});

The other way (that I can think of) would be to declare the target element’s selector and the property you want to alter, like this:

<super-slider unit="em" target=".preview h1" property="font-size">
	<label for="title-size">Title font size</label>
	<input id="title-size" type="range" min="0.5" max="4" step="0.1" value="2" />
</super-slider>

I’ll leave the associated JS as an exercise for the reader.  I can think of reasons to do either of those approaches.
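
For what it’s worth, here’s roughly how I imagine the heart of that version going (a sketch of mine, not tested against the real tool; the helper function is my own factoring):

```javascript
// Hypothetical sketch of the target/property approach. Inside
// connectedCallback(), the input listener would call this with the element
// matched by the `target` attribute and the property named by the
// `property` attribute.
function applySetting(targetEl, property, value, unit) {
	let setting = value + unit;
	targetEl.style.setProperty(property, setting);
	return setting; // also what the readout span should display
}
```

The listener then becomes roughly `readout.textContent = applySetting(targetEl, property, slider.value, unit);`.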

But wait!  There’s more! Not more in-depth JS coding (even though we could absolutely keep going, and in the tool I built, I absolutely did), but there are some things to talk about before wrapping up.

First, if you need to invoke the class’s constructor for whatever reason — I’m sure there are reasons, whatever they may be — you have to do it with a super() up top.  Why?  I don’t know.  Why would you need to?  I don’t know.  If I read the intro to the super page correctly, I think it has something to do with class prototypes, but the rest went so far over my head the FAA issued a NOTAM.  Apparently I didn’t do anything that depends on the constructor in this article, so I didn’t bother including it.
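
For the record, the shape is simply this (a bare illustration, not code from the tool; the fallback base class is only there so the sketch stands alone outside a browser, where you’d extend HTMLElement directly):

```javascript
// Minimal constructor illustration. In a browser you'd write
// `extends HTMLElement`; the fallback lets this sketch run anywhere.
const Base = globalThis.HTMLElement ?? class {};

class demoSlider extends Base {
	constructor() {
		super(); // must come first, before `this` is touched at all
		this.ready = false; // now `this` is safe to use
	}
}
```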

Second, basically all the JS I wrote in this article went into the connectedCallback() structure.  This is only one of four built-in callbacks!  The others are:

  • disconnectedCallback(), which is fired whenever a custom element of this type is removed from the page.  This seems useful if you have things that can be added or subtracted dynamically, and you want to update other parts of the DOM when they’re subtracted.
  • adoptedCallback(), which is (to quote MDN) “called each time the element is moved to a new document.” I have no idea what that means.  I understand all the words; it’s just that particular combination of them that confuses me.
  • attributeChangedCallback(), which is fired when attributes of the custom element change.  I thought about trying to use this for my super-sliders, but in the end, nothing I was doing made sense (to me) to bubble up to the custom element just to monitor and act upon.  A use case that does suggest itself: if I allowed users to change the sizing unit, say from em to vh, I’d want to change other things, like the min, max, step, and default value attributes of the sliders.  So, since I’d have to change the value of the unit attribute anyway, it might make sense to use attributeChangedCallback() to watch for that sort of thing and then take action.  Maybe!
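
Sketching that “maybe” out (the em/vh ranges below are numbers I invented for illustration; none of this is in the actual tool, and the base-class fallback just lets the sketch stand alone outside a browser):

```javascript
// Speculative sketch: react to a changed `unit` attribute by swapping the
// slider's range attributes. The em/vh numbers are made up for illustration.
const RANGES = {
	em: { min: "0.5", max: "4", step: "0.1", value: "2" },
	vh: { min: "1", max: "10", step: "0.5", value: "4" },
};

class unitSlider extends (globalThis.HTMLElement ?? class {}) {
	static get observedAttributes() { return ["unit"]; }
	attributeChangedCallback(name, oldValue, newValue) {
		if (name !== "unit" || !(newValue in RANGES)) return;
		let slider = this.querySelector('input[type="range"]');
		for (let [attr, val] of Object.entries(RANGES[newValue])) {
			slider.setAttribute(attr, val);
		}
	}
}
```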

Third, I didn’t really talk about styling any of this.  Well, because all of this stuff is in the Light DOM, I don’t have to worry about Shadow Walls or whatever; I can style everything the normal way.  Here’s a part of the CSS I use in the CodePens, just to make things look a little nicer:

super-slider {
	display: flex;
	align-items: center;
	margin-block: 1em;
}
super-slider input[type="range"] {
	margin-inline: 0.25em 1px;
}
super-slider .readout {
	width: 3em;
	margin-inline: 0.25em;
	padding-inline: 0.5em;
	border: 1px solid #0003;
	background: #EEE;
	font: 1em monospace;
	text-align: center;
}

Hopefully that all makes sense, but if not, let me know in the comments and I’ll clarify.

A thing I didn’t do was use the :defined pseudo-class to style custom elements that are defined, or rather, to style those that are not defined.  Remember the last line of the script, where customElements.define() is called to define the custom elements?  Because they are defined that way, I could add some CSS like this:

super-slider:not(:defined) {
	display: none;
}

In other words, if a <super-slider> for some reason isn’t defined, make it and everything inside it just… go away.  Once it becomes defined, the selector will no longer match, and the display: none will be peeled away.  You could use visibility or opacity instead of display; really, it’s up to you.  Heck, you could tile red warning icons in the whole background of the custom element if it hasn’t been defined yet, just to drive the point home.

The beauty of all this is, you don’t have to mess with Shadow DOM selectors like ::part() or ::slotted().  You can just style elements the way you always style them, whether they’re built into HTML or special hyphenated elements you made up for your situation and then, like the Boiling Isles’ most powerful witch, called into being.

That said, there’s a “fourth” here, which is that Shadow DOM does offer one very powerful capability that fully Light DOM custom elements lack: the ability to create a structural template with <slot> elements, and then drop your Light-DOM elements into those slots.  This slotting ability does make Shadowy web components a lot more robust and easier to share around, because as long as the slot names stay the same, the template can be changed without breaking anything.  This is a level of robustness that the approach I explored above lacks, and it’s built in.  It’s the one thing I actually do like about Shadow DOM.
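
For anyone who hasn’t seen it, the slotting idea looks roughly like this (an illustrative fragment of my own, not markup from my tool, with a made-up element name):

```html
<!-- Illustration only: a shadow template with named slots, and Light-DOM
     content that gets projected into them. -->
<template>
	<label><slot name="label"></slot></label>
	<slot name="control"></slot>
</template>

<some-slider>
	<span slot="label">Title font size</span>
	<input slot="control" type="range" min="0.5" max="4" step="0.1" value="2" />
</some-slider>
```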

It’s true that in a case like I’ve written about here, that’s not a huge issue: I was quickly building a web component for a single tool that I could re-use within the context of that tool.  It works fine in that context.  It isn’t portable, in the sense of being a thing I could turn into an npm package for others to use, or probably even share around my organization for other teams to use.  But then, I only put 40-50 lines worth of coding into it, and was able to rapidly iterate to create something that met my needs perfectly.  I’m a lot more inclined to take this approach in the future, when the need arises; it’s a very powerful addition to my web development toolbox.

I’d love to see the templating/slotting capabilities of Shadow DOM brought into the fully Light-DOM component world.  Maybe that’s what Declarative Shadow DOM is?  Or maybe not!  My eyes still go cross-glazed whenever I try to read articles about Shadow DOM, almost like a trickster demon lurking in the shadows casts a Spell of Confusion at me.

So there you have it: a few thousand words on my journey through coming to understand and work with these fully-Light-DOM web components, otherwise known as custom elements.  Now all they need is a catchy name, so we can draw more people to the Light Side of the Web.  If you have any ideas, please drop ’em in the comments!


Prodding Firefox to Update :has() Selection

Published 2 years, 6 months past

I’ve posted a followup to this post which you should read before you read this post, because you might decide there’s no need to read this one.  If not, please note that what’s documented below was a hack to overcome a bug that was quickly fixed, in a part of CSS that wasn’t enabled in stable Firefox at the time I wrote the post.  Thus, what follows isn’t really useful, and leaves more than one wrong impression.  I apologize for this.  For a more detailed breakdown of my errors, please see the followup post.


I’ve been doing some development recently on a tool that lets me quickly produce social-media banners for my work at Igalia.  It started out using a vanilla JS script to snarfle up collections of HTML elements like all the range inputs, stick listeners and stuff on them, and then alter CSS variables when the inputs change.  Then I had a conceptual breakthrough and refactored the entire thing to use fully light-DOM web components (FLDWCs), which let me rapidly and radically increase the tool’s capabilities, and I kind of love the FLDWCs even as I struggle to figure out the best practices.

With luck, I’ll write about all that soon, but for today, I wanted to share a little hack I developed to make Firefox a tiny bit more capable.

One of the things I do in the tool’s CSS is check to see if an element (represented here by a <div> for simplicity’s sake) has an image whose src attribute is a base64 string instead of a URI, and when it is, add some generated content. (It makes sense in context.  Or at least it makes sense to me.) The CSS rule looks very much like this:

div:has(img[src*=";base64,"])::before {
	[…generated content styles go here…]
}

This works fine in WebKit and Chromium.  Firefox, at least as of the day I’m writing this, often fails to notice the change, which means the selector doesn’t match, even in the Nightly builds, and so the generated content isn’t generated.  It has problems correlating DOM updates and :has(), is what it comes down to.

There is a way to prod it into awareness, though!  What I found during my development was that if I clicked or tabbed into a contenteditable element, the :has() would suddenly match and the generated content would appear.  The editable element didn’t even have to be a child of the div bearing the :has(), which seemed weird to me for no distinct reason, but it made me think that maybe any content editing would work.

I tried adding contenteditable to a nearby element and then immediately removing it via JS, and that didn’t work.  But then I added a tiny delay to removing the contenteditable, and that worked!  I feel like I might have seen a similar tactic proposed by someone on social media or a blog or something, but if so, I can’t find it now, so my apologies if I ganked your idea without attribution.

My one concern was that if I wasn’t careful, I might accidentally pick an element that was supposed to be editable, and then remove the editing state it’s supposed to have.  Instead of doing detection of the attribute during selection, I asked myself, “Self, what’s an element that is assured to be present but almost certainly not ever set to be editable?”

Well, there will always be a root element.  Usually that will be <html> but you never know, maybe it will be something else, what with web components and all that.  Or you could be styling your RSS feed, which is in fact a thing one can do.  At any rate, where I landed was to add the following right after the part of my script where I set an image’s src to use a base64 URI:

let ffHack = document.querySelector(':root');
ffHack.setAttribute('contenteditable','true');
setTimeout(function(){
	ffHack.removeAttribute('contenteditable');
},7);

Literally all this does is grab the page’s root element, set it to be contenteditable, and then seven milliseconds later, remove the contenteditable.  That’s about a millisecond less than the lifetime of a rendering frame at 120fps, so ideally, the browser won’t draw a frame where the root element is actually editable… or, if there is such a frame, it will be replaced by the next frame so quickly that the odds of accidentally editing the root are very, very, very small.

At the moment, I’m not doing any browser sniffing to figure out if the hack needs to be applied, so every browser gets to do this shuffle on Firefox’s behalf.  Lazy, I suppose, but I’m going to wave my hands and intone “browsers are very fast now” while studiously ignoring all the inner voices complaining about inefficiency and inelegance.  I feel like using this hack means it’s too late for all those concerns anyway.
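
If you did want to limit the hack, a crude gate might look like this (a sketch of mine, not something the tool does, and user-agent sniffing is famously brittle, so take it as illustration rather than recommendation):

```javascript
// Hypothetical gate for the contenteditable hack: only apply it when the
// user agent string claims to be Firefox. UA sniffing is brittle; this is
// illustration, not recommendation.
function needsHasNudge(ua) {
	return /firefox/i.test(ua);
}
```

…and then the hack gets wrapped in `if (needsHasNudge(navigator.userAgent)) { … }`.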

I don’t know how many people out there will need to prod Firefox like this, but for however many there are, I hope this helps.  And if you have an even better approach, please let us know in the comments!


Element Dragging in Web Inspectors

Published 9 years, 3 months past
Yesterday, I was looking at an existing page, wondering if it would be improved by rearranging some of the elements.  I was about to fire up the git engine (spawn a branch, check it out, do edits, preview them, commit changes, etc., etc.) when I got a weird thought: could I just drag elements around in the Web Inspector in my browser of choice, Firefox Nightly, so as to quickly try out various changes without having to open an editor?  Turns out the answer is yes, as demonstrated in this video!
Youtube: “Dragging elements in Firefox Nightly’s Web Inspector”
Since I recorded the video, I’ve learned that this same capability exists in public-release Firefox, and has been in Chrome for a while.  It’s probably been in Firefox for a while, too.  What I was surprised to find was how many other people were similarly surprised that this is possible, which is why I made the video.  It’s probably easier to understand the video if it’s full screen, or at least expanded, but I think the basic idea gets across even in small-screen format.  Share and enjoy!

Undoing oncut/oncopy/onpaste Falsities

Published 10 years, 9 months past

Inspired by Ryan Joy’s excellent and deservedly popular tweet, I wrote a small, not-terribly-smart Javascript function to undo cut/copy/paste blocking in HTML.

function fixCCP() {
   var elems = document.getElementsByTagName('*');
   var attrs = ['onpaste','oncopy','oncut'];
   for (var i = 0; i < elems.length; i++) {
      for (var j = 0; j < attrs.length; j++) {
         if (elems[i].getAttribute(attrs[j])) {
            elems[i].setAttribute(attrs[j],
               elems[i].getAttribute(attrs[j])
                  .replace("return false","return true"));
         }
      }
   }
}
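
One obvious modernization (a sketch of mine, not the original bookmarklet): instead of rewriting the inline handlers, swallow the events in the capture phase before they ever reach them.

```javascript
// Alternative sketch: capture-phase listeners on the document fire before
// any inline oncut/oncopy/onpaste handlers, and stopping propagation there
// keeps the blocking handlers from running while leaving the browser's
// default cut/copy/paste behavior intact.
function fixCCPCapture(root) {
	["cut", "copy", "paste"].forEach(function (type) {
		root.addEventListener(type, function (e) {
			e.stopImmediatePropagation();
		}, true);
	});
}
```

In a page, you’d call it as `fixCCPCapture(document)`.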

Here it is as a bookmarklet, if you still roll that way (as I do): fixCCP.  Thanks to the Bookmarklet Maker at bookmarklets.org for helping me out with that!

If there are obvious improvements to be made to its functionality, let me know and I’ll throw it up on Github.


Invented Elements

Published 14 years, 1 month past

This morning I caught a pointer to TypeButter, which is a jQuery library that does “optical kerning” in an attempt to improve the appearance of type.  I’m not going to get into its design utility because I’m not qualified; I only notice kerning either when it’s set insanely wide or when it crosses over into keming.  I suppose I’ve been looking at web type for so many years, it looks normal to me now.  (Well, almost normal, but I’m not going to get into my personal typographic idiosyncrasies now.)

My reason to bring this up is that I’m very interested by how TypeButter accomplishes its kerning: it inserts kern elements with inline style attributes that bear letter-spacing values.  Not span elements, kern elements.  No, you didn’t miss an HTML5 news bite; there is no kern element, nor am I aware of a plan for one.  TypeButter basically invents a specific-purpose element.

I believe I understand the reasoning.  Had they used span, they would’ve likely tripped over existing author styles that apply to span.  Browsers these days don’t really have a problem accepting and styling arbitrary elements, and any that do would simply render type their usual way.  Because the markup is script-generated, markup validation services don’t throw conniption fits.  There might well be browser performance problems, particularly if you optically kern all the things, but used in moderation (say, on headings) I wouldn’t expect too much of a hit.

The one potential drawback I can see, as articulated by Jake Archibald, is the possibility of a future kern element that might have different effects, or at least be styled by future author CSS and thus get picked up by TypeButter’s kerns.  The currently accepted way to avoid that sort of problem is to prefix with x-, as in x-kern.  Personally, I find it deeply unlikely that there will ever be an official kern element; it’s too presentationally focused.  But, of course, one never knows.

If TypeButter shifted to generating x-kern before reaching v1.0 final, I doubt it would degrade the TypeButter experience at all, and it would indeed be more future-proof.  It’s likely worth doing, if only to set a good example for libraries to follow, unless of course there’s downside I haven’t thought of yet.  It’s definitely worth discussing, because as more browser enhancements are written, this sort of issue will come up more and more.  Settling on some community best practices could save us some trouble down the road.

Update 23 Mar 12: it turns out custom elements are not as simple as we might prefer; see the comment below for details.  That throws a fairly large wrench into the gears, and requires further contemplation.


Turning Web Video On Its Head

Published 16 years, 2 weeks past

Here’s some fun.  (For a sufficiently nerdy definition of “fun”.)

  1. Launch Safari 4 or Chrome 4.

  2. Drag Videotate to the bookmarks bar.

  3. Go opt into the YouTube HTML5 beta.

  4. Find your favorite YouTube video.  Or maybe your least favorite.  Here’s one of my favorites: Walk Don’t Run.  Here’s another that’s not necessarily a favorite, but it seems like a fairly appropriate choice.

    Note: not all videos are available via HTML5, even when you’re opted in.  If you get a Flash video, the bookmarklet won’t work.

  5. Once the video has started playing, activate the “Videotate” bookmarklet.

  6. Enjoy.

Thanks to Simon Willison for tweeting the JS I modified, and Jeremy Keith for helping me realize it would be easy to do during the HTML5 portion of A Day Apart.

