Image Optimisation for Websites

In web development, image optimisation is one of those steps that is easy to overlook. Often, images are copied over from cameras, phones and websites regardless of how they've been encoded. Encoding is important: all images are an approximation of what we see in the real world, optimised for digital displays which are themselves approximations.

An image from a 4K-optimised camera does not need to be displayed in full fidelity in a website banner. A 128x128 pixel icon does not need to be 1024x1024 pixels. It all depends on where it's used. Copying and pasting images without optimising them can lead to excessive bandwidth usage and longer rendering and load times.

In this post, I'll be going through some steps that can be taken to reduce all of those problems.

Resolution

It goes without saying, if you have a massive image that you're using in a tiny area of a website, you should lower its resolution. There's no point having an image with so much detail when it's only visible in a 2cm area of the screen.

Lowering Resolution

There's a range of ways to lower resolution. Use your favourite image editor. I tend to use ImageMagick as it provides a way to tell the computer what you want without fiddling around with the mouse pointer and various drop-down interfaces.

convert in.png -resize 128x128 out.png
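
By default, -resize 128x128 fits the image within a 128x128 box while preserving its aspect ratio. If you need exact dimensions, ImageMagick's ! geometry flag forces them (quoted so the shell doesn't interpret it):

convert in.png -resize '128x128!' out.png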

On-Demand Scaling

Sometimes, pre-rendering all of these different resolutions can be a pain, so you can generate them on-demand using your HTTP server. Add the image size to your URL somewhere and pick an image library to generate the image. To save processing costs, store the generated image so that you can serve it on future requests without needing to create a new one.

Be careful when implementing this as you don't want to accept any and all sizes; that would be a great DDoS vector. Limit your choices (e.g. 32x32, 64x64, 128x128, etc.) and sanitise your inputs!
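
As a minimal sketch of that flow, assuming hypothetical paths and a fixed whitelist, a shell version might look like this (a real setup would live inside your HTTP server, as described next):

#!/bin/sh
# Hypothetical on-demand resizer: only whitelisted sizes are accepted,
# and generated images are cached for future requests.
SIZE="$1"   # e.g. "128x128", extracted from the request URL
NAME="$2"   # image filename, assumed already sanitised
case "$SIZE" in
  32x32|64x64|128x128|256x256) ;;
  *) echo "unsupported size" >&2; exit 1 ;;
esac
CACHE="/var/tmp/images/$SIZE-$NAME"
if [ ! -f "$CACHE" ]; then
  convert "/srv/images/$NAME" -resize "$SIZE" "$CACHE"
fi
cat "$CACHE"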

Personally, I use OpenResty (Nginx) for my servers; it supports Lua, which in turn has ImageMagick bindings. I'd go into it, but there's a very similar guide here.

HiDPI Screens

I used cm units earlier when talking about resolution, and that was intentional. Nowadays, images aren't really rendered on a per-pixel basis to a screen. A 128x128 area of a website can be rendered to 256x256 pixels on the screen, so that pixel-perfect image you provided may end up a blurry mess. It all depends on the display's DPI setting. There are various ways to handle this behaviour on a website.
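
One common approach is to pre-render each image at multiple densities and let the browser choose using the img element's srcset attribute. As a sketch with ImageMagick, using hypothetical icon names, generating 1x and 2x variants might look like:

# Render 1x and 2x variants from the same source image.
convert in.png -resize 128x128 icon.png
convert in.png -resize 256x256 icon@2x.png
# The page can then offer both:
# <img src="icon.png" srcset="icon.png 1x, icon@2x.png 2x">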

Encoding and Compression

As mentioned before, all images are approximations, so you should optimise images for your specific use case to strike a good balance between file size and visual fidelity. Sure, some file formats, like JPEG, introduce artifacts, but they reduce data costs so much, and the artifacts are often not visible unless someone looks very closely.

Even if you don't want to reduce quality, there are still optimisations you can make to better compress images into a smaller file size without losing quality.

JPEG

JPEG is already a lossy image format, so reducing the quality of an existing JPEG can lead to visual errors. If you do store original images as JPEG to save space, make sure they're high quality and high resolution so you can easily create downscaled copies at lower qualities without introducing visible compression artifacts.

Quality Level

All JPEG images are encoded to a target "quality" ranging from 0 to 100. When you're resizing images, make sure to lower the quality as much as possible. Quality above 90 is considered high while quality above 70 is good enough. I personally stick with around 85 as it tends to be a good balance without showing visible artifacts.
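
With ImageMagick, for example, the quality can be set in the same pass as the resize:

convert in.jpg -resize 800x800 -quality 85 out.jpg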

jpegtran

jpegtran comes with most operating systems and lets you easily optimise JPEGs without changing the resolution or quality. It can strip metadata and also enable progressive rendering.

jpegtran -optimize -copy none -progressive -outfile out.jpg in.jpg

PNG

PNG is generally not great for detailed images as it tries to maintain per-pixel detail, but it's often needed for transparency. WebP is a better overall choice as it provides compression similar to JPEG while also supporting transparency; however, it's not supported in all browsers yet. For less detailed images, like icons, PNG is still a good choice.

OptiPNG

OptiPNG is a lossless optimiser that uses various approaches to reduce file size without losing quality. The more complicated the approach, the longer it takes. I personally stick with the default as I tested a few images and found the difference to be minor for the time spent.

optipng -strip all -out out.png in.png
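
If you do want to trade time for size, the optimisation level can be raised with the -o option (the default is -o2):

optipng -o7 -strip all -out out.png in.png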

pngquant

pngquant is a lossy optimiser that tries to compress images into small colour palettes without losing perceptible detail. A size reduction isn't guaranteed, especially for simple images that already use a small range of colours.

Figuring out the best options for your image is very much a hands-on process, so if you don't have a fixed use case for it, I recommend using JPEG instead, or WebP if you need transparency.

pngquant --strip --output out.png in.png
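
The hands-on tuning mostly happens through pngquant's palette and quality options, for example:

# Reduce the image to at most 64 colours.
pngquant 64 --strip --output out.png in.png
# Or let pngquant pick a palette within a quality range.
pngquant --quality 65-80 --strip --output out.png in.png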

WebP

WebP is not yet fully supported in all popular web browsers (like Safari) so I won't go into it right now.

SVG

The previous formats are raster formats: images that tell the computer where the pixels are. SVG is a vector format that instead tells the computer how to draw the image. As such, it's not good at storing detail, but it is good for rendering the same image at any size without becoming a blurry mess. SVG is a good choice for images that have obvious edges, like icons, logos and diagrams.

SVGO

SVGO is the best SVG optimiser I've found. It has a whole suite of options to strip as much metadata as possible. The only noticeable downside is that it needs a Node.js runtime, which often isn't available in most environments by default.
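
A typical invocation is simple, following the same in/out naming as the earlier examples:

svgo in.svg -o out.svg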

Tying it all together

If you ended up with an on-demand server to reduce your image resolution, you can easily add a step to also optimise your images before serving them.

If the optimisation step takes too long and slows down responses, another option is to optimise new images periodically after the first request. That means the first response won't be fully optimised, but future responses eventually will be.

I have a daily cronjob that goes through all the newly cached lower-resolution images and optimises them. For PNGs, it's something like:

find /var/tmp/images/ -type f -mtime -1 -regex '.+\.png' -print0 \
  | xargs -0 -I % optipng -quiet -strip all %
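
As an illustration, the crontab entry itself might look something like this (the script path is hypothetical):

# Run the optimisation pass nightly at 3am.
0 3 * * * /usr/local/bin/optimise-new-images.sh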

Third Party Services

For those who don't want to deal with all of these tools and steps in their creation pipelines, there are third-party services that will do it all for you. Integrating them can be as simple as enabling a plugin. I won't go into detail on how to use them or which are best since I don't use them myself.

Conclusion

That's about everything. Image optimisation can be an involved process, but once you've got it all set up, there's not much to maintain and you'll have a much faster website!

Thanks for reading.