Test Your Babel Configuration
I've maintained a client-heavy web application for the last few years, and there's one dependency that has routinely broken builds during my weekly dependency upgrades: Babel.
That's not really a surprise. Babel is complicated; a glance through its source code makes that clear. There are so many permutations of different configurations that it's pretty much impossible to test everything.
What makes this worse is that Babel is a development tool: my tests rely on it just to run! So, what can I do? Well, the part that determines Babel's behaviour is my configuration, so I can at least test that.
1. Create a file like scenario.js with some code that I expect Babel to transform.
2. Run scenario.js through Babel and write the output to another file like scenario.babel.js.
3. Commit both to your version control system.
4. As part of the testing pipeline, run Step 2 again and compare the output to the current scenario.babel.js.
   - Is it different? Fail the test.
   - Is it the same? Pass the test.
5. If the test fails, manually check the difference and figure out why.
   - Did something in Babel change? What is it? Is it okay?
   - @babel/preset-env: Did the target environment change? Is it still compatible?
6. If the new output is correct, overwrite scenario.babel.js. Otherwise, roll back the version upgrade.
Why Snapshot Testing?
The idea of storing an output (a snapshot) and then testing future runs against it is called 'Snapshot Testing'. It definitely has flaws: it might flag up differences that aren't important, like whitespace or name changes. For Babel, though, not caring about the specifics is exactly the point.
I don't care how Babel decides to support import statements, as long as it does something to support them and keeps doing that same thing. If it changes what it does, I want to know about it, in case it causes problems further down the line.
Who tests the tester?
A lot of testing frameworks support Snapshot Testing. However, to avoid a never-ending cycle, you need to make sure your snapshot tests don't themselves require Babel compilation to run. There are plenty of existing tools for comparing files, so you can automate the process with a shell script.
Who tests the shell script? Who tests the file comparator? We simply have to rely on the distributor to do that. Draw the line somewhere. If the file comparator causes frequent problems, maybe you should be testing it.
It's worth pointing out that I'm not testing Babel itself here; Babel has its own test suites. I'm testing my configuration, to make sure it does what I want for my specific project.
What about Integration Tests?
The goal of testing my Babel configuration isn't to catch bugs. It's to help diagnose bugs. For Step 5, to know whether a change in Babel's output is a problem, you'll need automated tests. Unless you really want to test things manually...
When configuring Babel, it's easy to add a bunch of options without really knowing what they do. Or maybe I knew when I wrote them, but forgot why over time. This is an ideal scenario for testing.
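To illustrate the kind of configuration I mean, here's a minimal, hypothetical babel.config.js; the targets are made up for the example and aren't my actual settings:

```javascript
// babel.config.js (hypothetical example, not my real config)
module.exports = {
  presets: [
    ['@babel/preset-env', {
      // Which environments do we actually still support?
      // Every entry here is something a scenario file can exercise.
      targets: { chrome: '90', firefox: '88' },
    }],
  ],
};
```

Each option in a file like this is a question a scenario can answer: does this still change my output, and in the way I expect?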
I have multiple scenarios. Some test multiple parts of the Babel config to make sure they work together, some are no-op tests where I don't expect Babel to make any changes. When I use a new feature that needs Babel to compile it, I can add it to the test suite to make sure it's being compiled correctly.
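A scenario file can be as small as one language feature. This hypothetical one exercises optional chaining and nullish coalescing: with an older browser target, @babel/preset-env should rewrite both operators; with a modern target it becomes a no-op scenario (output identical to input):

```javascript
// scenarios/optional-chaining.js (hypothetical scenario file)
// Expect a transform for older targets; expect a no-op for modern ones.
const user = { address: { city: 'Oslo' } };
const city = user?.address?.city ?? 'unknown';
const country = user?.address?.country ?? 'unknown';
console.log(city, country);
```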
Tracking Web Progress
One of the great things about going through this process is seeing your code need fewer and fewer transformations as web browsers and NodeJS adopt more features.
I'm never stuck between thinking "we're not there yet" and "do I still need this?". If a scenario is no longer transformed, I can remove the configuration for it. Less configuration, less maintenance.
After implementing this process, I now know exactly what Babel is doing specific to my project and why it's doing it. So the next time I upgrade Babel and I run my tests, I can see how it's changed in places where it matters to me.
Previously, when a build failed, I had to hunt around my compiled code trying to find where Babel's output differed, and trawl through countless issues in its gigantic monorepo issue tracker to find something relevant to guide me. Now, it's never an issue.
Points of Friction
There's still a lot of manual work in this process, and it's still error-prone. When I'm writing source code, I might not realise I'm using a feature that needs to be compiled, or I might just forget to add it as a scenario in the test suite.
But with the process in place, when I do encounter problems, instead of having to compare massive blobs of compiled code against more complicated source code, I can easily test new features and make sure they're being compiled correctly.
Knowing where to look
When there is a change to an output, I need to decide if the change is desired or not. That means going to Babel's CHANGELOG to read up on what's changed. Going through related issues to understand why it's changed. And eventually deciding if that change is a problem for my own project.
To decide if it's a problem, I have tests in the pre-deployment phase that only cover a select few environments. Post-deployment, I mostly have to keep an eye on my error tracker for all the other environments.
Again, it's a lot of manual work. However, by doing this, I'm fully informed about Babel's changes, so I can more easily spot an error, match it up to a specific change, and roll back. And it's all in version control for easy recall.
In order to not be overwhelmed by the giant build machine, it's important to be informed about every cog in the process. Testing the Babel cog has taken me one step towards that. No doubt the lessons learned apply to every other cog.
Thanks for reading.