Previously FrontierNav's tests ran against the local development build. This meant I only had to maintain two environments: development and production.
Production builds are optimised for delivery across a wide range of browsers and networks. The builds are obfuscated, minimised and split up, making them difficult to debug.
Development builds are optimised for debugging. They contain a lot more logging and are less optimised to avoid adding layers of transformation over the original source code.
These differences meant that in some cases, the optimisations applied to production would cause problems, and since I wasn't testing against production, the only way to know was to test manually or check the error logs. Sometimes I'd pick up the problem immediately; other times, it'd take days to discover.
The solution is obvious: test the production build. This week, I finally got around to doing it.
Production but not really
The first issue is that I don't really want to test in production. I want to test the build, but I don't want to affect users. Users should not see a "Test User" going around the app, creating and deleting things to make sure it all works. So I created a new "environment" to differentiate between production and what's typically called "staging", which points to different databases and services.
A New Environment
The concept of an environment is extremely bloated. Node.js has its own "NODE_ENV", which most libraries and frameworks handle as a boolean: it's either "production" or not. If you want a production-optimised build, "NODE_ENV=production" is needed, so I'll have to keep it around. Setting it to "staging" would be no different from setting it to "development", and I need it to be "production" for "staging".
This new FrontierNav-specific environment, "FN_ENV", can be set directly, or it'll fall back to whatever NODE_ENV is. In short, NODE_ENV defines the build, whereas FN_ENV defines the deployment.
```shell
# FN_ENV will fall back to "development"
NODE_ENV=development webpack

# Explicitly set FN_ENV to "staging"
NODE_ENV=production FN_ENV=staging webpack

# FN_ENV will fall back to "production"
NODE_ENV=production webpack
```
I could have added a step to the execution that only takes an FN_ENV and calculates the NODE_ENV, but since these flags are only set in a few places, adding and maintaining that additional layer is excessive right now.
The next step was to pull out all of the variables -- things like the database, API keys and image servers -- into multiple configuration files, one for each environment.
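To make the fallback concrete, here's a minimal sketch of how the environment could be resolved and mapped to a configuration. The variable names and URLs are placeholders, not FrontierNav's actual values or config shape.

```typescript
// Illustrative per-environment configuration; real values will differ.
type AppConfig = {
  databaseUrl: string;
  imageServer: string;
};

const configs: Record<string, AppConfig> = {
  development: {
    databaseUrl: "http://localhost:9000",
    imageServer: "http://localhost:9001",
  },
  staging: {
    databaseUrl: "https://db.staging.example.com",
    imageServer: "https://images.staging.example.com",
  },
  production: {
    databaseUrl: "https://db.example.com",
    imageServer: "https://images.example.com",
  },
};

type Env = Record<string, string | undefined>;

// FN_ENV falls back to NODE_ENV, which falls back to "development".
function resolveEnv(env: Env): string {
  return env.FN_ENV || env.NODE_ENV || "development";
}

function getConfig(env: Env): AppConfig {
  const name = resolveEnv(env);
  const config = configs[name];
  if (!config) {
    throw new Error(`Unknown environment: ${name}`);
  }
  return config;
}
```

The key property is that "staging" never needs its own NODE_ENV: the build flag and the deployment flag stay independent.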
I could have the configuration fetched at runtime. However, while this would allow a single "production" build to be used for both "staging" and "production" deployments, it also means I'd need a system to deploy and roll back the configuration separately, adding to the complexity.
A solution could be to have a two-step build: one for the common code, and another for each environment which passes a configuration to the first build. I looked into this a bit, but getting it to work with all the optimisations, code splitting, etc. seemed too complicated. It's easier to manage one build per environment.
The downside to having to build for each environment is that it multiplies the time waiting for builds, which is currently just below 2 minutes each. (I'll go into this in a bit.)
There's also no guarantee that the two builds are identical: there might be bugs in the minifier or some other step that cause behavioural differences. But these are rare cases that can exist throughout the software pipeline, so it's not worth acting on them until they actually cause problems.
Finally, I had to make a deployment as close to production as possible. This meant adding a new domain and another HTTP server configuration. Previously, I was testing the project locally, so none of the server configuration (Nginx and Cloudflare) was tested. Now it is!
Optimising Build Times
To reduce Webpack's build times for "staging" and "production", I first looked into caching. Since both of these environments are essentially the same, minus a few variables, caching the results seemed like an obvious start. Caching does, however, have some issues.
Cache invalidation is always a problem. The cache identifier needs to be accurate to avoid using stale caches between builds. Managing that identifier is really complicated. For example, for Babel, I'd need to know the current version of Babel, all of the plugin versions, the configuration and the browser targets (which are separate from the configuration as they're used by other tools). If anything new comes into play, I'll need to remember to add it to the list. Maintaining all of that would be a headache, and a huge risk if, say, "production" variables go into "staging" and the tests end up polluting the entire database.
Overall, I did see a ~10% reduction in build time. However, that was the difference between 110 seconds and 100 seconds, still over a minute and in real terms, it won't affect my workflow.
Concurrency was another option. By building each file in parallel, I saved around 20 seconds. But again, not really noticeable in real terms.
I mentioned "real terms" when it comes to build times. For me, this raises the question: does it make me more efficient?
If a release takes more than, say, 30 seconds, I'm going to be doing other things: taking a break, planning the next step, reading up on something, etc. Once I'm at that point, I probably won't get back to development for at least 5 minutes. So having a release take 60 seconds instead of 300 seconds makes no difference to me. It's still more than 30 seconds and less than 5 minutes. Adding more complexity without any real benefit doesn't make sense.
There's no doubt that as I introduce more test cases, a release will start taking well beyond 5 minutes. At that point, I'll need to change my workflow. Instead of finding something else to do while a release goes through, I should be able to start on the next thing and let the release alert me when it's done.
To do that, I'll need a continuous integration server, otherwise my open changes would conflict with the release. But maintaining a CI server and introducing a new workflow is a lot of work, so I'll cross that bridge when I get there.
FrontierNav now supports windows. Not Windows, that's already supported. That is, you can now open up links in pop-out windows to keep them persistent between page navigations. This avoids needing to constantly click around and go back and forth between pages.
I'm still working out the user experience on smaller screens, but it's not a priority given mobile apps tend to be geared towards a single window experience.
The worst part about this feature was getting the naming right. "Window" is such a generic term and it's used everywhere. I settled on "AppWindow", though that doesn't help when naming variables. Calling everything appWindow so that it doesn't clash with the window global seems a bit too verbose.
Writing my own Router
I finally decided to replace the deprecated redux-little-router dependency I had. I knew it was a relatively simple replacement so I kept putting it off. I even forked it to fix a few issues as it started to go through bit rot. When I did sit down to replace it, I realised a few things.
The idea behind a router for web apps does not need to be tied to the limitations of a URL. A router is really just state and the URL is a representation of that state. So I separated those two out.
Instead of having the URL be updated as part of the router's state, the URL update is just another observer to that state. This allowed me to easily decouple the web-specific details, that is using the History API to persist the route, from the state itself which the rest of the app can use.
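As a rough sketch of the idea (the names and shapes here are illustrative, not FrontierNav's actual API): the router holds plain observable state, and persisting to the URL via the History API is just one more subscriber.

```typescript
// The router is plain state with observers; nothing URL-specific.
type Route = { path: string };
type Listener = (route: Route) => void;

class Router {
  private route: Route;
  private listeners: Listener[] = [];

  constructor(initial: Route) {
    this.route = initial;
  }

  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  navigate(route: Route): void {
    this.route = route;
    for (const listener of this.listeners) {
      listener(this.route);
    }
  }

  current(): Route {
    return this.route;
  }
}

// The web-specific detail lives in a subscriber, so non-browser
// code never touches the URL. HistoryLike mirrors the History API.
type HistoryLike = {
  pushState(data: unknown, title: string, url: string): void;
};

function persistToHistory(router: Router, history: HistoryLike): void {
  router.subscribe((route) => history.pushState(null, "", route.path));
}
```

Decoupled this way, the same router state can back multiple windows, tests, or any other observer without involving the browser at all.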
Since I have a few years worth of existing code using the URL to decide what to show, this decoupling is still a work in progress. I still have a lot of hardcoded strings linking different pages and routes of the app.
Freedom from Dependencies
Writing my own router is what pushed me to finally implement multiple windows. I'm now able to have multiple routers in the state for each window which greatly simplifies the logic around creating and navigating windows.
Currently, only one router is persisted in the URL, but nothing's really stopping me from persisting all of them. User experience wise, there's still clearly a "main" window, while the others are more ephemeral, so persisting them in the URL doesn't make much sense. Persisting them in local or remote storage might be more useful.
Anyway, if there's one thing I learnt from this, it's to not let your dependencies, libraries and frameworks narrow your thinking. If one prevents you from doing something, it's time to let go of it. You'll save a lot more time than by working around it, which will only couple you to it even more.
I implemented an API for type-safe routing to go with my router. The main goal was to remove the hard coded strings used to link pages.
```javascript
// Route
"/explore/:gameId/entities/:entityId"

// Path using a hardcoded string
"/explore/astral-chain/entities/enemy-123"

// Path using a type-safe API and strings for IDs
root().explore('astral-chain').entity('enemy-123').build()

// Path using a type-safe API and typed objects with IDs
root().explore(game).entity(entity).build()
```
This has a number of advantages:
Both the routes and the API for creating paths can be created in a single declaration to avoid inconsistencies.
It's impossible to have an invalid route or typo.
Autocompletion will kick in so I can see which routes are available.
I can change the output of build() without going through every link.
I can find where each route is used using code discovery rather than loose text searches.
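Here's a minimal sketch of how such a builder might be declared. The shapes and helper below are placeholders; the real declaration also generates the route patterns themselves.

```typescript
// Illustrative types; the real Game/Entity models are richer.
type Game = { id: string };
type Entity = { id: string };

// Accept either a typed object or a raw ID string.
const id = (value: { id: string } | string): string =>
  typeof value === "string" ? value : value.id;

// Each level of the builder returns the next set of valid segments,
// so invalid routes simply don't type-check or autocomplete.
const root = () => ({
  build: () => "/",
  explore: (game: Game | string) => ({
    build: () => `/explore/${id(game)}`,
    entity: (entity: Entity | string) => ({
      build: () => `/explore/${id(game)}/entities/${id(entity)}`,
    }),
  }),
});
```

Because each step only exposes the segments that exist under it, a typo like `root().explre(...)` is a compile error rather than a broken link at runtime.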
I haven't rolled this out yet as I'm still not 100% on how I want to structure the new router state.
Once all of this is complete, I'll have a new open source library to release.
FrontierNav Data Tables
The spreadsheet-like data tables are gradually becoming more feature-rich. The tables now support sorting by column, importing pre-formatted CSVs, moving entities and various other quality-of-life improvements. No doubt they'll keep getting better as I import more game data.
Hiding Behind Cloudflare
Most of my public infrastructure is now behind Cloudflare. This should reduce the amount of maintenance I need to do in regards to legacy domains (like jahed.io) and security.
I've disabled direct HTTP access to my servers entirely; HTTP connections and legacy domains now redirect using Cloudflare's Page Rules. No server needed.
While this does tie me more to Cloudflare, I am already reliant on it for reducing bandwidth costs so from an end user perspective, nothing's really changed. Moving traffic back to my server is a matter of removing SSL Client verification and adding the redirects.
I automated node-terraform releases last week, and this week I confirmed that it all works. It picked up two versions of Terraform, ran tests and published them. As usual when it comes to using CI services, I had to make some minor tweaks to fix YAML and environment runtime errors.
I found out npm has a deprecate command so I went through my unmaintained packages and deprecated them.
Over the years FrontierNav's data model has changed a lot. This meant various parts of the codebase and web app used a range of words to describe the same thing. As I start introducing more terminology to the public, there's really no room for confusion. So I went through the project, migrated all the data and reduced the number of terms.
As an example, most data models have several terms that mean the same thing. FrontierNav probably used at least 4 of these terms in the same contexts, but now it only uses two: Entities and Relationships.
It's really hard for me to tell how well FrontierNav performs on lower-end devices as I don't own one. So as a general rule, after finishing a substantial feature, I'll put in some time to improve its performance or at least look into it.
This week, I optimised the Data Tables, which use artificial viewport scrolling to avoid rendering thousands of rows at once. Any lag in rendering is easily noticeable while scrolling, even on high-end devices, so it's important to optimise it as much as possible. In the React world, that usually means caching function calls (memoizing) and reducing re-renders in general. There's not much else to say about it. React's DevTools are good enough: they provide rendering times and show what triggered each render, which makes it clear where the cost comes from.
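As a rough illustration of the memoization idea, stripped of React specifics: a cached function only recomputes for inputs it hasn't seen, so scrolling back over the same rows does no repeated work. The names here are made up for the example.

```typescript
// Generic single-argument memoiser: the same principle React applies
// via memo/useMemo, shown standalone for clarity.
function memoize<A, R>(fn: (arg: A) => R): (arg: A) => R {
  const cache = new Map<A, R>();
  return (arg: A): R => {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg));
    }
    return cache.get(arg) as R;
  };
}

// Hypothetical example: a row label is computed once per row ID.
let computations = 0;
const rowLabel = memoize((rowId: number): string => {
  computations += 1;
  return `Row #${rowId}`;
});
```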
At some point on GitHub, I switched off "automatically watch repos that you have push access to". This was back when I was in a team that had hundreds of repos and I didn't want to auto-watch all of them. Little did I know that also included my personal repos... so I wasn't getting notifications on any of my new ones.
Back in August, someone asked me to publish a new version of node-terraform to coincide with a new version of Terraform. But since I wasn't watching the repo, it went completely under my radar. I barely use Terraform personally, other than for provisioning some of my personal infra -- which barely changes -- so I had no idea.
Automate Robotic Work
For some background, node-terraform is a NodeJS wrapper I made to automatically download and link up separate versions of Terraform per-project. Each wrapper version matches a version of Terraform, so when a new Terraform version comes out, I need to release a new version of the wrapper. This way, the Terraform version is maintained through NodeJS's package manager with every other dependency. Why NodeJS? Most of my projects use it so it integrates nicely with my workflow.
After catching up with new versions, I decided that, if I was going to maintain the project long-term, I needed to automate the process sooner rather than later. I originally planned to use CircleCI, but with GitHub Actions, a continuous integration (CI) service built into GitHub, I could avoid introducing another third-party dependency.
Open Source and Infinite CIs
I really dislike how there are so many CI services, each with their own way of doing things. It means if you want to contribute beyond a certain amount to a project, chances are you'll need to figure out a new CI service: how it structures jobs, how it provisions infra, and its dreadful YAML configuration. None of it is meaningfully different. They all do the same stuff, just slightly differently.
GitHub Actions initially tried something a bit more fresh, with HashiCorp Configuration Language (HCL), the same language Terraform uses. A new CI that tries to remove YAML is more than welcome in my eyes. However, they dropped HCL and moved to YAML recently so... yeah. More of the same. Actually worse, because most of the existing articles on GitHub Actions are just useless now. The service is still in beta so I'm not too fussed.
I really hope we get some sort of "OpenCI" initiative so we can end this cycle.
I've now got automated processes for:
Publish. Running tests and publishing new versions every time a "vX.X.X" tag is added to a commit.
Version Check. Checking for new Terraform versions every day, running tests and pushing the new version, which triggers a Publish.
Dependency Check. To automatically upgrade dependencies every week. Running tests and pushing. This doesn't trigger a Publish, but allows me to know when a dependency introduces a breaking change. Changes will eventually be published when something else triggers the Publish action.
I don't know if the scheduled checks work as they haven't picked up any changes yet. Only time will tell.
Automating the dependency check seems a bit risky as I'm not manually auditing the third-party changes being pulled in before they're pushed. However, GitHub and NPM provide security notifications so that should be covered retroactively.
It's pretty much impossible for me to check all the external code every time a new package is published. Instead, it comes down to minimising dependencies as much as possible, and making sure the ones that are needed are alive and well.
At the end of all this, I now have a pretty generic set of actions which I can copy over to my other projects to make maintenance less tedious.
I missed updates over August so I'll try filling in the blanks for this report.
Added more editing features to the Data Tables, such as adding columns, tables, etc.
Added interactive guides for Astral Chain (work in progress)
Added ability to create Map Features directly from the Map
Added ability to create Entities from the Universal Search
Added ability to remove enforced filters from Universal Search (e.g. when adding new Relationships, you might want to point to an Entity that isn't currently supported by the Relationship)
Added a dropdown menu to Entity pages to edit them individually (work in progress)
Starting research into real-time features such as data entry collaboration, community features.
Guides for Astral Chain
On 30th August, Astral Chain released for the Nintendo Switch. So I spent the first few days after its release playing it to completion. After that I started working on an interactive guide.
Implementing the in-game "File Select" menu for the Web was a lot of fun. Getting it to flex and fit various screen sizes was the most challenging part. Game consoles have the benefit of rendering at a fixed resolution; the Web doesn't really have that luxury (unless you're willing to compromise the user experience).
I don't use the vh unit much in CSS, but it was perfect for this use case to scale up the size on larger displays. Dealing with the various behaviours of nested flexboxes was probably the least enjoyable part, but that's more to do with the tedium of setting up the necessary constraints rather than issues with CSS Flexbox itself.
Adding Features as Needed
While I was adding data for Astral Chain, I made the effort to use the editing features built into FrontierNav rather than writing single-use scripts. If anything I needed was missing or too tedious, I improved that workflow so that future edits can benefit from it.
This is probably the best way to work on FrontierNav going forward, as it provides clear acceptance criteria for each feature: did it actually solve the problem I was having, and did it speed up data entry?
I can probably start adding more dynamic features into FrontierNav now. Things like real-time collaboration, user-suggested games, image uploads etc. The only thing stopping me is server costs. The current income from Patreon won't be enough to cover it. However, it's a bit of a "chicken or the egg" situation. If I don't add these features, users will have less incentive to contribute in the first place. So I'll try adding them and hopefully it gets people to help cover the costs.
The minimum cost for most of this is around $50 per month, so I've set a new goal on the Patreon page. Please consider contributing if you haven't already on Patreon.
Now that I'm spending more time on FrontierNav, I did a lot in July, so looking back and writing it all up is draining.
I'm also writing the weekly reports, which makes it difficult to filter out things I've already said in the previous month's report and in previous weekly reports.
So, I may decide to stop writing monthly reports in the future. Instead, I'll link to the weekly reports and related articles as a summary.
Let me know what you think.
The user interface for FrontierNav has been overhauled. Apologies in advance if the changelist lacks context; the main thing to take away is that a lot has changed. To give things more context, I'd have to go through the old user interface and take before-and-after screenshots, which is a lot of work.
The aim of the overhaul is to provide faster, more consistent navigation and, in the future, to support multiple visualisations on the same page.
Navigation bar is now entirely on the left.
Moved search to a button which activates a search overlay.
Added a "CTRL + P" shortcut to toggle search.
Sidebar is now available on all pages.
Removed overlaying behaviour of sidebar on desktop.
Sidebar now pushes content to the right instead of covering it.
Removed toggle, forward and back "tabs" on sidebar in favour of universal title bars with close buttons.
Moved user drawer to the left for consistency.
Added user's library to the navigation bar for easy access.
Removed navigation from game pages.
Navigation is now only on the navigation bar without duplication.
Removed game description and video trailer from game overview page.
Removed "wiki" pages in favour of showing the universal sidebar.
The universal search is now using a "fuzzy" search algorithm again. This means text doesn't need to exactly match the thing you're looking for and results appear faster.
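For illustration, one common fuzzy-matching approach (not necessarily the exact algorithm FrontierNav uses) is a case-insensitive subsequence match: the query matches if its characters appear in order anywhere in the candidate.

```typescript
// Returns true if every character of `query` appears in `candidate`
// in the same order, ignoring case. "xcx" matches "Xenoblade
// Chronicles X" even though the characters aren't adjacent.
function fuzzyMatch(query: string, candidate: string): boolean {
  const q = query.toLowerCase();
  const c = candidate.toLowerCase();
  let i = 0;
  for (const ch of c) {
    if (i < q.length && ch === q[i]) {
      i += 1;
    }
  }
  return i === q.length;
}
```

A single pass over the candidate is enough, which is part of why fuzzy matching can feel faster than exact matching with lots of candidates.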
Xenoblade Chronicles X
Added Treasure map.
Added minimaps for all existing maps.
The editor is now in a working state. It is not "feature complete" as the number of possible features is endless. The main focus now will be to use it as the sole way to add data to FrontierNav, adding any new features to it when needed.
The list of changes here will be too long so I've recorded some simple examples below. Try it out yourself if you want.
I'm going to use the Editor now to add more data into FrontierNav for Xenoblade X, Xenoblade 2 and some of the emptier games. The main goal being to add more immediately obvious functionality to it.
This week I've been doing a lot of UI experiments. A lot of it was fueled by yet another poor desktop experience being introduced by a major web company. This time it was Twitter. Luckily they have Tweetdeck which provides a much better desktop interface anyway.
Added change tracking
Added support for exporting changes
Added support for importing changes
Tables now support keyboard navigation.
Made sidebar universal.
Everything is now consistently on the left. No more navigation on the top, user login on the right, page navigation on the left, sidebar on the left, etc.
This is a step towards a frame-based UI to support multiple pages and visualisations on a single screen using frames. Kind of like windows but more intelligent so that FrontierNav can make better use of space on larger screens to reduce clicking around.
It feels like I'm rebuilding an operating system GUI...
Experimented with smaller font sizes and padding to see how a space-optimised FrontierNav might look and behave. This will become more important when a frame-based UI is available and screen space becomes more valuable.
FrontierNav Editor Initial Release
The FrontierNav editor is pretty much complete. Of course, there are a lot of features and improvements I'll be adding along the way, as with any product. But in terms of data entry, the foundations have all been laid.
The biggest issue now I guess is figuring out the best ways for users to apply their changes. FrontierNav isn't active enough to allow any public change to get applied without moderation. So chances are, I'll go for a request-based approach. Users can make changes and send them over for approval when they're done using the import/export feature.
As a first step, I'll be using the editor to fill in the missing data for Xenoblade 2 and X. Adding functionality to the editor as I need it for others to use in the future. That's a much better approach than hacking scripts together and throwing them away.
Automated Change Submissions
I'll probably remove the need to import/export changes in the near future. If changes are stored on Firebase, there's no need for it outside of offline usage. However there's a risk of flooding the database so it'll need to be restricted to trusted "Editors" or rate-limited.
I finished "Donkey Kong Country: Tropical Freeze" on the Switch finally. It's a good game. Astounding music and level design. Some flaws. Very different from Mario's more nimble and acrobatic platforming. Donkey Kong's a lot more weighted and slow; as you'd expect from a gorilla.
I'll probably pick up "Starlink: Battle for Atlas" from my backlog next. Not expecting much from it.
It seems every week there's a new vulnerability found in an npm package. While this is great progress for the community, it's a pain to deal with when maintaining multiple projects. Automation is a solution, but then you need to maintain the automation. Removing dependencies and writing your own just means you'll probably have vulnerabilities that you don't know about since no one's looking. Eventually dependencies rot and security issues take over. At that point you need to abandon ship and find a new one.
The proper solution to this is "Server-Side Rendering" (SSR). I've tried SSR before, but it doesn't work here. A lot of the code wasn't written with it in mind, so it just fails. Fixing it would mean going through a lot of code, and also making sure any future code works in both cases. It's a big investment.
An alternative I've been thinking of is to go through every page in a web browser and dump the HTML. I've tried it this week using Puppeteer to automate the process and... it works.
There are a few issues though: cache invalidation (as always) and deciding where to run it. It can't run server-side per request, since it uses a whole web browser, which is slow and eats up a lot of resources, so it'll need to run in parallel. Which in turn means it'll need to know which pages to cache.
Right now, the experiment is using the sitemap.xml to scrape FrontierNav. That works for the more simple pages. But pages like the Maps have potentially hundreds of elements, and dumping all of that into HTML will be ridiculous. These pages are the most likely to be shared on social media. I could strip the excess post-render, but then the solution becomes more complex, at which point SSR becomes more appealing again.
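As a sketch of the crawl-list side (illustrative, not the actual implementation), pulling page URLs out of sitemap.xml to feed a headless browser like Puppeteer can be as simple as extracting the `<loc>` entries. A real implementation should use a proper XML parser; a regex is enough to show the idea.

```typescript
// Extracts every <loc>...</loc> URL from a sitemap.xml string.
// These URLs would then be visited one by one in a headless browser
// to dump the rendered HTML.
function extractSitemapUrls(xml: string): string[] {
  const urls: string[] = [];
  const pattern = /<loc>([^<]+)<\/loc>/g;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(xml)) !== null) {
    urls.push(match[1].trim());
  }
  return urls;
}
```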
Running in parallel is also quite wasteful, since the vast majority of pages won't be directly navigated to. I could make it more intelligent and have it use access logs to find the pages it should cache, but then it's caching after the fact, so it'll always be one step behind.