Jahed Ahmed

FrontierNav Report: October 2019

Progress Report

I wasn't planning to write a monthly report since I'm already writing weekly ones. But I realised the weekly reports cover a wide range of topics, and it's nice to have a monthly update focused solely on FrontierNav. This report only covers October; I'll share what I did in November in a future report.

Changes in October

Windows in FrontierNav

Pop-out Windows

Pop-out Windows are the biggest feature this month. It's a huge convenience on desktop and saves a lot of clicking around.

They are a bit limited. Windows can't be resized and their content is static. However, the groundwork has been laid for more advanced features using the "Window Manager", such as...

Sidebar Behaviour

Currently the Sidebar is tied to the URL. The Main Window, where the Maps and other visualisations are rendered, also relies on the URL.

Previously, FrontierNav only really had one context so sharing a single state, the URL, was never an issue. But the limitations are starting to show as new features start conflicting with existing ones.

For example, on mobile viewports, the Sidebar covers the Main Window. At the same time, closing the Sidebar causes the Main Window to change too, making certain pages inaccessible. I've been working around this by essentially having permutations of state for each page: one for the Sidebar, one for the Main Window, and one for both. This is obviously a major headache to manage.

Ideally, the behaviour of the Sidebar should depend on the context. So having the contexts drive that behaviour makes the most sense. Things like "Show the Sidebar when the user selects a search result", "Show the Sidebar when the user expands a table row", and so on.

Now that a Window Manager is implemented, this sort of behaviour should be easier to implement. The Sidebar pretty much is a Window, except it's docked to the left side rather than freely floating.
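
To illustrate, here's a rough sketch of what that state could look like. The field names are my own shorthand, not the actual implementation:

// A window is just routed state; the Sidebar is one that happens
// to be docked. All names here are illustrative.
interface AppWindow {
  id: string
  route: string
  docked: 'left' | null
}

const windows: AppWindow[] = [
  { id: 'sidebar', route: '/explore/astral-chain', docked: 'left' },
  { id: 'popout-1', route: '/entities/enemy-123', docked: null }
]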

I haven't released this change yet, but it's one example of what the "Window Manager" enables.

Data Tables

As always, Data Tables have been gaining features as needed. None of them are major enough to list individually. I've also added more data for Astral Chain, such as Enemy Spawns, and tidied up existing data from previous games.

There are still certain processes that I need to migrate over. Image upload is probably the most obvious one, but it carries a huge security and cost risk compared to everything else. There's also templating to properly render the data in the Sidebar, which is currently done in code.

I'm going to also have to start thinking about on-boarding processes to get others to use the data editing tools. Things like documentation, user guides, integrated merge processes and so on. There's a lot to do.

Thanks for reading.

Weekly Report: 18th November 2019

Image Uploads on FrontierNav

I wrote a separate post about this since it's a bit long.

Weird Traffic

For some reason, Cloudflare has started to report an ever-increasing number of "Unique Visitors". Currently, it stands at 4 times the usual levels. It'd be great if that were true, but I'm doubtful.

My access logs, which avoid Cloudflare's cache, say it's more or less the same as before. Cloudflare's other metrics like "Total Requests" are also the same as before. Nothing else is following this new trend. So there's no reason to believe it.

GitHub Actions

I noticed node-terraform's automated publish workflow wasn't being triggered when new version tags were pushed. I use the exact same trigger for FrontierNav and it works fine. The only difference is that I manually push tags for FrontierNav, whereas node-terraform's tags are pushed by another workflow.

I'm kind of burnt out from debugging GitHub Actions so I'm giving it a break. It's probably an issue on their end or yet another caveat like a lot of the previous issues I had.

Google Search is Trash

I've been using DuckDuckGo as my default search engine for over a year now. Everything's been good, and having the !g command to fall back to Google has helped ease the transition to a less forgiving service.

However, I noticed something: Google is becoming worse at being a search engine as time goes on. It's full of "SEO" trash websites. The results are useless without basically telling it which website to search through using the site: keyword. The top half of the first page is always full of auto-generated junk too.

I don't know how long this trend will last, but I'm becoming more and more reliant on my bookmarks nowadays to find specific sites and run searches through them.

Thanks for reading.

Image Uploads on FrontierNav

One of the most common processes when adding new data is editing and uploading images and other media. Currently, I'm basically rsync-ing images directly to the server. If anyone has images to share, I need to download them and rsync manually. So it made sense to streamline this process on FrontierNav after I introduced Merge Requests last week.

Security

The main issue with handling images is the security risk. This is true for pretty much all user-generated data. Anything in the image processing pipeline can have bugs and vulnerabilities, ready to be exploited. In fact, it's pretty common to see new disclosures for these sorts of issues every now and then. Even the tools that are used to make images "safe" are vulnerable.

Given this never-ending issue, most applications split their content into separate services, isolating the potentially bad parts from the good parts. You can see this in your network logs with domains containing phrases like "usercontent".

Firebase conveniently provides asset storage where users can directly upload files with strict rules. Given these files are hosted and served through Firebase and Google Cloud Storage, it's already pretty isolated from the rest of FrontierNav.
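
As a sketch, an upload through the Firebase SDK (using the v8-era API that was current at the time) boils down to something like this; the bucket path is an assumption:

// Upload a file to Firebase Storage and get back a servable URL.
// The 'user-content/' path is illustrative, not FrontierNav's layout.
import firebase from 'firebase/app'
import 'firebase/storage'

async function uploadImage(name: string, blob: Blob): Promise<string> {
  const ref = firebase.storage().ref().child(`user-content/${name}`)
  await ref.put(blob)
  return ref.getDownloadURL()
}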

Versioning

As FrontierNav's data is versioned, images need to be versioned too. For example, say one version of the data pointed to "character.jpg", which had a full-body view of a character, but then "character.jpg" was replaced with a mugshot. Newer versions know it's a mugshot, but older versions now point to a mugshot thinking it's a full-body view.

To solve this, all images must be named using a hash and file size. So when an image changes, it's uploaded as a new file instead of overwriting an existing one. "Versions" of different images are tracked in the data using their hashes rather than being tied to the filename.

Another benefit is that images are automatically de-duped, at least in a basic sense: images that differ by a few pixels won't be de-duped as they'll have different hashes.
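
As a rough sketch, generating such a name in the browser could use the Web Crypto API; the exact naming format here is an assumption:

// Name an image by its content hash and size so edits become new
// files instead of overwriting existing ones.
async function imageFileName(blob: Blob): Promise<string> {
  const digest = await crypto.subtle.digest('SHA-256', await blob.arrayBuffer())
  const hash = Array.from(new Uint8Array(digest))
    .map((byte) => byte.toString(16).padStart(2, '0'))
    .join('')
  const extension = blob.type.split('/')[1] || 'bin'
  return `${hash}-${blob.size}.${extension}`
}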

Local Storage and Offline Support

Images uploaded to FrontierNav are not actually uploaded immediately. They're only uploaded on Merge Request. This stops rough edits from needlessly polluting storage space.

A positive to this approach is that FrontierNav now supports loading images offline from local storage, IndexedDB to be exact. The rest of the app isn't offline-capable, but this is a major step forward.
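
Roughly, staging a pending image locally looks like this; the database and store names are made up for the example:

// Store a not-yet-merged image in IndexedDB so it can be shown offline.
function stagePendingImage(hash: string, blob: Blob): Promise<void> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('pending-images', 1)
    request.onupgradeneeded = () => request.result.createObjectStore('images')
    request.onerror = () => reject(request.error)
    request.onsuccess = () => {
      const tx = request.result.transaction('images', 'readwrite')
      tx.objectStore('images').put(blob, hash)
      tx.oncomplete = () => resolve()
      tx.onerror = () => reject(tx.error)
    }
  })
}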

Other Media

Technically, nothing's stopping me from allowing other media like videos and audio to be uploaded using the same process. But I don't have a need for them yet. Once there is a need, I'll open that up.

Image Editing

All of the above solves one part of the problem: uploading. The other part is a bit more difficult: editing. It's fine to edit images locally, then upload them. But for basic processes like cropping and resizing, it can be a bit tedious to open, edit, save and upload individual images.

Though it is tempting to implement an Image Editor just for the fun of it, I'm going to hold myself back. Instead, I'll focus on adding more data and reducing more urgent points of friction in the data entry process.

Thanks for reading.

Weekly Report: 11th November 2019

Merge Requests on FrontierNav

After building the automation pipeline last week, I moved onto the main feature driving it: Merge Requests.

It's worth mentioning again that FrontierNav's data is database-free. The data is packaged as part of the website for various reasons. Its dynamic nature makes it very easy for changes to conflict, and individual changes may not be valid without the rest of the dataset.

Given this, there are two ways people can contribute to FrontierNav:

  1. Providing me the data which I transform to be FrontierNav-compatible,
  2. Using FrontierNav's UI to modify data.

The current goal is to make option 2 easier so that I'm not constantly spending time on option 1.

Currently, users can use the Data Tables to modify data in a basic spreadsheet-like manner and add markers on the map. However, to apply that data for others to see, users need to export the changes and send it to me manually. Then, I need to re-apply those changes locally and deploy them.

This process is slow and tedious, even when I'm the only one acting on them.

The idea behind Merge Requests is pretty simple.

  1. Users can now submit their changes directly from FrontierNav without needing to export them.
  2. An admin can review those changes within FrontierNav and approve them.
  3. An automated pipeline picks up approved changes and deploys them.
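
For a sense of the data involved, a submitted change set might look something like this; the shape is a sketch, not the actual schema:

// A Merge Request bundles a user's edits for review and deployment.
interface MergeRequest {
  id: string
  authorId: string
  status: 'pending' | 'approved' | 'deployed'
  changes: Array<{
    op: 'add' | 'update' | 'remove'
    path: string // e.g. 'games/astral-chain/entities/enemy-123'
    value?: unknown
  }>
}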

This week, I pretty much implemented this entire process. I won't be relying on the automation just yet as it's not been fully proven. Instead, I'll run the same scripts manually to make sure it's working.

I was going to put a recording of the process here but OBS is being a bit glitchy at the moment.

Firebase Addiction

As much as I hate tying FrontierNav's features to Firebase's Real-Time Database, I admit it's a massive convenience and hard to avoid.

I may have made it even harder to avoid when I wrote firebase-rules, which has allowed me to re-use chunks of logic that enable things like rate-limiting, readable conditional statements and manual indexing.

The only things really stopping me are the usage limits, but even at that point, it might be easier to pay up than to move to something else.

So why do I hate it? Because Firebase is extremely opaque. It doesn't provide much in the way of details, which I don't blame it for; that's its selling point and that's why I use it. But when the time comes that I outgrow it or lose access to it (knowing Google), I need to be ready.

Browser Automation

A while back, I decided that all features driving FrontierNav should use the web client. This was to avoid writing one-off scripts and piling on technical debt. If a feature is available on the web client, it's technically available to everyone, including myself when I'm not on a workstation.

So when I implemented Merge Requests, I needed a way to automate the deployment process as though a human could do it. This way, if the automation is no longer available, I could easily do it myself.

Initially, I used Nightwatch to run these automations. Nightwatch is what I use for integration testing, so it does support browser automation. However, it's not a good fit for general automation: Nightwatch is focused specifically on writing tests, and steering away from that is difficult within its test-oriented framework.

So I moved over to WebdriverIO which recently split its test runner from its automation. Perfect. I had to figure out a few things that Nightwatch provides out of the box, but it wasn't a big deal. And WebdriverIO's documentation is so much better.
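
As a sketch, WebdriverIO's standalone mode boils down to something like this; the URL and selector here are placeholders:

// Drive a real browser outside of any test framework.
import { remote } from 'webdriverio'

async function deployApprovedChanges() {
  const browser = await remote({
    capabilities: { browserName: 'chrome' }
  })
  await browser.url('https://frontiernav.net')
  await (await browser.$('button=Approve')).click()
  await browser.deleteSession()
}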

I'm now planning to move over entirely to WebdriverIO in the future, with my own wrapper to avoid being locked into a framework. Nightwatch's activity has been a bit on-and-off recently with a focus on selling their testing solution, and the documentation isn't very good.

Thanks for reading.

Weekly Report: 4th November 2019

Automating FrontierNav

In last week's update, I briefly mentioned automating FrontierNav's deployments. Well, this week I went ahead with it. Why? The main reason is that I'm working towards moving FrontierNav's workflow completely off my workstation. If I want others to contribute, even myself, I don't really want access to my personal computer to be a requirement.

Eventually, everything should be available from FrontierNav's web client. That's the broader goal. Currently, I'm focusing on data entry and I'll have more to share on that next week.

Output of the Deployment Pipeline

GitHub Actions

GitHub Actions is GitHub's new automation offering. It's currently in preview, though it's due to be released this month. It's very much a minimum viable product, but their strategy has worked. For me, at least.

I was originally planning to move to GitLab and use their automation service, but I didn't realise that they don't provide runners; I'm expected to provision those myself. CircleCI is another offering that does provide runners, but having to share a pipeline between multiple third parties was a turn-off.

So GitHub Actions was the obvious solution. Both my code and pipelines in one convenient place. It wasn't easy. There were a few issues, especially related to caching between builds. But it's done.

Testing Babel

Babel 7.7 was released this week and I upgraded FrontierNav to use it. This time round, instead of upgrading and hoping the integration tests pick up any broken changes, I decided to test Babel separately using snapshots.

I don't know why I didn't do this to begin with. With snapshots, I'm able to test each transformation Babel performs, making it clear why Babel is configured the way it is. It also catches any regressions and potential errors introduced by Babel without having to dig through layers of Webpack transformations.
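
As a minimal sketch, a snapshot test for one transformation can be as small as this, assuming Jest picks up the project's Babel config:

// Snapshot the output of a single Babel transformation.
import { transformSync } from '@babel/core'

test('transforms arrow functions', () => {
  const result = transformSync('const double = (n) => n * 2;', {
    filename: 'example.js' // lets Babel resolve the project config
  })
  expect(result?.code).toMatchSnapshot()
})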

Considering Ansible

My server has gotten to a point now where I have a good reason to use Ansible. I used Puppet extensively before, but it's extremely heavy for a handful of servers and Ansible seems a lot simpler.

I would much rather move to a "serverless" solution like AWS S3, but my current server plan is a lot cheaper and more predictable. So I'm in no rush.

One thing I did notice about these provisioning tools is that they really make it hard to find their documentation. I don't blame them. Paying for support is probably how they can afford to create these tools in the first place. However, it was a constant annoyance during my research phase.

More Games

I've added more games to FrontierNav. Right now that's kind of pointless, but it keeps me motivated. One of the reasons I'm creating FrontierNav is because I've always wanted to make a website for the games I enjoy. A sort of tribute.

In the late 90s and 00s, when I was gradually exposed to the World Wide Web, I always came across fan sites and I wanted to make my own. It's a shame this trend has slowed down with the introduction of the popular web (popweb). It's one of the reasons I started my "World Wide Wanderer" series, though I haven't updated it much.

Added Games
New games are added, though they don't have much data.

Fan Sites

Among the games I added were Lufia and Lufia II. Though I was too young to fully appreciate them at the time, they left a massive impression on me, especially the soundtrack. The fan sites surrounding them have of course gone quiet as the series died off, and some have been lost forever.

GameFAQs seems to be the best place to find communities for these older games, which is somewhat boring since the site is very uniform and not much of a tribute to the games it hosts. This is a common trend in popweb and one I want FrontierNav to avoid. It's why I introduced more games now, so that the platform grows the features to present them properly rather than retro-fitting them later. While theming isn't a priority yet, it will be once there's data to warrant it.

Preservation

It's nice that GameFAQs is keeping a historical catalogue of guides and walkthroughs written by fans. However, these documents would be better placed with non-profits like the Internet Archive, outside of commercial interests and the risks that come with them.

Thanks for reading.

Weekly Report: 21st October 2019

Improving FrontierNav's Tests

This week was mostly about improving FrontierNav's tests, which I wrote about separately since it's quite long. You can read about it here.

Sandboxing

I started using Flatpak (via Flathub) for most of the applications I use. I don't know why sandboxing hasn't become the default by now but it's nice to see some progress on it.

Deno also comes to mind as a replacement for Node.js/NPM. It doesn't sandbox, but at least it restricts access. Though, I'm not entirely sure if its reliance on grabbing remote dependencies via HTTPS is safe without lockfiles and hashes. I took a quick look just now and it looks like it's being figured out.

Thanks for reading.

Improving FrontierNav's Tests

Previously, FrontierNav's tests ran against the local development build. This meant I only had to maintain two environments: development and production.

The differences between the two meant that, in some cases, the optimisations applied to production would cause problems, and since I wasn't testing against production, the only way to know was to test manually or check the error logs. Sometimes I'd pick up the problem immediately; other times, it'd take days to discover.

The solution is obvious: test the production build. This week, I finally got around to doing it.

Production but not really

The first issue is that I don't really want to test in production. I want to test the build, but I don't want to affect users. Users should not see a "Test User" going around the app, creating and deleting things to make sure it all works. So I created a new "environment" to differentiate production from what's typically called "staging", which points to different databases and services.

A New Environment

The concept of an environment is extremely bloated. Node.js has its own "NODE_ENV", which most libraries and frameworks handle as a boolean: it's either "production" or not. If you want a production-optimised build, "NODE_ENV=production" is needed, so I'll need to keep it around. Setting it to "staging" would be no different from setting it to "development"; I need it to be "production" for "staging".

This new FrontierNav-specific environment, "FN_ENV", can be set directly or it'll fall back to whatever NODE_ENV is. In short, NODE_ENV defines the build, whereas FN_ENV defines the deployment.

# FN_ENV will fallback to "development"
NODE_ENV=development webpack

# Explicitly set FN_ENV to "staging"
NODE_ENV=production FN_ENV=staging webpack

# FN_ENV will fallback to "production"
NODE_ENV=production webpack
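
In the build itself, the fallback is a one-liner. Something along these lines, with the DefinePlugin usage as an illustration rather than the actual config:

// webpack.config.ts (sketch): FN_ENV defines the deployment,
// falling back to NODE_ENV, which defines the build.
import webpack from 'webpack'

const FN_ENV = process.env.FN_ENV || process.env.NODE_ENV || 'development'

export default {
  plugins: [
    new webpack.DefinePlugin({
      'process.env.FN_ENV': JSON.stringify(FN_ENV)
    })
  ]
}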

I could have added a step to the execution that only takes an FN_ENV and calculates the NODE_ENV, but since these flags are only set in a few places, adding and maintaining that additional layer is excessive right now.

Separating Configuration

The next step was to pull out all of the variables -- things like the database, API keys and image servers -- into multiple configuration files, one for each environment.

I could have the configuration be fetched at runtime. However, while this would allow a single "production" build to be used for both "staging" and "production" deployments, it also means I'd need a system to deploy and roll back the configuration separately, adding to the complexity.

A solution could be a two-step build: one for the common code, and another for each environment which passes a configuration to the first build. I looked into this a bit, but getting it to work with all the optimisations, code splitting, etc. seemed too complicated. It's easier to manage one build per environment.

The downside to having to build for each environment is that it multiplies the time waiting for builds, which is currently just below 2 minutes each. (I'll go into this in a bit.)

There's also no guarantee that the two builds are identical; there might be bugs in the minifier or some other step that cause a behavioural difference. But these are rare cases that can exist throughout the software pipeline, so it's not worth acting on them until they actually cause problems.

Staging Deployment

Finally, I had to make a deployment as close to production as possible. This meant adding a new domain and another HTTP server configuration. Previously, I was testing the project locally, so none of the server configuration (Nginx and Cloudflare) was tested. Now it is!

Optimising Build Times

To reduce Webpack's build times for "staging" and "production", I first looked into caching. Since both of these environments are essentially the same, minus a few variables, caching the results seemed like an obvious start. Caching does, however, have some issues.

Cache invalidation is always a problem. The cache identifier needs to be accurate to avoid using stale caches between builds. Managing that identifier is really complicated. For example, for Babel, I'll need to know the current version of Babel, all of the plugin versions, the configuration and the browser targets (which are separate from the configuration as they're used by other tools). If anything new comes into play, I'll need to remember to add it to the list. Maintaining all of that would be a headache, and a huge risk if, say, "production" variables go into "staging" and the tests end up polluting the entire database.
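
For example, babel-loader accepts an explicit cacheIdentifier, and keeping it honest means tracking every input that affects the output. A sketch, with illustrative paths:

// Every input that changes Babel's output has to invalidate the cache.
const cacheIdentifier = JSON.stringify({
  babel: require('@babel/core/package.json').version,
  config: require('./babel.config.js'), // plugins and their options
  targets: require('./package.json').browserslist // shared browser targets
})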

Second, looking up the cache can take more time than rebuilding a file. A lot of modules in JavaScript are tiny, so for all those tiny files, the overhead of doing a file system lookup can increase rather than decrease the build time for that file.

Overall, I did see a ~10% reduction in build time. However, that was the difference between 110 seconds and 100 seconds, still over a minute and in real terms, it won't affect my workflow.

Concurrency was another option. By building each file in parallel, I saved around 20 seconds. But again, not really noticeable in real terms.

Real Terms

I mentioned "real terms" when it comes to build times. For me, this asks the question: Does it make me more efficient?

If a release takes more than, say, 30 seconds, I'm going to be doing other things: taking a break, planning the next step, reading up on something, etc. Once I'm at that step, I probably won't get back to development for around 5 minutes at least. So having a release take 60 seconds instead of 300 seconds makes no difference to me. It's still more than 30 seconds and less than 5 minutes. Adding more complexity without any real benefit doesn't make sense.

Automated Deployments

There's no doubt that as I introduce more test cases, a release will start taking well beyond 5 minutes. At that point, I'll need to change my workflow. Instead of finding something else to do while a release goes through, I should be able to start on the next thing and let the release alert me when it's done.

To do that, I'll need a continuous integration server, otherwise my open changes would conflict with the release. But maintaining a CI server and introducing a new workflow is a lot of work, so I'll cross that bridge when I get there.

Thanks for reading.

Weekly Report: 14th October 2019

FrontierNav Windows

FrontierNav now supports windows. Not Windows, that's already supported. That is, you can now open up links in pop-out windows to keep them persistent between page navigations. This avoids needing to constantly click around and go back and forth between pages.

I'm still working out the user experience on smaller screens, but it's not a priority given mobile apps tend to be geared towards a single window experience.

The worst part about this feature was getting the naming right. "Window" is such a generic term and it's used everywhere. I settled on "AppWindow", though that doesn't help when naming variables. Calling everything appWindow so that it doesn't clash with the window global seems a bit too verbose.

Windows in FrontierNav

Writing my own Router

I finally decided to replace the deprecated redux-little-router dependency I had. I knew it was a relatively simple replacement so I kept putting it off. I even forked it to fix a few issues as it started to go through bit rot. When I did sit down to replace it, I realised a few things.

The idea behind a router for web apps does not need to be tied to the limitations of a URL. A router is really just state, and the URL is a representation of that state. So I separated those two out.

Instead of having the URL be updated as part of the router's state, the URL update is just another observer to that state. This allowed me to easily decouple the web-specific details, that is using the History API to persist the route, from the state itself which the rest of the app can use.
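
Stripped down, the idea looks something like this; the names are illustrative:

// The router is plain state; the URL is just one observer of it.
interface Route {
  path: string
  params: Record<string, string>
}

let currentRoute: Route = { path: '/', params: {} }
const observers: Array<(route: Route) => void> = []

function navigate(route: Route) {
  currentRoute = route
  observers.forEach((observer) => observer(route))
}

// Persisting to the URL via the History API is a web-specific detail,
// kept out of the core state.
observers.push((route) => window.history.pushState(null, '', route.path))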

Since I have a few years worth of existing code using the URL to decide what to show, this decoupling is still a work in progress. I still have a lot of hardcoded strings linking different pages and routes of the app.

Freedom from Dependencies

Writing my own router is what pushed me to finally implement multiple windows. I'm now able to have multiple routers in the state for each window which greatly simplifies the logic around creating and navigating windows.

Currently, only one router is persisted in the URL, but nothing's really stopping me from persisting all of them. User experience wise, there's still clearly a "main" window, while the others are more ephemeral, so persisting them in the URL doesn't make much sense. Persisting them in local or remote storage might be more useful.

Anyways, if there's one thing I learnt from this, it's to not let your dependencies, libraries and frameworks narrow your thinking. If one prevents you from doing something, it's time to let go of it. You'll save a lot more time than by working around it, which will only couple you to it even more.

Typesafe Routing

I implemented an API for type-safe routing to go with my router. The main goal was to remove the hard coded strings used to link pages.

// Route
"/explore/:gameId/entities/:entityId"

// Path using a hardcoded string
"/explore/astral-chain/entities/enemy-123"

// Path using a Type-safe API and strings for IDs
root().explore('astral-chain').entity('enemy-123').build()

// Path using a Type-safe API and typed objects with IDs
root().explore(game).entity(entity).build()
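
A stripped-down version of the builder might look like this; the real one covers more routes and stricter types:

// Each step narrows what can come next, so invalid paths won't compile.
interface Game { id: string }
interface Entity { id: string }

const root = () => ({
  explore: (game: string | Game) => {
    const gameId = typeof game === 'string' ? game : game.id
    return {
      build: () => `/explore/${gameId}`,
      entity: (entity: string | Entity) => {
        const entityId = typeof entity === 'string' ? entity : entity.id
        return { build: () => `/explore/${gameId}/entities/${entityId}` }
      }
    }
  }
})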

This has a number of advantages: broken links are caught at compile time rather than at runtime, editors can autocomplete paths, and when a route changes there's only one place to update.

I haven't rolled this out yet as I'm still not 100% sure how I want to structure the new router state.

Once all of this is complete, I'll have a new open source library to release.

FrontierNav Data Tables

The spreadsheet-like data tables are gradually becoming more and more feature rich. The tables now support sorting by column, importing pre-formatted CSVs, moving entities and various other quality of life improvements. No doubt it will keep getting better as I import more game data.

Hiding Behind Cloudflare

Most of my public infrastructure is now behind Cloudflare. This should reduce the amount of maintenance I need to do in regards to legacy domains (like jahed.io) and security.

I've disabled direct HTTP access to my servers entirely; HTTP connections, as well as legacy domains, now redirect using Cloudflare's Page Rules. No server needed.

While this does tie me more to Cloudflare, I am already reliant on it for reducing bandwidth costs so from an end user perspective, nothing's really changed. Moving traffic back to my server is a matter of removing SSL Client verification and adding the redirects.

Other Stuff

Thanks for reading.

Weekly Report: 7th October 2019

FrontierNav Terminology

Over the years FrontierNav's data model has changed a lot. This meant various parts of the codebase and web app used a range of words to describe the same thing. As I start introducing more terminology to the public, there's really no room for confusion. So I went through the project, migrated all the data and reduced the number of terms.

As an example, most data models have a range of terms that all mean the same thing:

FrontierNav probably used at least 4 of these terms in the same contexts but now it only uses two: Entity and Relationships.

FrontierNav Performance

It's really hard for me to tell how well FrontierNav performs on lower-end devices as I don't own one. So as a general rule, after finishing a substantial feature, I'll put in some time to improve its performance or at least look into it.

This week, I optimised the Data Tables, which use artificial viewport scrolling to avoid rendering thousands of rows at once. Any lag in rendering is easily noticeable while scrolling, even on high-end devices, so it's important to optimise it as much as possible. In the React world, that usually means caching function calls (memoising) and reducing re-renders in general. There's not much else to say about it. React's DevTools are good enough as they provide rendering times and show what triggered each render, making it clear where the cost comes from.
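
As a sketch of what that means in practice, assuming a much-simplified row component:

// memo() skips re-rendering rows whose props haven't changed, and
// useCallback() keeps the handler's identity stable between renders.
import React, { memo, useCallback } from 'react'

const Row = memo(({ id, onSelect }: { id: string; onSelect: (id: string) => void }) => (
  <tr onClick={() => onSelect(id)}>
    <td>{id}</td>
  </tr>
))

function DataTable({ ids }: { ids: string[] }) {
  const onSelect = useCallback((id: string) => {
    console.log('selected', id)
  }, [])
  return (
    <table>
      <tbody>
        {ids.map((id) => (
          <Row key={id} id={id} onSelect={onSelect} />
        ))}
      </tbody>
    </table>
  )
}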

Other Stuff

Thanks for reading.

FrontierNav Report: September 2019

Progress Report

Changes in September

I missed updates over August so I'll try filling in the blanks for this report.

Guides for Astral Chain

On 30th August, Astral Chain released for the Nintendo Switch. So I spent the first few days after its release playing it to completion. After that I started working on an interactive guide.

Implementing the in-game "File Select" menu for the Web was a lot of fun. Getting it to flex and fit various screen sizes was the most challenging part. Game consoles have the benefit of rendering at a fixed resolution; the Web doesn't really have that luxury (unless you're willing to compromise the user experience).

I don't use the vh unit much in CSS, but it was perfect for this use case to scale up the size on larger displays. Dealing with the various behaviours of nested flexboxes was probably the least enjoyable part, but that's more to do with the tedium of setting up the necessary constraints rather than issues with CSS Flexbox itself.

Interactive Guides for Astral Chain

Adding Features as Needed

While I was adding data for Astral Chain, I made the effort to use the editing features built into FrontierNav rather than writing single-use scripts. If anything I needed was missing or too tedious, I made the effort to improve that workflow so that future edits can benefit from it.

This is probably the best way to work on FrontierNav going forward, as it provides clear acceptance criteria for each feature: did it actually solve the problem I was having, and did it speed up data entry?

Entity Dropdown

Real-time Features

I can probably start adding more dynamic features into FrontierNav now. Things like real-time collaboration, user-suggested games, image uploads etc. The only thing stopping me is server costs. The current income from Patreon won't be enough to cover it. However, it's a bit of a "chicken or the egg" situation. If I don't add these features, users will have less incentive to contribute in the first place. So I'll try adding them and hopefully it gets people to help cover the costs.

The minimum cost for most of this is around $50 per month, so I've set a new goal on the Patreon page. Please consider contributing if you haven't already.

Thanks for reading.