Jahed Ahmed

Weekly Report: 2nd December 2019

FrontierNav

Data-driven Sidebars

Previously, FrontierNav's sidebars were hard-coded React components that grabbed data and displayed it in various ways. Over time, the layout of these sidebars gradually converged into a few simple components as patterns and similarities emerged across the various datasets.

I briefly mentioned the data migration I did last week. I didn't have much to say about it at the time, but it enabled me to finally take the steps to generate layouts using just the data. No coding necessary.
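To illustrate the idea, a layout can now be described as plain data and rendered by a generic component. The schema below is made up for this example; the real one differs:

// Hypothetical layout description; the actual schema is different.
const characterSidebar = {
  sections: [
    { type: 'image', field: 'portrait' },
    { type: 'fields', fields: ['name', 'role', 'location'] },
    { type: 'relationList', relation: 'quests', title: 'Related Quests' }
  ]
}

// A single generic component walks this description and renders each
// section, so adding a new game only needs data, not new React components.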

Astral Chain was the first game I migrated to this approach since it's a fairly recent addition. Pokemon Sun/Moon was next since it doesn't have a lot of data.

The two hard ones are Xenoblade 2 and Xenoblade X. There's a lot more data, but more importantly, they make use of some custom components that are difficult to standardise. For example, Xenoblade X lets you choose Probes for its Probe Simulation using the Sidebar. Both have a range of custom styling to better match their games.

Of course, all of these edge cases can wait as they're not needed for most games. This comes back to my main goal: to fully document Lufia II purely through the web client to prove that FrontierNav's data editing features are viable.

Other Changes

Browsers & Local Storage

Browsers come with ways to persist data locally using Local Storage, IndexedDB and other APIs. I've always been reluctant to use these features as most browsers tend to treat them as disposable. I can't have users making hundreds of changes stored locally and expect them to stay there. Things can go wrong.
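To show what "disposable" means in practice, here's a minimal sketch of the kind of defensive wrapper that's needed; it's illustrative, not FrontierNav's actual code:

// Storage can be missing, disabled or broken, so never assume a write stuck.
function safeSetItem (key, value) {
  try {
    window.localStorage.setItem(key, value)
    // Read it back to confirm the write actually persisted.
    return window.localStorage.getItem(key) === value
  } catch (error) {
    // Private browsing, quota errors or broken builds all end up here.
    return false
  }
}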

Firefox 71, for example, has broken local storage, at least in the Fedora build of it. FrontierNav's authentication details are stored locally to persist logins. That no longer works on that browser, so users get logged out whenever they load the page. Even LastPass doesn't let me log in, so I'm stuck using my phone to get passwords. I could roll back, but then I'd lose security updates, which are more critical. I just have to wait for the patch to be released. The situation sucks.

Thanks for reading.

FrontierNav Report: November 2019

Progress Report

Changes in November

Follow the links for more details on each change.

Next Up

I'll be using Lufia II as a way to finish off the remaining work needed to allow anyone to contribute. At the end of this, someone should be able to contribute most of the data for an entirely new game without much input from me.

Why Lufia II? It's a reasonably small game with enough variety to match modern games. Also, it provides nostalgic motivation for me.

Thanks for reading.

Weekly Report: 25th November 2019

First Merge Request for FrontierNav

I released merge requests around the start of last week and the feature already had its first use before I even properly showed it off. I wasn't expecting it and only noticed after seeing a sustained increase in events around that feature. The first request sat waiting for 3 days, which isn't great. I initially contacted the contributor by email, then by Twitter after I noticed a recent follower with the same name.

Just from this one contribution, I was able to gather a lot of data and fix a few issues. People obviously don't view things the way I do, so having others even just try things helps a lot.

The lack of communication channels directly on the website is a problem, but not an urgent one until more people start contributing. Merge requests currently don't allow comments. Introducing them shouldn't be difficult, but ideally I want to integrate them with the existing Community Forums to reduce duplication and maintenance.

Other FrontierNav Changes

Marketing vs Sharing

I personally view "marketing" as a dirty word. I know it isn't, but with commercialised tracking, privacy breaches and everything else going on in the Web, I can't help feeling that way; that sort of marketing is the default. Whatever my feelings, I need to market FrontierNav a lot more than I currently do, otherwise no one will even know it exists.

I recently watched a GDC Talk by David Wehle where he went through how he marketed his budget indie game side project. A lot of his points reminded me of when I first shared FrontierNav. At that point, I was just sharing. I didn't view it as marketing. But David deliberately went on forums like Reddit and posted there weekly in order to market his game, sharing other things in between just to stay within the self-promotion rules and avoid getting banned for spam. To me, it's a bit disingenuous, but it worked. At the end of the day, people got what they wanted, the forums got more activity, and the game was a success.

Overcoming my distaste for marketing is going to take a while, but no doubt I have to do it. So I'm going to try to dedicate up to an hour or so every week to getting FrontierNav out there. For example, I'll be sharing gaming-related news via FrontierNav's Twitter account. I've already started, since today marks 2 years since Xenoblade 2's release.

I said "sharing" again without realising. I guess "sharing" is the tactical term, where as "marketing" is the strategic term.

New Keyboard

I finally bought a new mechanical keyboard. I wrote a separate post about it.

Thanks for reading.

FrontierNav Report: October 2019

Progress Report

I wasn't planning to write a monthly report since I'm already writing weekly ones. But I realised weekly reports are a bit varied and it's nice to have a monthly update just around FrontierNav. This report only covers October. I'll be sharing what I did in November in a future report.

Changes in October

Windows in FrontierNav

Pop-out Windows

Pop-out Windows are the biggest feature this month. They're a huge convenience on desktop and save a lot of clicking around.

They are a bit limited. Windows can't be resized and their content is static. However, the groundwork has been laid for more advanced features using the "Window Manager", such as...

Sidebar Behaviour

Currently the Sidebar is tied to the URL. The Main Window, where the Maps and other visualisations are rendered, also relies on the URL.

Previously, FrontierNav only really had one context, so sharing a single state, the URL, was never an issue. But the limitations are starting to show as new features start conflicting with existing ones.

For example, on mobile viewports, the Sidebar covers the Main Window. At the same time, closing the Sidebar causes the Main Window to change too, making certain pages inaccessible. I've been working around this by essentially having permutations of state for each page: one for the Sidebar, one for the Main Window, and one for both. This is obviously a major headache to manage.

Ideally, the behaviour of the Sidebar should depend on the context. So having the contexts drive that behaviour makes the most sense. Things like "Show the Sidebar when the user selects a search result", "Show the Sidebar when the user expands a table row", and so on.

Now, with a Window Manager implemented, this sort of behaviour should be easier to implement. The Sidebar is pretty much a Window, except it's docked to the left side rather than floating freely.
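To sketch the direction (the names here are hypothetical, not actual FrontierNav code), each context could declare what it wants the Window Manager to do with the Sidebar:

// Hypothetical sketch: contexts drive the Sidebar instead of the URL.
const sidebarBehaviours = {
  searchResultSelected: (windows, entityId) =>
    windows.openDocked('left', { view: 'EntityDetails', entityId }),
  tableRowExpanded: (windows, entityId) =>
    windows.openDocked('left', { view: 'EntityDetails', entityId })
}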

I haven't released this change yet, but it's one example of what the "Window Manager" enables.

Data Tables

As always, Data Tables have been gaining features as needed. None of them are major enough to list individually. I've also added more data for Astral Chain, such as Enemy Spawns, and tidied up existing data from previous games.

There are still certain processes that I need to migrate over. Image upload is probably the most obvious one, but it carries a huge security and cost risk compared to everything else. There's also templating to properly render the data in the Sidebar, which is currently done in code.

I'm going to also have to start thinking about on-boarding processes to get others to use the data editing tools. Things like documentation, user guides, integrated merge processes and so on. There's a lot to do.

Thanks for reading.

Weekly Report: 18th November 2019

Image Uploads on FrontierNav

I wrote a separate post about this since it's a bit long.

Weird Traffic

For some reason, Cloudflare has started to report an ever-increasing number of "Unique Visitors". Currently, it stands at 4 times the usual levels. It'd be great if that were true, but I'm doubtful.

My access logs, which avoid Cloudflare's cache, say it's more or less the same as before. Cloudflare's other metrics like "Total Requests" are also the same as before. Nothing else is following this new trend. So there's no reason to believe it.

GitHub Actions

I noticed node-terraform's automated publish workflow wasn't getting triggered when new version tags were pushed. I use the exact same trigger for FrontierNav and it works fine. The only difference is that I manually push tags for FrontierNav, whereas node-terraform's tags are pushed by another workflow.

I'm kind of burnt out from debugging GitHub Actions so I'm giving it a break. It's probably an issue on their end or yet another caveat like a lot of the previous issues I had.

Google Search is Trash

I've been using DuckDuckGo as my default search engine for over a year now. Everything's been good, and having the !g command to fallback to Google has helped ease the transition to a less forgiving service.

However, I noticed something: Google is becoming worse at being a search engine as time goes on. It's full of "SEO" trash websites. The results are useless unless I basically tell it which website to search through using the site: keyword. The top half of the first page is always full of auto-generated junk too.

I don't know how long this trend will last, but I'm becoming more and more reliant on my bookmarks nowadays to find specific sites and run searches through them.

Thanks for reading.

Image Uploads on FrontierNav

One of the most common processes when adding new data is editing and uploading images and other media. Currently, I'm basically rsync-ing images directly to the server. If anyone has images to share, I need to download them and rsync manually. So it made sense to streamline this process on FrontierNav after I introduced Merge Requests last week.

Security

The main issue around handling images is the security risk. This is true for pretty much all user-generated data. Anything in the image processing pipeline can have bugs and vulnerabilities, ready to be exploited. In fact, it's pretty common to see new disclosures for these sorts of issues every now and then. Even the tools that are used to make images "safe" are vulnerable.

Given this never-ending issue, most applications split their content into separate services, isolating the potentially bad parts from the good parts. You can see this in your network logs with domains containing phrases like "usercontent".

Firebase conveniently provides asset storage where users can directly upload files with strict rules. Given these files are hosted and served through Firebase and Google Cloud Storage, it's already pretty isolated from the rest of FrontierNav.

Versioning

As FrontierNav's data is versioned, images will need to be versioned too. For example, say one version of the data pointed to "character.jpg", which had a full-body view of a character, but then "character.jpg" was replaced with a mugshot. Sure, newer versions know it's a mugshot, but previous versions are now pointing to a mugshot thinking it's a full-body view.

To solve this, all images must be named using a hash and file size. So when an image changes, it's uploaded as a new file instead of overwriting an existing one. "Versions" of different images are tracked in the data using their hashes rather than being tied to the filename.
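In Node.js terms, the naming scheme boils down to something like this; the exact hash and format here are illustrative:

// Content-addressed naming: hash plus file size.
const crypto = require('crypto')
const fs = require('fs')

function imageFileName (filePath) {
  const buffer = fs.readFileSync(filePath)
  const hash = crypto.createHash('sha256').update(buffer).digest('hex')
  const extension = filePath.split('.').pop()
  // Identical bytes always produce the same name, so re-uploads de-dupe themselves.
  return `${hash}-${buffer.length}.${extension}`
}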

Another benefit is that images are automatically de-duplicated, in a basic sense: images that are a few pixels different won't be de-duped, as they'll have different hashes.

Local Storage and Offline Support

Images uploaded to FrontierNav are not really uploaded immediately. They're only uploaded on Merge Request. This prevents rough edits from needlessly polluting storage space.

A bonus of this approach is that FrontierNav now supports loading images offline from local storage (IndexedDB, to be exact). The rest of the app isn't offline-capable, but this is a major step forward.
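The mechanics are roughly as follows; the database and store names are illustrative, not the actual ones:

// Pending images sit in IndexedDB as Blobs until a Merge Request uploads them.
function openImageStore () {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('pending-images', 1)
    request.onupgradeneeded = () => request.result.createObjectStore('images')
    request.onsuccess = () => resolve(request.result)
    request.onerror = () => reject(request.error)
  })
}

async function savePendingImage (name, blob) {
  const db = await openImageStore()
  return new Promise((resolve, reject) => {
    const transaction = db.transaction('images', 'readwrite')
    transaction.objectStore('images').put(blob, name)
    transaction.oncomplete = () => resolve()
    transaction.onerror = () => reject(transaction.error)
  })
}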

Other Media

Technically, nothing's stopping me from allowing other media like videos and audio to be uploaded using the same process. But I don't have a need for them yet. Once there is a need, I'll open that up.

Image Editing

All of the above solves one part of the problem: uploading. The other part is a bit more difficult: editing. It's fine to edit images locally, then upload them. But for basic processes like cropping and resizing, it can be a bit tedious to open, edit, save and upload individual images.

Though it is tempting to implement an Image Editor just for the fun of it, I'm going to hold myself back. Instead, I'll focus on adding more data and reducing more urgent points of friction in the data entry process.

Thanks for reading.

Weekly Report: 11th November 2019

Merge Requests on FrontierNav

After building the automation pipeline last week, I moved onto the main feature driving it: Merge Requests.

It's worth mentioning again that FrontierNav's data is database-free. The data is packaged as part of the website for various reasons. The nature of the data makes it very easy for changes to conflict, and individual changes may not be valid without the whole.

Given this, there are two ways people can contribute to FrontierNav:

  1. Providing me the data which I transform to be FrontierNav-compatible,
  2. Using FrontierNav's UI to modify data.

The current goal is to make 2 easier so that I'm not constantly spending time doing 1.

Currently, users can use the Data Tables to modify data in a basic spreadsheet-like manner and add markers on the map. However, to apply that data for others to see, users need to export the changes and send it to me manually. Then, I need to re-apply those changes locally and deploy them.

This process is slow and tedious, even when I'm the only one acting on them.

The idea behind Merge Requests is pretty simple.

  1. Users can now submit their changes directly from FrontierNav without needing to export them.
  2. An admin can review those changes within FrontierNav and approve them.
  3. An automated pipeline picks up approved changes and deploys them.

This week, I pretty much implemented this entire process. I won't be relying on the automation just yet as it's not been fully proven. Instead, I'll run the same scripts manually to make sure it's working.
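To give a rough idea of what gets submitted (the shape below is illustrative, not the actual schema), a merge request is essentially a batch of changes plus a status that the pipeline watches:

// Illustrative shape of a submitted merge request.
const mergeRequest = {
  status: 'pending', // pending -> approved -> deployed
  submittedBy: 'some-user-id',
  submittedAt: Date.now(),
  changes: [
    { op: 'set', path: ['games', 'lufia-2', 'items', 'dual-blade', 'name'], value: 'Dual Blade' }
  ]
}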

I was going to put a recording of the process here but OBS is being a bit glitchy at the moment.

Firebase Addiction

As much as I hate tying FrontierNav's features to Firebase's Real-Time Database, I admit it's a massive convenience and hard to avoid.

I may have made it even harder to avoid when I wrote firebase-rules, which has allowed me to re-use chunks of logic that enable things like rate-limiting, readable conditional statements and manual indexing.

The only thing really stopping me is the usage limits, but even at that point, it might be easier to pay up than move to something else.

So why do I hate it? Because Firebase is extremely opaque. It doesn't provide much in the way of details. I don't blame it for that; it's the selling point and it's why I use it. But when the time comes where I outgrow it or lose access to it (knowing Google), I need to be ready.

Browser Automation

A while back, I decided that all features driving FrontierNav should use the web client. This was to avoid writing one-off scripts and piling on technical debt. If a feature is available on the web client, it's technically available to everyone, including myself when I'm not on a workstation.

So when I implemented Merge Requests, I needed a way to automate the deployment process as though a human could do it. This way, if the automation is no longer available, I could easily do it myself.

Initially, I used Nightwatch to run these automations. Nightwatch is what I use for integration testing, so it already supports browser automation. However, it's not a good fit for general automation. Nightwatch is focused specifically on writing tests, and steering away from that is difficult within its test-oriented framework.

So I moved over to WebdriverIO which recently split its test runner from its automation. Perfect. I had to figure out a few things that Nightwatch provides out of the box, but it wasn't a big deal. And WebdriverIO's documentation is so much better.
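For reference, standalone automation with WebdriverIO looks roughly like this; the URL and selectors are placeholders:

// Rough sketch of WebdriverIO's standalone mode, outside any test runner.
const { remote } = require('webdriverio')

async function approveFirstMergeRequest () {
  const browser = await remote({
    logLevel: 'error',
    capabilities: { browserName: 'firefox' }
  })
  try {
    await browser.url('https://example.com/merge-requests')
    const approveButton = await browser.$('.merge-request .approve')
    await approveButton.click()
  } finally {
    // Always clean up the session, even if a step fails.
    await browser.deleteSession()
  }
}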

I'm now planning to move over entirely to WebdriverIO in the future, with my own wrapper to avoid being locked into a framework. Nightwatch's activity has been a bit on-and-off recently with a focus on selling their testing solution, and the documentation isn't very good.

Thanks for reading.

Weekly Report: 4th November 2019

Automating FrontierNav

In last week's update, I briefly mentioned automating FrontierNav's deployments. Well, this week I went ahead with it. Why? The main reason is that I'm working towards moving FrontierNav's workflow completely off my workstation. If I want others to contribute, even myself, I don't really want access to my personal computer to be a requirement.

Eventually, everything should be available from FrontierNav's web client. That's the broader goal. Currently, I'm focusing on data entry and I'll have more to share on that next week.

Output of the Deployment Pipeline

GitHub Actions

GitHub Actions is GitHub's new automation offering. It's currently in preview, though it's due to be released this month. It's very much a minimal viable product, but their strategy has worked. For me, at least.

I was originally planning to move to GitLab and use their automation service, but I didn't realise that they don't provide runners. I'm expected to provision those myself. CircleCI is another offering that does provide runners, but having to share a pipeline between multiple third parties was a turn-off.

So GitHub Actions was the obvious solution. Both my code and pipelines in one convenient place. It wasn't easy. There were a few issues, especially related to caching between builds. But it's done.

Testing Babel

Babel 7.7 was released this week and I upgraded FrontierNav to use it. This time round, instead of upgrading and hoping the integration tests pick up any broken changes, I decided to test Babel separately using snapshots.

I don't know why I didn't do this to begin with. With snapshots, I'm able to test each transformation Babel performs, making it clear why Babel is configured the way it is. It also catches any regressions and potential errors introduced by Babel without having to dig through layers of Webpack transformations.
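The tests themselves are nothing fancy; something along these lines, with the input sample and config path being illustrative:

// Snapshot test for a single Babel transformation using Jest.
const { transformSync } = require('@babel/core')

test('arrow functions are transformed for older targets', () => {
  const input = 'const getName = (user) => user.name;'
  // configFile points at the project's Babel config; adjust to wherever it lives.
  const output = transformSync(input, { configFile: './babel.config.js' }).code
  // Any change in Babel's output shows up as a snapshot diff on upgrade.
  expect(output).toMatchSnapshot()
})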

Considering Ansible

My server has gotten to a point now where I have a good reason to use Ansible. I used Puppet extensively before, but it's extremely heavy for a handful of servers and Ansible seems a lot simpler.

I would much rather move to a "serverless" solution like AWS S3, but my current server plan is a lot cheaper and more predictable. So I'm in no rush.

One thing I did notice about these provisioning tools is that they really make it hard to find their documentation. I don't blame them. Paying for support is probably how they can afford to create these tools in the first place. However, it was a constant annoyance during my research phase.

More Games

I've added more games to FrontierNav. Right now that's kind of pointless, but it keeps me motivated. One of the reasons I'm creating FrontierNav is because I've always wanted to make a website for the games I enjoy. A sort of tribute.

In the late 90s and 00s, when I was gradually exposed to the World Wide Web, I always came across fan sites and I wanted to make my own. It's a shame this trend has slowed down with the introduction of the popular web (popweb). It's one of the reasons I started my "World Wide Wanderer" series, though I haven't updated it much.

Added Games
New games are added, though they don't have much data.

Fan Sites

Among the games I added were Lufia and Lufia II. Though I was too young to fully appreciate them at the time, they left a massive impression on me, especially the soundtrack. The fan sites surrounding them have of course gone quiet as the series died off; some have been lost forever.

GameFAQs seems to be the best place to find communities for these older games, which is somewhat boring since the site is very uniform and not much of a tribute to the games it hosts. This is a common trend in popweb and one I want FrontierNav to avoid going down. This is why I introduced more games: so that the platform has the features to present them properly rather than retro-fitting them later. While theming isn't a priority now, it will be once there's data to warrant it.

Preservation

It's nice that GameFAQs is keeping an historical catalog of guides and walkthroughs written by fans. However, these documents would be better placed with non-profits like the Internet Archive, outside of commercial interests and the risks that come with them.

Thanks for reading.

Weekly Report: 21st October 2019

Improving FrontierNav's Tests

This week was mostly about improving FrontierNav's tests, which I wrote about separately since it's quite long. You can read about it here.

Sandboxing

I started using Flatpak (via Flathub) for most of the applications I use. I don't know why sandboxing hasn't become the default by now but it's nice to see some progress on it.

Deno also comes to mind as a replacement for Node.js/NPM. It doesn't sandbox, but at least it restricts access. Though, I'm not entirely sure if its reliance on grabbing remote dependencies via HTTPS is safe without lockfiles and hashes. I took a quick look just now and it looks like it's being figured out.

Thanks for reading.

Improving FrontierNav's Tests

Previously, FrontierNav's tests ran against the local development build. This meant I only had to maintain two environments: development and production. However, the two builds aren't the same; production applies optimisations that development doesn't.

These differences meant that in some cases, the optimisations applied to production would cause problems, and since I wasn't testing against production, the only way to know was to test it manually or check the error logs. Sometimes I'd pick up the problem immediately; other times, it'd take days to discover.

The solution is obvious: test the production build. This week, I finally got around to doing it.

Production but not really

The first issue is that I don't really want to test in production. I want to test the build, but I don't want to affect users. Users should not see a "Test User" going around the app, creating and deleting things to make sure it all works. So I created a new "environment", typically called "staging", to differentiate it from production; it points to different databases and services.

A New Environment

The concept of an environment is extremely bloated. Node.js has its own "NODE_ENV", which is handled by most libraries and frameworks as a boolean: it's either "production" or not. If you want a production-optimised build, "NODE_ENV=production" is needed, so I'll need to keep it around. Setting it to "staging" would be no different from setting it to "development". I need it to be "production" for "staging".

This new FrontierNav-specific environment, "FN_ENV", can be set directly or it'll fall back to whatever NODE_ENV is. In short, NODE_ENV defines the build, whereas FN_ENV defines the deployment.

# FN_ENV will fallback to "development"
NODE_ENV=development webpack

# Explicitly set FN_ENV to "staging"
NODE_ENV=production FN_ENV=staging webpack

# FN_ENV will fallback to "production"
NODE_ENV=production webpack

I could have added a step to the execution that only takes an FN_ENV and calculates the NODE_ENV, but since these flags are only set in a few places, adding and maintaining that additional layer is excessive right now.
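For what it's worth, the wiring is only a few lines in the Webpack config; the snippet below is a sketch rather than the exact config:

// FN_ENV falls back to NODE_ENV and is baked into the bundle at build time.
const webpack = require('webpack')

const FN_ENV = process.env.FN_ENV || process.env.NODE_ENV || 'development'

module.exports = {
  mode: process.env.NODE_ENV === 'production' ? 'production' : 'development',
  plugins: [
    new webpack.DefinePlugin({
      'process.env.FN_ENV': JSON.stringify(FN_ENV)
    })
  ]
}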

Separating Configuration

The next step was to pull out all of the variables -- things like the database, API keys and image servers -- into multiple configuration files, one for each environment.
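Each environment ends up with its own module, picked at build time; the file names and layout here are illustrative:

// e.g. ./config/development.js, ./config/staging.js, ./config/production.js
const FN_ENV = process.env.FN_ENV || process.env.NODE_ENV || 'development'

// The selected configuration is bundled into the build for that environment.
module.exports = require(`./config/${FN_ENV}.js`)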

I could have the configuration be fetched at runtime. However, while this would allow a single "production" build to be used for both "staging" and "production" deployments, it also means I'd need a system to deploy and roll back the configuration separately, adding to the complexity.

A solution could be a two-step build: one for the common code, and another for each environment which passes a configuration to the first build. I looked into this a bit, but getting it to work with all the optimisations, code splitting, etc. seemed too complicated. It's easier to manage one build per environment.

The downside of building for each environment is that it multiplies the time spent waiting for builds, which currently take just under 2 minutes each. (I'll go into this in a bit.)

There's also no guarantee that the two builds are identical; there might be bugs in the minifier or some other step that cause a behavioural difference. But these are rare cases that can exist throughout the software pipeline, so it's not worth acting on them until they actually cause problems.

Staging Deployment

Finally, I had to make a deployment as close to production as possible. This meant adding a new domain and another HTTP server configuration. Previously, I was testing the project locally, so none of the server configuration (Nginx and Cloudflare) was tested. Now it is!

Optimising Build Times

To reduce Webpack's build times for "staging" and "production", I first looked into caching. Since both of these environments are essentially the same, minus a few variables, caching the results seemed like an obvious start. Caching does, however, have some issues.

Cache invalidation is always a problem. The cache identifier needs to be accurate to avoid using stale caches between builds. Managing that identifier is really complicated. For example, for Babel, I need to know the current version of Babel, all of the plugin versions, the configuration and the browser targets (which are separate from the configuration as they're used by other tools). If anything new comes into play, I need to remember to add it to the list. Maintaining all of that would be a headache, and a huge risk if, say, "production" variables go into "staging" and the tests end up polluting the entire database.
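To make that concrete, an accurate cache identifier has to include roughly everything below; the file names are illustrative:

// Anything that affects Babel's output must be part of the cache identifier.
const { version: babelVersion } = require('@babel/core')
const babelConfig = require('./babel.config.js')
const browserTargets = require('fs').readFileSync('./.browserslistrc', 'utf8')

// Plugin versions would need to be tracked here too.
const cacheIdentifier = JSON.stringify({
  babelVersion,
  babelConfig,
  browserTargets
})

// babel-loader accepts this alongside cacheDirectory: true.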

Second, looking up the cache can take more time than rebuilding a file. A lot of modules in JavaScript are tiny, so for all those tiny files, the overhead of doing a file system lookup can increase rather than decrease the build time for that file.

Overall, I did see a ~10% reduction in build time. However, that was the difference between 110 seconds and 100 seconds, still over a minute and in real terms, it won't affect my workflow.

Concurrency was another option. By building each file in parallel, I saved around 20 seconds. But again, not really noticeable in real terms.

Real Terms

I mentioned "real terms" when it comes to build times. For me, this asks the question: Does it make me more efficient?

If a release takes more than maybe 30 seconds, I'm going to be doing other things: taking a break, planning the next step, reading up on something, etc. When I'm at that step, I probably won't get back to development for around 5 minutes at the least. So having a release take 60 seconds instead of 300 seconds makes no difference to me. It's still more than 30 seconds and less than 5 minutes. Adding more complexity without any real benefit doesn't make sense.

Automated Deployments

There's no doubt that as I introduce more test cases, a release will start taking well beyond 5 minutes. At that point, I'll need to change my workflow. Instead of finding something else to do while a release goes through, I should be able to start on the next thing and let the release alert me when it's done.

To do that, I'll need a continuous integration server, otherwise my open changes would conflict with the release. But maintaining a CI server and introducing a new workflow is a lot of work, so I'll cross that bridge when I get there.

Thanks for reading.