Jahed Ahmed

Weekly Report: 16th December 2019

I didn't publish a weekly report last week since it was a holiday week, so this one covers the two weeks: the 16th and the 23rd.

All of the updates are related to FrontierNav. I haven't really blogged much about specific topics for the last few months. Maybe I should.

FrontierNav

Improved Map Visuals

Maps are now visually consistent across games. This was needed as FrontierNav moves towards using the same maps for all future games.

Preview of Improved Map Visuals
Showcasing the improved visuals.

Introduced Map Feature Toggles

Some map features are hidden by default to reduce performance issues when there are hundreds of features on the screen; they only appear when they're relevant to your search. However, this made manual discovery difficult as users didn't know what sorts of information were available. So now you can unhide features yourself.

In the future, maps can hide and show features based on how zoomed in the user is. However, since there's no culling for features outside the viewport, it will still have a performance impact. Which leads onto...
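
Viewport culling itself is conceptually simple: skip anything whose bounding box doesn't intersect the visible area. A minimal sketch of the idea (the types and names here are my illustration, not FrontierNav's actual code):

```typescript
type Box = { x: number; y: number; width: number; height: number };

// Axis-aligned bounding box intersection test.
function intersects(a: Box, b: Box): boolean {
  return (
    a.x < b.x + b.width &&
    a.x + a.width > b.x &&
    a.y < b.y + b.height &&
    a.y + a.height > b.y
  );
}

// Only keep features whose bounds overlap the current viewport,
// so off-screen features never reach the renderer.
function cullFeatures<T extends { bounds: Box }>(features: T[], viewport: Box): T[] {
  return features.filter((feature) => intersects(feature.bounds, viewport));
}
```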

Preview of Map Feature Toggles
Showcasing feature toggles.

Performance Issues

The new map visuals use CSS filters extensively. This seems fine for Chromium-based browsers, but Firefox struggles with it. It pretty much freezes on mobile.

I'm considering moving to a canvas-based mapping library like OpenLayers as it's clear the DOM can't handle so many elements.

Improved Map Feature Discovery

Previously, map features were discovered and selected through hard-coded queries for each game. Searching for a Collectible in Xenoblade 2 would trigger a specific query to find the related Collection Points. This meant if a new type of entity was added, the maps wouldn't know how to find features relevant to it.

Now, instead of a custom query for every type of entity, FrontierNav discovers these relationships by itself. So when you search for a Collectible, FrontierNav will look for the various ways it's connected to a map location and show you those.
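
In graph terms, this amounts to walking an entity's relationships until something map-related is reached, rather than hand-coding each path. A rough sketch of that kind of discovery — a breadth-first walk over a generic relationship graph, with entirely illustrative names rather than FrontierNav's schema:

```typescript
type Entity = { id: string; type: string };
type Relationship = { from: string; to: string };

// Breadth-first search from a starting entity to any entity of the
// target type (e.g. "MapLocation"), following relationships in both
// directions so no hard-coded query order is needed.
function findRelated(
  startId: string,
  targetType: string,
  entities: Map<string, Entity>,
  relationships: Relationship[]
): Entity[] {
  const visited = new Set<string>([startId]);
  const queue = [startId];
  const found: Entity[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    for (const rel of relationships) {
      const nextId = rel.from === id ? rel.to : rel.to === id ? rel.from : null;
      if (nextId === null || visited.has(nextId)) continue;
      visited.add(nextId);
      const next = entities.get(nextId);
      if (!next) continue;
      if (next.type === targetType) {
        found.push(next); // stop walking past a match
      } else {
        queue.push(nextId);
      }
    }
  }
  return found;
}
```

So a Collectible connected to a Collection Point connected to a Map Location would surface the location with no Collectible-specific query.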

Improved Map Discovery

In a similar manner to feature discovery, map lists were also hard-coded. Maps had to be under a specific Region, Area, Map hierarchy. This structure came from Xenoblade 2's in-game map, and even there it often didn't make sense. The idea of Regions, Areas and Locations is vague and far from strict; they're context-sensitive depending on your viewpoint.

This structure also doesn't work for general use cases such as in Astral Chain, where locations are re-used but have different features based on the current mission.

So instead of a strict hierarchy, Maps are now discoverable based on what they represent and what their features represent, using the same generic capability as map feature discovery.

Preview of Map Discovery
Finding Secret Areas in Xenoblade 2, and filtering by Region to further refine the search.

Revamped Sidebars

I always had a nagging feeling whenever I opened the sidebar. The font sizes never looked correct and the alignments seemed off. So I went ahead and improved it.

Preview of Revamped Sidebar
Headings are more muted to give focus on data.

Migrations Mostly Complete

The only game left to migrate now is Xenoblade X. As I mentioned in the previous weekly, Xenoblade X is the oldest game in FrontierNav and has a lot of custom features specific to it. Moving that over to a generic featureset is going to take some time.

If a new release of Xenoblade X is announced for the Switch, I'll have more incentive to move it over, but as it is right now, there's not much reason to put the effort in.

Other Changes

Thanks for reading.

Weekly Report: 9th December 2019

FrontierNav

Since last week's migration to data-driven features, FrontierNav is starting to feel a lot more powerful. I've spent most of this week ensuring more of the in-code features are covered by data-driven features. It's been a week of deleting lots and lots of code.

Last week I mentioned focusing on Lufia II's data entry, but I couldn't fight off the temptation of removing so much technical debt; I was constantly hitting it whenever I searched the codebase. So instead of putting up with it, I went ahead and migrated the last 2 games: Xenoblade 2 and Xenoblade X. Xenoblade X is still a work in progress since it's the oldest and has the most edge cases.

Formula Properties Never?

One of the most obvious feature requests for the data tables is formulas. Since the tables are kind of like spreadsheets, having a way to use formulas seems inevitable. Whenever I need a new feature, the idea of using formulas always comes up, but I always look for an alternative.

Formulas are too general-purpose. They're code. And for a data-driven approach, introducing code will exponentially increase complexity when it comes to data retrieval, processing and migration.

Time will tell however as there may come a point where existing features become simpler when migrated to formulas.

Reaping the Rewards

While I was migrating Xenoblade 2, everything just started falling into place automatically. That's the great thing about having a data-driven approach to problems. Add an icon in one place, and everything connected to it will use the same icon. Change a relationship to go elsewhere and everything else follows it immediately. It really does "just work". I haven't hit a data-related bug for the entire week.

Other Changes

Thanks for reading.

Weekly Report: 2nd December 2019

FrontierNav

Data-driven Sidebars

Previously FrontierNav's sidebars were hard-coded React components that grabbed data and displayed them in various ways. Over time, the layout of these sidebars gradually converged into some simple components as patterns and similarities emerged across the various datasets.

I briefly mentioned the data migration I did last week. I didn't have much to say about it at the time, but it enabled me to finally take the steps to generate layouts using just the data. No coding necessary.

Astral Chain was the first game I migrated to this approach since it's a fairly recent addition. Pokemon Sun/Moon was next since there wasn't a lot of data.

The two hard ones are Xenoblade 2 and Xenoblade X. There's a lot more data, but more importantly, they make use of some custom components that are difficult to standardise. For example, Xenoblade X lets you choose Probes for its Probe Simulation using the Sidebar. Both have a range of custom styling to better match their games.

Of course, all of these edge cases can wait as they're not needed for most games. This comes back to my main goal: to fully document Lufia II purely through the web client to prove that FrontierNav's data editing features are viable.

Other Changes

Browsers & Local Storage

Browsers come with ways to persist data locally using Local Storage, IndexedDB and other APIs. I've always been reluctant to use these features as most browsers tend to treat the data as disposable. I can't have users making hundreds of changes stored locally and expect them to stay there. Things can go wrong.

Firefox 71, for example, has broken local storage, at least in the Fedora build of it. FrontierNav's authentication details are stored locally to persist logins. That doesn't work now on that browser, so users get logged out whenever they load the page. Even LastPass doesn't let me log in, so I'm stuck using my phone to get passwords. I could roll back, but then I'd lose security updates, which are more critical. I just have to wait for the patch to be released. The situation sucks.
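
One way to soften this class of failure is to treat local storage as best-effort: wrap every access in a try/catch and degrade to in-memory storage so the app keeps working for the session instead of breaking outright. A hedged sketch of the pattern (my own illustration, not FrontierNav's code):

```typescript
type StorageLike = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

// Wraps a Storage-like backend so any thrown error (quota exceeded,
// broken build, privacy mode) silently falls back to an in-memory map.
function safeStorage(backing: StorageLike): StorageLike {
  const fallback = new Map<string, string>();
  return {
    getItem(key) {
      try {
        return backing.getItem(key);
      } catch {
        return fallback.get(key) ?? null;
      }
    },
    setItem(key, value) {
      try {
        backing.setItem(key, value);
      } catch {
        fallback.set(key, value);
      }
    },
  };
}
```

In the browser this would wrap `window.localStorage`; the session still loses data on reload, but at least the page loads.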

Thanks for reading.

Weekly Report: 25th November 2019

First Merge Request for FrontierNav

I released merge requests around the start of last week and the feature already had its first use before I even properly showed it off. I wasn't expecting it and only noticed after seeing a sustained increase in events around that feature. The first request was left waiting for 3 days, which isn't great. I initially contacted the contributor by email, then by Twitter after I noticed a recent follower with the same name.

Just by having this one contribution, I was able to gather a lot of data and fix a few issues. People obviously don't view things like I do, so having others even just try things helps a lot.

The lack of communication channels directly on the website is a problem but not an urgent one until more people start contributing. Merge requests currently don't allow comments. Introducing them shouldn't be difficult but ideally, I want to integrate it with the existing Community Forums to reduce duplication and maintenance.

Other FrontierNav Changes

Marketing vs Sharing

I personally view "marketing" as a dirty word. I know it isn't, but with commercialised tracking, privacy breaches and everything else going on across the Web, I can't help feeling that way. It is the default. Whatever my feelings, I need to market FrontierNav a lot more than I currently do, otherwise no one will even know it exists.

I recently watched a GDC Talk by David Wehle where he went through how he marketed his budget indie game side project. A lot of his points reminded me of when I first shared FrontierNav. At that point, I was just sharing; I didn't view it as marketing. But David deliberately went on forums like Reddit and posted there weekly in order to market his game, sharing other things just to avoid the Self-Promotion Rules and getting banned for spam. To me, it's a bit disingenuous, but it worked. At the end of the day, people got what they wanted, the forums got more activity, and the game was a success.

Overcoming my distaste for marketing is going to take a while, but no doubt I have to do it. So I'm going to try to dedicate up to an hour or so every week to getting FrontierNav out there. For example, I'll be sharing gaming-related news via FrontierNav's Twitter account. I already started, since today marks 2 years since Xenoblade 2's release.

I said "sharing" again without realising. I guess "sharing" is the tactical term, whereas "marketing" is the strategic term.

New Keyboard

I finally bought a new mechanical keyboard. I wrote a separate post on it.

Thanks for reading.

Weekly Report: 18th November 2019

Image Uploads on FrontierNav

I wrote a separate post about this since it's a bit long.

Weird Traffic

For some reason, Cloudflare has started to report an ever-increasing number of "Unique Visitors". Currently, it stands at 4 times the usual levels. It'd be great if that was true but I'm doubtful.

My access logs, which avoid Cloudflare's cache, say it's more or less the same as before. Cloudflare's other metrics like "Total Requests" are also the same as before. Nothing else is following this new trend. So there's no reason to believe it.

GitHub Actions

I noticed node-terraform's automated publish workflow wasn't being triggered when new version tags were pushed. I use the exact same trigger for FrontierNav and it works fine. The only difference is that I manually push tags for FrontierNav, whereas node-terraform's are pushed by another workflow.

I'm kind of burnt out from debugging GitHub Actions so I'm giving it a break. It's probably an issue on their end or yet another caveat like a lot of the previous issues I had.

Google Search is Trash

I've been using DuckDuckGo as my default search engine for over a year now. Everything's been good, and having the !g command to fallback to Google has helped ease the transition to a less forgiving service.

However, I noticed something: Google is becoming worse at being a search engine as time goes on. It's full of "SEO" trash websites. The results are useless without basically telling it which website to search through using the site: keyword. The top half of the first page is always full of auto-generated junk too.

I don't know how long this trend will last, but I'm becoming more and more reliant on my bookmarks nowadays to find specific sites and run searches through them.

Thanks for reading.

Weekly Report: 11th November 2019

Merge Requests on FrontierNav

After building the automation pipeline last week, I moved onto the main feature driving it: Merge Requests.

It's worth mentioning again that FrontierNav's data is database-free. The data is packaged as part of the website for various reasons. The dynamic nature of it makes it very easy for changes to conflict, and individual changes may not be valid in isolation.

Given this, there are two ways people can contribute to FrontierNav:

  1. Providing me the data which I transform to be FrontierNav-compatible,
  2. Using FrontierNav's UI to modify data.

The current goal is to make 2 easier so that I'm not constantly spending time doing 1.

Currently, users can use the Data Tables to modify data in a basic spreadsheet-like manner and add markers on the map. However, to apply that data for others to see, users need to export the changes and send it to me manually. Then, I need to re-apply those changes locally and deploy them.

This process is slow and tedious, even when I'm the only one acting on them.

The idea behind Merge Requests is pretty simple.

  1. Users can now submit their changes directly from FrontierNav without needing to export them.
  2. An admin can review those changes within FrontierNav and approve them.
  3. An automated pipeline picks up approved changes and deploys them.
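
That review flow is effectively a small state machine. A sketch of the allowed transitions (the status names here are my own, not necessarily FrontierNav's):

```typescript
type Status = "submitted" | "approved" | "rejected" | "deployed";

// Allowed transitions: a submitted request is reviewed, and only
// approved requests can be picked up by the deployment pipeline.
const transitions: Record<Status, Status[]> = {
  submitted: ["approved", "rejected"],
  approved: ["deployed"],
  rejected: [],
  deployed: [],
};

function advance(current: Status, next: Status): Status {
  if (!transitions[current].includes(next)) {
    throw new Error(`Cannot move a ${current} merge request to ${next}`);
  }
  return next;
}
```

Keeping the transitions explicit means the automated pipeline can only ever act on changes a human has approved.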

This week, I pretty much implemented this entire process. I won't be relying on the automation just yet as it's not been fully proven. Instead, I'll run the same scripts manually to make sure it's working.

I was going to put a recording of the process here but OBS is being a bit glitchy at the moment.

Firebase Addiction

As much as I hate tying FrontierNav's features to Firebase's Real-Time Database, I admit it's a massive convenience and hard to avoid.

I may have made it even harder to avoid when I wrote firebase-rules, which has allowed me to re-use chunks of logic that enables things like rate-limiting, readable conditional statements and manual indexing.

The only thing really stopping me is the usage limits, but even at that point, it might be easier to pay up than move to something else.

So why do I hate it? Because Firebase is extremely opaque. It doesn't provide much in the way of details, which I don't blame it for; that's its selling point and that's why I use it. But when the time comes that I outgrow it or lose access to it (knowing Google), I need to be ready.

Browser Automation

A while back, I decided that all features driving FrontierNav should use the web client. This was to avoid writing one-off scripts and piling on technical debt. If a feature is available on the web client, it's technically available to everyone, including myself when I'm not on a workstation.

So when I implemented Merge Requests, I needed a way to automate the deployment process as though a human could do it. This way, if the automation is no longer available, I could easily do it myself.

Initially, I used Nightwatch to run these automations. Nightwatch is what I use for integration testing, so it already supports browser automation. However, it's not a good fit for general automation: Nightwatch is focused specifically on writing tests, and steering away from that is difficult within its test-oriented framework.

So I moved over to WebdriverIO which recently split its test runner from its automation. Perfect. I had to figure out a few things that Nightwatch provides out of the box, but it wasn't a big deal. And WebdriverIO's documentation is so much better.

I'm now planning to move over entirely to WebdriverIO in the future, with my own wrapper to avoid being locked into a framework. Nightwatch's activity has been a bit on-and-off recently with a focus on selling their testing solution, and the documentation isn't very good.

Thanks for reading.

Weekly Report: 4th November 2019

Automating FrontierNav

In last week's update, I briefly mentioned automating FrontierNav's deployments. Well, this week I went ahead with it. Why? The main reason is that I'm working towards moving FrontierNav's workflow completely off my workstation. If I want others to contribute, even myself, I don't really want access to my personal computer to be a requirement.

Eventually, everything should be available from FrontierNav's web client. That's the broader goal. Currently, I'm focusing on data entry and I'll have more to share on that next week.

Output of the Deployment Pipeline
Output of the Deployment Pipeline

GitHub Actions

GitHub Actions is GitHub's new automation offering. It's currently in preview, though it's due to be released this month. It's very much a minimum viable product, but their strategy has worked. For me, at least.

I was originally planning to move to GitLab and use their automation service, but I didn't realise that they don't provide runners; I'm expected to provision those myself. CircleCI is another offering that does provide runners, but having to share a pipeline between multiple third parties was a turn-off.

So GitHub Actions was the obvious solution. Both my code and pipelines in one convenient place. It wasn't easy. There were a few issues, especially related to caching between builds. But it's done.

Testing Babel

Babel 7.7 was released this week and I upgraded FrontierNav to use it. This time round, instead of upgrading and hoping the integration tests pick up any broken changes, I decided to test Babel separately using snapshots.

I don't know why I didn't do this to begin with. With snapshots, I'm able to test each transformation Babel performs, making it clear why Babel is configured the way it is. It also catches any regressions and potential errors introduced by Babel without having to dig through layers of Webpack transformations.
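
The snapshot approach boils down to: run the transform, compare its output against a stored copy, and record the output on first run. A minimal sketch of that mechanism, independent of Babel or any test framework (my own illustration of the pattern):

```typescript
import * as fs from "fs";
import * as path from "path";

// Compare `actual` against a stored snapshot file. On first run the
// snapshot is recorded and the check passes; afterwards, any change
// in output is flagged as a potential regression.
function matchSnapshot(name: string, actual: string, dir: string): boolean {
  const file = path.join(dir, `${name}.snap`);
  if (!fs.existsSync(file)) {
    fs.mkdirSync(dir, { recursive: true });
    fs.writeFileSync(file, actual);
    return true;
  }
  return fs.readFileSync(file, "utf8") === actual;
}
```

With Babel, `actual` would be the transformed output of a small fixture file, one per transformation, so an upgrade that changes any output fails loudly instead of surfacing deep inside a Webpack build.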

Considering Ansible

My server has gotten to a point now where I have a good reason to use Ansible. I used Puppet extensively before, but it's extremely heavy for a handful of servers and Ansible seems a lot simpler.

I would much rather move to a "serverless" solution like AWS S3, but my current server plan is a lot cheaper and more predictable. So I'm in no rush.

One thing I did notice about these provisioning tools is that they really make it hard to find their documentation. I don't blame them; paying for support is probably how they can afford to create these tools in the first place. However, it was a constant annoyance during my research phase.

More Games

I've added more games to FrontierNav. Right now that's kind of pointless, but it keeps me motivated. One of the reasons I'm creating FrontierNav is because I've always wanted to make a website for the games I enjoy. A sort of tribute.

In the late 90s and 00s, as I was gradually exposed to the World Wide Web, I kept coming across fan sites and wanted to make my own. It's a shame this trend has slowed down with the introduction of the popular web (popweb). It's one of the reasons I started my "World Wide Wanderer" series, though I haven't updated it much.

Added Games
New games are added, though they don't have much data.

Fan Sites

Among the games I added were Lufia and Lufia II. Though I was too young to fully appreciate them at the time, they left a massive impression on me, especially the soundtrack. The fan sites surrounding them have of course gone quiet as the series died off; some have been lost forever.

GameFAQs seems to be the best place to find communities for these older games, which is somewhat boring since the site is very uniform and not much of a tribute to the games it hosts. This is a common trend in popweb and one I want FrontierNav to avoid going down. This is why I introduced more games: so that the platform has the features to present them properly rather than retro-fitting them later. While theming isn't a priority now, it will be once there's data to warrant it.

Preservation

It's nice that GameFAQs is keeping an historical catalog of guides and walkthroughs written by fans. However, these documents would be better placed in non-profits like Internet Archive; outside of commercial interests and the risks that come with it.

Thanks for reading.

Weekly Report: 21st October 2019

Improving FrontierNav's Tests

This week was mostly about improving FrontierNav's tests, which I wrote about separately since it's quite long. You can read about it here.

Sandboxing

I started using Flatpak (via Flathub) for most of the applications I use. I don't know why sandboxing hasn't become the default by now but it's nice to see some progress on it.

Deno also comes to mind as a replacement for Node.js/NPM. It doesn't sandbox, but at least it restricts access. Though, I'm not entirely sure if its reliance on grabbing remote dependencies via HTTPS is safe without lockfiles and hashes. I took a quick look just now and it looks like it's being figured out.

Thanks for reading.

Weekly Report: 14th October 2019

FrontierNav Windows

FrontierNav now supports windows. Not Windows, that's already supported. That is, you can now open up links in pop-out windows to keep them persistent between page navigations. This avoids needing to constantly click around and go back and forth between pages.

I'm still working out the user experience on smaller screens, but it's not a priority given mobile apps tend to be geared towards a single window experience.

The worst part about this feature is getting the naming right. "Window" is such a generic term and it's used everywhere. I just settled with "AppWindow", though that doesn't help when naming variables. Calling everything appWindow so that it doesn't clash with the window global seems a bit too verbose.

Windows in FrontierNav

Writing my own Router

I finally decided to replace the deprecated redux-little-router dependency I had. I knew it was a relatively simple replacement so I kept putting it off. I even forked it to fix a few issues as it started to go through bit rot. When I did sit down to replace it, I realised a few things.

The idea behind a router for web apps does not need to be tied to the limitations of a URL. A router is really just state and the URL is a representation of that state. So I separated those two out.

Instead of having the URL be updated as part of the router's state, the URL update is just another observer to that state. This allowed me to easily decouple the web-specific details, that is using the History API to persist the route, from the state itself which the rest of the app can use.
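
Condensed down, the router becomes a plain observable store, and syncing to the URL is just one subscriber among many. A sketch of that separation (hypothetical names, not the actual implementation):

```typescript
type Route = { path: string };
type Listener = (route: Route) => void;

// The router is just observable state. The URL is an observer of
// that state, not the source of truth.
function createRouter(initial: Route) {
  let current = initial;
  const listeners: Listener[] = [];
  return {
    get: () => current,
    navigate(route: Route) {
      current = route;
      listeners.forEach((listener) => listener(current));
    },
    subscribe(listener: Listener) {
      listeners.push(listener);
    },
  };
}

// The web-specific detail, kept outside the router: persist the
// route via a History-like API (window.history in the browser).
function syncToHistory(
  router: ReturnType<typeof createRouter>,
  history: { pushState(state: unknown, title: string, url: string): void }
) {
  router.subscribe((route) => history.pushState(null, "", route.path));
}
```

Because the History binding is just an observer, a second window's router can simply skip it, or persist elsewhere.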

Since I have a few years worth of existing code using the URL to decide what to show, this decoupling is still a work in progress. I still have a lot of hardcoded strings linking different pages and routes of the app.

Freedom from Dependencies

Writing my own router is what pushed me to finally implement multiple windows. I'm now able to have multiple routers in the state for each window which greatly simplifies the logic around creating and navigating windows.

Currently, only one router is persisted in the URL, but nothing's really stopping me from persisting all of them. User experience wise, there's still clearly a "main" window, while the others are more ephemeral, so persisting them in the URL doesn't make much sense. Persisting them in local or remote storage might be more useful.

Anyways, if there's one thing I learnt from this, it's to not let your dependencies, libraries and frameworks narrow your thinking. If one prevents you from doing something, it's time to let go of it. You'll save a lot more time than by working around it, which will only couple you to it even more.

Typesafe Routing

I implemented an API for type-safe routing to go with my router. The main goal was to remove the hard coded strings used to link pages.

// Route
"/explore/:gameId/entities/:entityId"

// Path using a hardcoded string
"/explore/astral-chain/entities/enemy-123"

// Path using a Type-safe API and strings for IDs
root().explore('astral-chain').entity('enemy-123').build()

// Path using a Type-safe API and typed objects with IDs
root().explore(game).entity(entity).build()

This has a number of advantages, the main one being that broken links become type errors instead of stray strings to hunt down.

I haven't rolled this out yet as I'm still not 100% on how I want to structure the new router state.
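
For reference, a builder like the one in the example above can be sketched in a few lines of TypeScript. This is my illustration of the pattern, not the actual library:

```typescript
// A chainable, type-safe path builder. Each step only exposes the
// segments that can legally come next, so invalid paths fail to compile.
type HasId = { id: string };

const idOf = (value: string | HasId): string =>
  typeof value === "string" ? value : value.id;

function root() {
  return {
    explore(game: string | HasId) {
      const gamePath = `/explore/${idOf(game)}`;
      return {
        build: () => gamePath,
        entity(entity: string | HasId) {
          const entityPath = `${gamePath}/entities/${idOf(entity)}`;
          return { build: () => entityPath };
        },
      };
    },
  };
}
```

Accepting either raw IDs or typed objects with an `id` field covers both usages shown earlier without duplicating the route structure.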

Once all of this is complete, I'll have a new open source library to release.

FrontierNav Data Tables

The spreadsheet-like data tables are gradually becoming more and more feature rich. The tables now support sorting by column, importing pre-formatted CSVs and moving entities, along with various other quality-of-life improvements. No doubt they will keep getting better as I import more game data.

Hiding Behind Cloudflare

Most of my public infrastructure is now behind Cloudflare. This should reduce the amount of maintenance I need to do in regards to legacy domains (like jahed.io) and security.

I've disabled direct HTTP access to my servers entirely; HTTP connections, as well as legacy domains, redirect using Cloudflare's Page Rules. No server needed.

While this does tie me more to Cloudflare, I am already reliant on it for reducing bandwidth costs so from an end user perspective, nothing's really changed. Moving traffic back to my server is a matter of removing SSL Client verification and adding the redirects.

Other Stuff

Thanks for reading.

Weekly Report: 7th October 2019

FrontierNav Terminology

Over the years FrontierNav's data model has changed a lot. This meant various parts of the codebase and web app used a range of words to describe the same thing. As I start introducing more terminology to the public, there's really no room for confusion. So I went through the project, migrated all the data and reduced the number of terms.

As an example, most data models have a range of terms that mean the same thing:

FrontierNav probably used at least 4 of these terms in the same contexts but now it only uses two: Entity and Relationships.

FrontierNav Performance

It's really hard for me to tell how well FrontierNav performs on lower-end devices as I don't own one. So as a general rule, after finishing a substantial feature, I'll put in some time to improve its performance or at least look into it.

This week, I optimised the Data Tables, which use artificial viewport scrolling to avoid rendering thousands of rows at once. Any lag in rendering is easily noticeable while scrolling, even on high-end devices, so it's important to optimise it as much as possible. In the React world, that usually means caching function calls (memoizing) and reducing re-renders in general. There's not much else to say about it. React's DevTools are good enough: they provide rendering times and show what triggered a render, which makes it clear where the cost comes from.

Other Stuff

Thanks for reading.