Previously, FrontierNav's sidebars were hard-coded React components that grabbed data and displayed it in various ways. Over time, the layout of these sidebars gradually converged into a few simple components as patterns and similarities emerged across the various datasets.
I briefly mentioned the data migration I did last week. I didn't have much to say about it at the time, but it enabled me to finally take the steps to generate layouts using just the data. No coding necessary.
Astral Chain was the first game I migrated to this approach since it's a fairly recent addition. Pokemon Sun/Moon was next since there wasn't a lot of data.
The two hard ones are Xenoblade 2 and Xenoblade X. There's a lot more data, but more importantly, they make use of some custom components that are difficult to standardise. For example, Xenoblade X lets you choose Probes for its Probe Simulation using the Sidebar. Both have a range of custom styling to better match their games.
Of course, all of these edge cases can wait as they're not needed for most games. This comes back to my main goal: to fully document Lufia II purely through the web client to prove that FrontierNav's data editing features are viable.
Support reordering properties
Support renaming properties
Support renaming entity types
Colours. Lots of colours.
Each Entity Type now has a random colour assigned which can be changed.
Introduced Universal Entity Type indicator to know when something is an Entity Type.
Introduced Universal Entity indicator to know when something is an Entity.
Table column headings now have context menus for quick edits.
Data validation on export.
This is mainly to ensure I didn't miss any edge cases when changing things.
Redesigned modal dialogs to be less... in your face.
Browsers & Local Storage
Browsers come with ways to persist data locally using Local Storage, IndexedDB and other APIs. I've always been reluctant to use these features as most browsers tend to treat them as disposable. I can't have users making hundreds of changes stored locally and expect them to stay there. Things can go wrong.
Firefox 71, for example, has broken local storage (at least the Fedora build of it). FrontierNav's authentication details are stored locally to persist logins. That doesn't work now on that browser, so users get logged out whenever they load the page. Even LastPass won't let me log in, so I'm stuck using my phone to get passwords. I could roll back, but then I lose security updates which are more critical. I just have to wait for the patch to be released. The situation sucks.
Added more placeholder games: Lufia, Lufia II, Golden Sun, Golden Sun: The Lost Age (2019-11-14)
Various Data Table improvements. (throughout the month)
I'll be using Lufia II as a way to finish off the remaining work needed to allow anyone to contribute. At the end of this, someone should be able to contribute most of the data for an entirely new game without much input from me.
Why Lufia II? It's a reasonably small game with enough variety to match modern games. Also, it provides nostalgic motivation for me.
I released Merge Requests around the start of last week, and the feature already had its first use before I even properly showed it off. I wasn't expecting it and only noticed after seeing a sustained increase in events around the feature. That first request was waiting for 3 days, which isn't great. I initially contacted the contributor by email, then by Twitter after I noticed a recent follower with the same name.
Just by having this one contribution, I was able to gather a lot of data and fix a few issues. People obviously don't view things the way I do, so having others even just try things helps a lot.
The lack of communication channels directly on the website is a problem but not an urgent one until more people start contributing. Merge requests currently don't allow comments. Introducing them shouldn't be difficult but ideally, I want to integrate it with the existing Community Forums to reduce duplication and maintenance.
Other FrontierNav Changes
Added incoming relationship columns to the tables.
This required a lot of work migrating the data model to support bi-directional lookups.
Listed entity types for each relationship column.
Added data validation when exporting to avoid invalid data.
Introduced documentation to help users discover more advanced features.
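The incoming relationship columns above boil down to a reverse index over the relationship data. A minimal sketch in JavaScript (the names and data shapes here are illustrative, not FrontierNav's actual data model):

```javascript
// Build a reverse index for incoming relationships.
// Given "from points to to" edges, record who points at each target,
// so tables can show incoming columns without scanning every entity.
function buildIncomingIndex (edges) {
  const incoming = new Map()
  for (const { from, to } of edges) {
    if (!incoming.has(to)) incoming.set(to, [])
    incoming.get(to).push(from)
  }
  return incoming
}
```

Maintaining an index like this alongside the forward relationships is what makes bi-directional lookups cheap at render time.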
Marketing vs Sharing
I personally view "marketing" as a dirty word. I know it isn't, but with commercialised tracking, privacy breaches and the rest of what's going on across the Web, I can't help but feel that way. It's the default. Whatever my feelings, I need to market FrontierNav a lot more than I currently do, otherwise no one will even know it exists.
I recently watched a GDC talk by David Wehle where he went through how he marketed his budget indie game side project. A lot of his points reminded me of when I first shared FrontierNav. At that point, I was just sharing; I didn't view it as marketing. But David deliberately went on forums like Reddit and posted there weekly in order to market his game, sharing other things just to avoid the self-promotion rules and getting banned for spam. To me, it's a bit disingenuous, but it worked. At the end of the day, people got what they wanted, the forums got more activity, and the game was a success.
Overcoming my distaste for marketing is going to take a while, but no doubt I have to do it. So I'm going to try to dedicate up to an hour or so every week to getting FrontierNav out there. For example, I'll be sharing gaming-related news via FrontierNav's Twitter account. I already started, since today marks 2 years since Xenoblade 2's release.
I said "sharing" again without realising. I guess "sharing" is the tactical term, whereas "marketing" is the strategic one.
I finally bought a new mechanical keyboard. Previously, I was using the Logitech G710+, which is a full-size keyboard with Cherry MX Brown switches. It's big, but comfortable. The media keys and volume rocker are especially convenient. The thick chunky cable is especially inconvenient.
At my previous workplace, I used a Razer Black Widow Tournament Edition which was a tenkeyless (TKL) with Razer Green switches, similar to Cherry MX Blue. Overall, I much preferred the clicky-ness of the Razer Greens over the Cherry Browns and I've been unsatisfied with it ever since. I would've bought a Razer Green keyboard if they weren't so expensive and prone to issues. The one in my workplace used to randomly shoot out its Left CTRL key...
I tried the Das Keyboard which was nice, but I prefer a TKL layout, which they only manufacture with Cherry Browns. Filco was also on my list, but their boards are a bit old, so I was risking spending a lot of money on something that might get a new model soon (even their website is straight out of the Flash era).
This sort of whack-a-mole went on for months. I even went down the rabbit hole of customisable do-it-yourself keyboards with hot-swappable switch slots. I blame YouTube's algorithms for that. Luckily, most of the choices aren't available in the UK so I dodged a bullet. At the same time, it kind of sucks how far behind the UK is.
Anyway, this week one of the keyboards I was eyeing dropped in price: the HyperX Alloy FPS Pro. TKL and Cherry Blue. ~60 GBP. The board is pretty much edge-to-edge, just the keys, no bezels, solid steel with no flex. No spacebar rattle. Perfect. Yet, wherever I look, it doesn't get many recommendations. Maybe I'm missing something, but I've got it now and I'm happy. So much more desk space, and when I'm typing, it feels like a machine gun.
Cons? The red lights are garish, but I switch off backlighting anyway. The red-striped Mini USB cable isn't great either, but it's not a big deal. Neither is the logo on the spacebar. US layout only? I can get used to it.
In terms of personal issues, I have noticed some wrist pain, but a wrist rest should sort that out. My key misses due to the US layout should fix themselves over time. I'll probably use acronyms for currencies more often now that there's no GBP symbol key. And I can't use my right thumb to hit Enter quickly without the numpad (the only time I ever use one).
Lastly, probably my biggest issue: the actuation force is really high. I don't recall needing so much force when I was using other Cherry Blues, but it has its pros and cons. I'm now hardly ever bottoming out the keys so I'm typing a lot faster. But if I'm pressing more keys which require more force, my fingers get tired even faster. The cold weather doesn't help.
All of these personal issues, I feel, will lessen over time as I get used to it. Like with most niche computer accessories, the keyboard is "gaming" branded, but it definitely is not for gaming. Overall, I'm very happy with it. After typing this post, my fingers say otherwise.
I wasn't planning to write a monthly report since I'm already writing weekly ones. But I realised weekly reports are a bit varied and it's nice to have a monthly update just around FrontierNav. This report only covers October. I'll be sharing what I did in November in a future report.
Changes in October
Optimised Data Tables for smoother scrolling (2019-10-13)
Introduced Staging Environment for more accurate automated tests (2019-10-29)
More features around Data Tables.
Pop-out Windows are the biggest feature this month. It's a huge convenience on desktop and saves a lot of clicking around.
They are a bit limited: Windows can't be resized and their content is static. However, the groundwork has been laid for more advanced features using the "Window Manager", such as...
Currently the Sidebar is tied to the URL. The Main Window, where the Maps and other visualisations are rendered, also relies on the URL.
Previously, FrontierNav only really had one context so sharing a single state, the URL, was never an issue. But the limitations are starting to show as new features start conflicting with existing ones.
For example, on mobile viewports, the Sidebar covers the Main Window. At the same time, closing the Sidebar causes the Main Window to change too, making certain pages inaccessible. I've been working around this by essentially having permutations of state for each page: one for the Sidebar, one for the Main Window, and one for both. This is obviously a major headache to manage.
Ideally, the behaviour of the Sidebar should depend on the context. So having the contexts drive that behaviour makes the most sense. Things like "Show the Sidebar when the user selects a search result", "Show the Sidebar when the user expands a table row", and so on.
Now, with a Window Manager implemented, this sort of behaviour should be easier to implement. The Sidebar pretty much is a Window, except it's docked to the left side rather than floating freely.
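As a rough sketch of what that could look like (the event and state names here are hypothetical, not FrontierNav's actual code), the context-driven behaviour reduces to a pure function from user actions to window state:

```javascript
// Hypothetical sketch: derive Sidebar visibility from user actions
// instead of the URL. Event and state names are illustrative.
function reduceWindows (state, event) {
  switch (event.type) {
    case 'SEARCH_RESULT_SELECTED':
    case 'TABLE_ROW_EXPANDED':
      // Contexts like these open the Sidebar without touching the Main Window.
      return { ...state, sidebar: { open: true, entityId: event.entityId } }
    case 'SIDEBAR_CLOSED':
      // Closing the Sidebar no longer changes the Main Window's page.
      return { ...state, sidebar: { open: false, entityId: null } }
    default:
      return state
  }
}
```

The key point is that the Main Window's state never appears in the Sidebar's transitions, so closing one can't break the other.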
I haven't released this change yet, but it's one example of what the "Window Manager" enables.
As always, Data Tables have been increasing in features as-needed. They're not that major to individually list. I've also added more data for Astral Chain such as Enemy Spawns and tidied up existing data from previous games.
There are still certain processes that I need to migrate over. Image upload is probably the more obvious one but it carries a huge security and cost risk compared to everything else. There's also templating to properly render the data in the Sidebar which is currently done in code.
I'm going to also have to start thinking about on-boarding processes to get others to use the data editing tools. Things like documentation, user guides, integrated merge processes and so on. There's a lot to do.
For some reason, Cloudflare has started to report an ever-increasing number of "Unique Visitors". Currently, it stands at 4 times the usual levels. It'd be great if that were true, but I'm doubtful.
My access logs, which avoid Cloudflare's cache, say it's more or less the same as before. Cloudflare's other metrics like "Total Requests" are also the same as before. Nothing else is following this new trend. So there's no reason to believe it.
I noticed node-terraform's automated publish workflow wasn't being triggered when new version tags were pushed. I use the exact same trigger for FrontierNav and it works fine. The only difference is that I manually push tags for FrontierNav, whereas node-terraform's tags are pushed by another workflow.
I'm kind of burnt out from debugging GitHub Actions so I'm giving it a break. It's probably an issue on their end or yet another caveat like a lot of the previous issues I had.
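For reference, the trigger in question looks roughly like this (the tag pattern is illustrative):

```yaml
on:
  push:
    tags:
      - 'v*'
```

One documented caveat worth checking: events created using the repository's default `GITHUB_TOKEN` don't trigger new workflow runs. So if the other workflow pushes its tags using that token, the `push` trigger will never fire; pushing with a personal access token or a deploy key is the usual workaround.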
Google Search is Trash
I've been using DuckDuckGo as my default search engine for over a year now. Everything's been good, and having the !g command to fall back to Google has helped ease the transition to a less forgiving service.
However, I've noticed something: Google is becoming worse at being a search engine as time goes on. It's full of "SEO" trash websites. The results are useless unless I basically tell it which website to search through using the site: keyword. The top half of the first page is always full of auto-generated junk too.
I don't know how long this trend will last, but I'm becoming more and more reliant on my bookmarks nowadays to find specific sites and run searches through them.
One of the most common processes when adding new data is editing and uploading images and other media. Currently, I'm basically rsync-ing images directly to the server. If anyone has images to share, I need to download them and rsync manually. So it made sense to streamline this process on FrontierNav after I introduced Merge Requests last week.
The main issue around handling images is the security risk. This is true for pretty much all user-generated data. Anything in the image processing pipeline can have bugs and vulnerabilities, ready to be exploited. In fact, it's pretty common to see new disclosures for these sorts of issues every now and then. Even the tools that are used to make images "safe" are vulnerable.
Given this never-ending issue, most applications split their content into separate services; isolating the potentially bad parts from the good parts. You can see this in your network logs with domains containing phrases like "usercontent".
Firebase conveniently provides asset storage where users can directly upload files with strict rules. Given these files are hosted and served through Firebase and Google Cloud Storage, it's already pretty isolated from the rest of FrontierNav.
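As an illustration of what those strict rules can look like (the paths and limits here are my own guesses, not FrontierNav's actual rules), Firebase Storage rules can constrain direct uploads along these lines:

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // Only signed-in users may write, only into their own folder,
    // only images, and only up to a size cap.
    match /usercontent/{userId}/{fileName} {
      allow read;
      allow write: if request.auth != null
        && request.auth.uid == userId
        && request.resource.contentType.matches('image/.*')
        && request.resource.size < 5 * 1024 * 1024;
    }
  }
}
```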
As FrontierNav's data is versioned, images will need to be versioned too. For example, say one version of the data pointed to "character.jpg", which had a full-body view of a character, but then "character.jpg" was replaced with a mugshot. Sure, newer versions know it's a mugshot, but previous versions are now pointing to a mugshot thinking it's a full-body view.
To solve this, all images must be named using a hash and file size. So when an image changes, it's uploaded as a new file instead of overwriting an existing one. "Versions" of different images are tracked in the data using their hashes rather than being tied to the filename.
Another benefit to this is that images are automatically de-duped, in a basic sense: images that are a few pixels different won't be de-duped as they'll have different hashes.
Local Storage and Offline Support
Images added to FrontierNav aren't actually uploaded immediately. They're only uploaded on Merge Request. This avoids rough edits needlessly polluting storage space.
A positive to this approach is that FrontierNav now supports loading images offline from local storage, IndexedDB to be exact. The rest of the app isn't offline-capable, but this is a major step forward.
Technically, nothing's stopping me from allowing other media like videos and audio to be uploaded using the same process. But I don't have a need for them yet. Once there is a need, I'll open that up.
All of the above solves one part of the problem: uploading. The other part is a bit more difficult: editing. It's fine to edit images locally, then upload them. But for basic processes like cropping and resizing, it can be a bit tedious to open, edit, save and upload individual images.
Though it is tempting to implement an Image Editor just for the fun of it, I'm going to hold myself back. Instead, I'll focus on adding more data and reducing more urgent points of friction in the data entry process.
After building the automation pipeline last week, I moved onto the main feature driving it: Merge Requests.
It's worth mentioning again that FrontierNav's data is database-free. The data is packaged as part of the website for various reasons. The dynamic nature of it makes it very easy for changes to conflict, and individual changes may not be valid without the whole.
Given this, there are two ways people can contribute to FrontierNav:
1. Providing me the data, which I transform to be FrontierNav-compatible.
2. Using FrontierNav's UI to modify data.
The current goal is to make (2) easier so that I'm not constantly spending time doing (1).
Currently, users can use the Data Tables to modify data in a basic spreadsheet-like manner and add markers on the map. However, to apply that data for others to see, users need to export the changes and send them to me manually. Then, I need to re-apply those changes locally and deploy them.
This process is slow and tedious, even when I'm the only one acting on them.
The idea behind Merge Requests is pretty simple.
Users can now submit their changes directly from FrontierNav without needing to export them.
An admin can review those changes within FrontierNav and approve them.
An automated pipeline picks up approved changes and deploys them.
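As a hypothetical sketch (the state names and transitions are mine, not FrontierNav's code), the lifecycle behind those three steps could be modelled as a small state machine:

```javascript
// Allowed merge request transitions: a contributor submits (open),
// an admin approves or rejects, and the pipeline deploys approved changes.
const TRANSITIONS = {
  open: ['approved', 'rejected'],
  approved: ['deployed'],
  rejected: [],
  deployed: []
}

function transition (request, nextStatus) {
  const allowed = TRANSITIONS[request.status] || []
  if (!allowed.includes(nextStatus)) {
    throw new Error(`Cannot move from "${request.status}" to "${nextStatus}"`)
  }
  return { ...request, status: nextStatus }
}
```

Keeping the transitions explicit makes it easy to reject nonsense like deploying a request that was never approved.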
This week, I pretty much implemented this entire process. I won't be relying on the automation just yet as it's not been fully proven. Instead, I'll run the same scripts manually to make sure it's working.
I was going to put a recording of the process here but OBS is being a bit glitchy at the moment.
I may have made it even harder to avoid when I wrote firebase-rules, which lets me re-use chunks of logic to enable things like rate-limiting, readable conditional statements and manual indexing.
The only thing really stopping me is the usage limits, but even at that point, it might be easier to pay up than to move to something else.
So why do I hate it? Because Firebase is extremely opaque. It doesn't provide much in the way of details. Which I don't blame it for: that's its selling point and that's why I use it. But when the time comes where I outgrow it or lose access to it (knowing Google), I need to be ready.
A while back, I decided that all features driving FrontierNav should use the web client. This was to avoid writing one-off scripts and piling on technical debt. If a feature is available on the web client, it's technically available to everyone, including myself when I'm not on a workstation.
So when I implemented Merge Requests, I needed a way to automate the deployment process as though a human could do it. This way, if the automation is no longer available, I could easily do it myself.
Initially, I used Nightwatch to run these automations. Nightwatch is what I use for integration testing so it supports browser automation. However, it's not a good fit for general automation. Nightwatch is focused specifically around writing tests and steering away from that is difficult within its test-oriented framework.
So I moved over to WebdriverIO, which recently split its test runner from its automation library. Perfect. I had to figure out a few things that Nightwatch provides out of the box, but it wasn't a big deal. And WebdriverIO's documentation is so much better.
I'm now planning to move over entirely to WebdriverIO in the future, with my own wrapper to avoid being locked into a framework. Nightwatch's activity has been a bit on-and-off recently with a focus on selling their testing solution, and the documentation isn't very good.
In last week's update, I briefly mentioned automating FrontierNav's deployments. Well, this week I went ahead with it. Why? The main reason is that I'm working towards moving FrontierNav's workflow completely off my workstation. If I want others to contribute, even myself, I don't really want access to my personal computer to be a requirement.
Eventually, everything should be available from FrontierNav's web client. That's the broader goal. Currently, I'm focusing on data entry and I'll have more to share on that next week.
GitHub Actions is GitHub's new automation offering. It's currently in preview, though it's due to be released this month. It's very much a minimum viable product, but their strategy has worked. For me, at least.
I was originally planning to move to GitLab and use their automation service, but I didn't realise that they don't provide runners. I'm expected to provision those myself. CircleCI is another offering that does provide runners but having to share a pipeline between multiple third parties was a turn off.
So GitHub Actions was the obvious solution. Both my code and pipelines in one convenient place. It wasn't easy. There were a few issues, especially related to caching between builds. But it's done.
Babel 7.7 was released this week and I upgraded FrontierNav to use it. This time round, instead of upgrading and hoping the integration tests pick up any broken changes, I decided to test Babel separately using snapshots.
I don't know why I didn't do this to begin with. With snapshots, I'm able to test each transformation Babel performs, making it clear why Babel is configured the way it is. It also catches any regressions and potential errors introduced by Babel without having to dig through layers of Webpack transformations.
My server has gotten to a point now where I have a good reason to use Ansible. I used Puppet extensively before, but it's extremely heavy for a handful of servers and Ansible seems a lot simpler.
I would much rather move to a "serverless" solution like AWS S3, but my current server plan is a lot cheaper and more predictable. So I'm in no rush.
One thing I did notice about these provisioning tools is that they really make it hard to find their documentation. I don't blame them. Paying for support is probably how they can afford to create these tools in the first place. However, it was a constant annoyance during my research phase.
I've added more games to FrontierNav. Right now that's kind of pointless, but it keeps me motivated. One of the reasons I'm creating FrontierNav is because I've always wanted to make a website for the games I enjoy. A sort of tribute.
In the late 90s and 00s, when I was gradually exposed to the World Wide Web, I always came across fan sites and wanted to make my own. It's a shame this trend has slowed down with the introduction of the popular web ("popweb"). It's one of the reasons I started my "World Wide Wanderer" series, though I haven't updated it much.
Among the games I added were Lufia and Lufia II. Though I was too young to fully appreciate them at the time, they left a massive impression on me, especially the soundtrack. The fan sites surrounding them have of course gone quiet as the series died off; some have been lost forever.
GameFAQs seems to be the best place to find communities for these older games, which is somewhat boring since the site is very uniform and not much of a tribute to the games they host. This is a common trend in popweb and one I want FrontierNav to avoid going down. This is why I introduced more games, so that the platform has the features to present them properly rather than retro-fitting them later. While theming isn't a priority now, it will be once there's data to warrant it.
It's nice that GameFAQs is keeping a historical catalogue of guides and walkthroughs written by fans. However, these documents would be better placed with non-profits like the Internet Archive, outside of commercial interests and the risks that come with them.
This week was mostly around improving FrontierNav's tests which I wrote about separately since it's quite long. You can read about it here.
I started using Flatpak (via Flathub) for most of the applications I use. I don't know why sandboxing hasn't become the default by now but it's nice to see some progress on it.
Deno also comes to mind as a replacement for Node.js/NPM. It doesn't sandbox, but at least it restricts access. Though, I'm not entirely sure if its reliance on grabbing remote dependencies via HTTPS is safe without lockfiles and hashes. I took a quick look just now and it looks like it's being figured out.