Jahed Ahmed

Weekly Report: 15th July 2019




So Many Security Updates

It seems every week there's a new vulnerability found in an npm package. While this is great progress for the community, it's a pain to deal with when maintaining multiple projects. Automation is a solution, but then you need to maintain the automation. Removing dependencies and writing your own just means you'll probably have vulnerabilities that you don't know about since no one's looking. Eventually dependencies rot and security issues take over. At that point you need to abandon ship and find a new one.

It's an unsolvable problem as far as I can tell.

FrontierNav Static Generation

FrontierNav is a JavaScript heavy application. So much so that loading it up without JavaScript will just show a warning. This is fine, since without JavaScript, FrontierNav becomes a bare bones wiki and there are plenty of better wikis around.

However, it does cause issues with search engines, social network sharing and others where they expect relevant information in the HTML on request. Google's search engine does use JavaScript, but others like Bing don't. Share a link to FrontierNav on Twitter and you'll just see a generic embed. This is a problem as it means FrontierNav isn't interacting properly with the rest of the web.

The proper solution to this is "Server-Side Rendering" (SSR). I've tried SSR before, but it doesn't work: a lot of the code wasn't written with it in mind, so it just fails. Fixing it would mean going through a lot of code and making sure any future code works in both cases. It's a big investment.

An alternative I've been thinking of is to go through every page in a web browser and dump the HTML. I've tried it this week using Puppeteer to automate the process and... it works.
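As a rough sketch of that experiment (assuming puppeteer is installed; the URL and output path here are placeholders, not FrontierNav's actual setup):

```javascript
// Sketch: load a page in headless Chromium and dump its rendered HTML.
const fs = require("fs");
const puppeteer = require("puppeteer");

const dumpPage = async (url, outputPath) => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Wait for network activity to settle so client-side rendering finishes.
  await page.goto(url, { waitUntil: "networkidle0" });
  fs.writeFileSync(outputPath, await page.content());
  await browser.close();
};

dumpPage("https://frontiernav.net/", "index.html");
```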

There are a few issues though: cache invalidation (as always) and deciding where to run it. It can't run on-demand server-side, since using a whole web browser is slow and eats up a lot of resources, so it'll need to run ahead of time. Which in turn means it'll need to know which pages to cache.

Right now, the experiment uses the sitemap.xml to scrape FrontierNav. That works for the simpler pages. But pages like the Maps have potentially hundreds of elements, and dumping all of that into HTML would be ridiculous. Yet these pages are the ones most likely to be shared on social media. I could strip the excess post-render, but then the solution becomes more complex, at which point SSR becomes more appealing again.
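Pulling the page list out of the sitemap can be sketched like this (regex-based for brevity; a real sitemap may warrant a proper XML parser, and the URLs are just examples):

```javascript
// Extract the <loc> URLs from a sitemap.xml string.
const listUrls = xml =>
  [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map(match => match[1]);

const sitemap = `
<urlset>
  <url><loc>https://frontiernav.net/</loc></url>
  <url><loc>https://frontiernav.net/explore</loc></url>
</urlset>`;

listUrls(sitemap);
// → ["https://frontiernav.net/", "https://frontiernav.net/explore"]
```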

Pre-rendering every page is also a lot of waste, since the vast majority of pages won't be directly navigated to. I could make it more intelligent and have it use access logs to find the pages it should cache, but then it's caching after the fact, so it'll always be one step behind.

Decisions. Decisions.

FrontierNav Search Performance

I've become more comfortable using Web Workers now. My first implementation for the Universal Search had a single global Worker searching across multiple indexes. Now, there's one for each index, created only when Search is activated and discarded after. I can't really tell what difference it's made, since there aren't any Dev Tools that show performance metrics for individual Workers, but from an implementation point of view, it makes a lot more sense. There's no point having Workers waiting around, hogging resources.
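A minimal sketch of that lifecycle, assuming a hypothetical search-worker.js and message shape (not FrontierNav's actual code):

```javascript
// Spawn a Worker for an index when a search starts, discard it when done.
const searchIndex = (indexName, query) =>
  new Promise((resolve, reject) => {
    const worker = new Worker("search-worker.js");
    worker.onmessage = event => {
      resolve(event.data.results);
      worker.terminate(); // no idle Workers hogging resources
    };
    worker.onerror = error => {
      reject(error);
      worker.terminate();
    };
    worker.postMessage({ indexName, query });
  });
```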

Agility Bug Fixes and Maintenance

Agility, the mobbing timer I made a while back, had a bug report. Since it's just a single web page, I recently stripped it down from a Middleman-generated static site to a simple static site with no build step. It's a lot easier to maintain now. I'm still a bit reluctant to be maintaining it though, since it's not something I use anymore and I'm not getting anything out of it. But it's not a big deal.

Weekly Report: 8th July 2019

I wrote this entire blog post and it was really long, so I broke it down into separate posts. This is just a summary.



I spent a lot of time sorting out my workstation. It's a bit long and boring so I wrote a separate blog post on it.

Thanks for reading.

FrontierNav Security - CSP and SRI

I can't remember how I started going down this route, but I do know that, as someone with multiple websites, I should be doing as much as I can to ensure nothing malicious is loaded onto my viewers' computers.

Actually, I do remember. I was looking into how FrontierNav can introduce an iframe-based, postMessage API to allow third-party integrations -- an exciting topic for another time. Loading iframes from other places is of course open to abuse, so I looked into securing it.


Linux Madness

My sound card died this week. I didn't mind. A year ago, I assumed the interference coming from my speakers was from my motherboard's on-board sound, so I bought a PCI one, avoiding an external one so that I wouldn't have to deal with more cabling.

When I switched from Windows to Linux as my main operating system, the main issue I had was with drivers: Sound (Asus), Wi-Fi (Broadcom) and Video (Nvidia). They're all proprietary, and Linux support is abysmal thanks to various free-versus-proprietary conflicts of interest.

I use CentOS on my servers, so I thought I'd try Fedora. But since Fedora has a Free Software policy, most of the drivers weren't officially supported. The sound didn't work at all. So I chose Linux Mint which came with everything out of the box.

After a few months, I noticed my speakers were still picking up random radio stations every now and then, so I bought new ones. Problem solved. I didn't notice a difference in sound quality with the sound card either. So it was all a waste of time.

It seems Fedora 30 has since greatly improved its support. After the sound card died, and with the issues I've been having since last week, I gave Fedora another chance. This time, everything worked -- except the Wi-Fi, so I switched over to my old PowerLAN, which now works because I relocated my router to a different room. Though, I'm not sure if the PowerLAN or Fedora is the cause of the random lag I've been getting on my network. What a mess.

On the bright side, I actually like using GNOME now. I've always been an Xfce fanboy, but I've had to deal with a ton of caveats over the last year. Firefox flickers randomly and gives me a headache. The panels go out of sync with my multi-monitor setup. Music stops playing when I lock the screen. Screen tearing everywhere with Nouveau drivers. Broken resolutions with Nvidia drivers. The list is endless.

I wonder what new issues I'll run into over the next year with Fedora...

Still better than Windows 10.

Thanks for reading.

Weekly Report: 1st July 2019

Weekly Report is going to be a new series of blog posts giving an update on what I did during the week. The aim is to share what I've done and also to help me appreciate and compare my achievements.

While there will be FrontierNav-related updates in this series, it'll also contain updates on my other projects. If you just want FrontierNav updates, you can wait for the monthly FrontierNav Progress Reports.


Self-Hosted Event Tracking with Nginx and Lua

Over the years, I've become less and less trusting of third-party network requests on the websites I visit. In part, it's due to the ever-escalating hoarding and selling of our personal data by ad-tech companies; something I witnessed while working in the industry.

However, there are legitimate use cases for tracking on the public web to better understand your users and improve your product. In fact, I've come to the conclusion that it really is the only way to get accurate feedback. The vast majority of users will never tell you how they use your website, and the ones that do will likely skip over certain details.

So, to me, the problem with tracking isn't the tracking itself but how the data is managed. And the easiest way for me to make sure data isn't being misused is to host it myself.


It's worth mentioning why I need event tracking and what my requirements are:

Why not use Request Logs?

One of the simplest ways to get some event data is to look at request logs. Doing just that would fulfil my requirements.

However, in my case, I've put my web server behind Cloudflare's CDN. Meaning, Cloudflare gets most of the requests, and only contacts my web server when it needs to refresh its caches.

Removing the CDN is not an option as it reduces a lot of my bandwidth costs and server load. And, as far as I know, Cloudflare's free tier does not provide network logs.

The only solution is to have a separate request sent directly to my server with similar details. This can be done either by using a separate domain or, to avoid cross-origin request issues, by disabling CDN caching using Cache-Control headers.

While the latter does mean the CDN is handling every request and likely logging it, that's already the case with most of the website's content. Removing the CDN also introduces other issues such as exposing the web server to direct malicious attacks.

Existing Solutions

There are plenty of self-hosted event tracking services that provide similar features to third-party solutions like Google Analytics. Matomo (formerly Piwik) is probably the most popular of the bunch.

At the end of the day, all these web analytics services can be broken down into three steps:

  1. Send. A client sends events to a server.
  2. Store. The server processes and stores events.
  3. Query. The server provides an interface to query events.

Pretty much every solution differentiates itself on its querying capabilities. So much so that Matomo, while mostly open source, places its more advanced features behind a paywall.

While these services satisfy my basic requirements, they also do a lot more, and as such, I lose a lot of control and have to maintain more than I'm actually using.

My Solution

I already have an Nginx server compiled with a Lua module (via OpenResty), so ideally, a simple handler to log my events to disk will be enough. To simplify event processing, I can log my events as JSON, then query and aggregate those logs using jq. The server itself is not very powerful, so anything heavier, like Node.js, isn't possible.

1. Sending Events

Tracking has been a core part of the web for a while. So much so that web browsers have built-in mechanisms to send tracking events.

Anchor tags (<a>) have the ping attribute, which sends a POST request to a list of URLs. However, it only works on anchor tags, so it won't cover buttons and other interactive elements.

The Beacon API also sends a POST request with a custom payload. It's specifically made for event tracking so browsers can optimise for it. In a perfect world, I'd use it, but the API doesn't work on certain browsers like Safari 10 so it isn't universal.

There's also the Fetch API which lets you send any request, and with the keepalive flag enabled, it's similar to the Beacon API but with more flexibility.

One of the most popular ways is to use the <img> tag programmatically to send a GET request. I'll be using this approach as it's common and lightweight. However, it's possible to use any of the methods above as at the end of the day, they all do the same thing: send a request. Chances are I'll switch to the Beacon API at some point.

const track = payload => {
  const img = document.createElement("img");
  img.src = `${window.location.origin}/_event?${payload}`;
};

Browsers request the src of an image as soon as it's set, regardless of whether it's on the page or not. Here, it'll send the request to my server's /_event endpoint, which will log it as JSON.

The server then responds with an image consisting of a single blank pixel. This is where the term "Tracking Pixel" comes from, and a lot of HTTP servers come with built-in features to respond with this blank pixel. Nginx uses empty_gif.

The event payload is a query string; how that's generated is up to you. I personally used URLSearchParams with a polyfill for older browsers. I originally thought of using JSON.stringify to reduce the number of formats the payload goes through; however, the URL becomes unreadable and difficult to inspect.
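For illustration, generating the payload with URLSearchParams might look like this (buildPayload and the field names are just examples, not FrontierNav's actual code):

```javascript
// Serialise a flat object of event fields into a query string.
const buildPayload = fields => new URLSearchParams(fields).toString();

buildPayload({
  logger: "client",
  href: "https://jahed.dev/about"
});
// → "logger=client&href=https%3A%2F%2Fjahed.dev%2Fabout"
```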

On Nginx's end, I added this location block.

location = /_event {
    access_log /var/log/nginx/event_access.log main;
    error_log /var/log/nginx/event.log info;

    log_by_lua_block {
        ngx.log(ngx.INFO, require('cjson').encode(ngx.req.get_uri_args()))
    }

    # Disable Caching
    add_header Last-Modified $date_gmt;
    add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since off;
    expires off;
    etag off;

    # Respond with a blank pixel.
    empty_gif;
}


Pretty simple. Caching is disabled so that the CDN always forwards the request to the server, where it's logged. Note that I'm using cjson, which you'll need to install, ideally via LuaRocks.

2. Storing Events

In the location block shown earlier, we use Lua to log the query parameters as JSON. The way Lua is integrated into Nginx means that these log lines are logged into Nginx's error_log, rather than the usual access_log, which is reserved for... well, access logs.

One thing to mention is that the access logs also contain our events. So what's the point of the Lua block? Mainly, it avoids parsing the query parameters externally, which could introduce errors. By doing it all through Nginx, we create a clear cut-off point between HTTP logging and event processing. We could even turn off access logs to reduce server load once everything's up and running.

Unfortunately, Nginx's error logs are wrapped with a lot of junk. For example, here's a truncated log line from our Lua block:

2019/09/19 03:22:04 [info] 29900#29900: *1966485 [lua] log_by_lua(nginx.conf:76):2: {"level":"info","version":"v1.319.0-0-g05d524a-production","href":"https:\/\/jahed.dev\/about","logger":"client","source":"PageLogger","createdAt":"2019-09-19T02:22:03.590Z","referrer":"https:\/\/google.com"} while logging request, client: ...

We can extract and parse the JSON by piping together some common Unix commands.

cat /var/log/nginx/event.log | fgrep log_by_lua | sed --unbuffered -r 's/.*log_by_lua[^{]+(\{.+\}) while.*/\1/' | jq '.'

This will output something like:

{
  "referrer": "https://google.com",
  "level": "info",
  "version": "v1.319.0-0-g05d524a-production",
  "href": "https://jahed.dev/about",
  "logger": "client",
  "source": "PageLogger",
  "createdAt": "2019-09-19T02:22:03.590Z"
}

And there we go. No more personal information, just enough data to generate aggregates.

We can also use log rotation to automatically trigger extraction periodically and to delete older logs. I use logrotate myself but there are many others.
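For illustration, a logrotate config for this might look something like the following; the paths and schedule are just examples:

```
/var/log/nginx/event.log {
    weekly
    rotate 8
    compress
    missingok
    postrotate
        # Tell Nginx to reopen its log files after rotation.
        [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
    endscript
}
```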

3. Querying Events

Since we now just have files of JSON, we can use any tool that consumes JSON to query our logs. jq provides more than enough functionality for my use cases. It's portable, fast, pipe-able and in general very convenient for most terminal-based work. But you can also push the data elsewhere like Logstash, Elasticsearch, really anything.
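For example, counting page views per URL could look like this; events.json stands in for the extracted log from the previous step:

```shell
# Sample events standing in for the extracted JSON log.
cat > events.json <<'EOF'
{"href":"https://jahed.dev/about","source":"PageLogger"}
{"href":"https://jahed.dev/about","source":"PageLogger"}
{"href":"https://jahed.dev/","source":"PageLogger"}
EOF

# Count views per page, most viewed first.
jq -s 'group_by(.href) | map({href: .[0].href, views: length}) | sort_by(-.views)' events.json
```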


Well, that's pretty much everything. We have a client sending event data to a server which logs it as JSON. From there, we can do whatever we need to with the data. If I need to do anything more complicated such as understanding user journeys, I can easily add the necessary data on the client-side and query it server-side. If and when querying through the terminal becomes too laborious, I can easily import the data to something more suitable and run my queries there.

Thanks for reading.