Asus XT9 Mesh Tips and Tricks

Last summer I replaced my aging Google WiFi home network with an Asus XT9 mesh – a main router to cover the central part of the house, and two nodes for the more distant wings. It worked reasonably well – until it didn’t. Several days ago both nodes suddenly went offline and ignored my attempts to add them back to the mesh, leaving parts of the house (including the kids’ TV corner) without adequate coverage. As you may guess, it was a disaster that had to be dealt with at the first opportunity!

It took some time, but at the end of the day everything was up and running. I will spare you the whole story, but there are several important tips about the XT9 that I learned today – both from my chat with Asus support and from my own trial and error – that are worth writing down.

Time to Introduce Dependency Hygiene

This post is inspired by the recent supply chain attack on coa, a popular command-line parser package for NodeJS, and the discussions that followed – both on LinkedIn and in the local InfoSec Slack community. This attack was not the first one, and it definitely won’t be the last. It’s true that Node modules are small and bring in a lot of sub-dependencies – but they are usually super-focused on what they do. The speed of development in Node is largely based on this modularity, which comes with a lot of inherent risks. So, how do you mitigate those risks?

It’s about time to introduce Dependency Hygiene.

TL;DR:
Use lock files.
Pin your dependencies.
Run audits.
Patch frequently – but don’t rush fresh upgrades in.

Now, let’s dive deeper into those items. I’ll explain what I mean by each of them and how they help mitigate modern supply chain attacks.

My primary working environment nowadays is NodeJS-based, so the examples will come from that world, but most of what I’m going to say applies to other ecosystems as well.
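
As a minimal sketch of the first two items (the package name and versions here are made up), compare a ranged dependency, which allows any compatible release to be installed:

"dependencies": {
  "example-arg-parser": "^2.1.0"
}

with the same dependency pinned to the exact version you have actually reviewed:

"dependencies": {
  "example-arg-parser": "2.1.3"
}

Commit the lock file (package-lock.json or yarn.lock) next to it and install with npm ci or yarn install --frozen-lockfile to reproduce exactly that tree, then run npm audit or yarn audit regularly to catch known-vulnerable versions in it.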

Do not follow HTTP redirects with Gaxios

Earlier today I was enhancing a relatively old piece of NodeJS code, so I decided to convert it from axios to gaxios along the way. Most of the work was pretty transparent, since the much smaller and better-maintained gaxios is pretty much a drop-in replacement for axios, which seems to be stuck in its v0.x days forever.

However, there was one request that stood apart.

This request actually resulted in an HTTP redirect, and my goal was to intercept the target URL and extract some query string parameters. In axios, that was straightforward – it was enough to disallow following redirects and make sure that HTTP 302 is not treated as an error:

const axios = require('axios');

const response = await axios.request({
  ...
  maxRedirects: 0,                              // do not follow any redirects
  validateStatus: (status) => (status === 302)  // treat the 302 itself as a successful response
});
const url = response.headers.location;

Yet, the same configuration produced a different outcome in gaxios – the request failed with a "max redirects reached" error. My attempt to raise maxRedirects to 1 while keeping the status validation in place was not successful either – gaxios just issued another request to the new target.

To my surprise, quick googling did not reveal any relevant results. I took that as a hint that this is not only doable, but likely so trivial that nobody had to ask – so I kept digging. One of the merged PRs made me realize that gaxios is basically a wrapper on top of the Fetch API, and the configuration object is passed down to the fetch implementation with minimal modifications. From here, the solution was simple –

const { request } = require('gaxios');

const response = await request({
  ...
  redirect: 'manual',  // fetch option: do not follow the redirect
  validateStatus: (status) => (status === 302)
});
const url = response.headers.location;
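
From there, extracting the query string parameters (the original goal) takes a couple more lines with the built-in URL class; the base URL and parameter name below are just placeholders:

// Location may be a relative URL, so pass the original request URL as the base
const target = new URL(url, 'https://original.example.com/');
const code = target.searchParams.get('code');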

Enjoy!

React + GraphQL: defining undefined data in Unit Tests

You are a frontend developer (or maybe a full-stack developer doing some frontend now), and you were tasked with building just another React component that makes use of the shiny new GraphQL call – say, presenting the details of the last credit card transaction to the end user.

Sure thing, you say, why not? The pattern is well-known and there are plenty of examples in the codebase to copy-paste from. Apollo Client provides you with its useful useQuery() hook, and the GraphQL query is conveniently generated for you by some home-grown or third-party tool, ready to use. So you quickly end up with something like this:

function LastTransaction() {
  const { loading, data, error } = useQuery(gql(getLastTransaction));
  return (
    <>
      { loading && <Loader /> }
      { error && <Error /> }
      { data && <span>Last charge: {data.getLastTransaction.amount}</span> }
    </>
  );
}

You embed your new component in the app and it shows the transaction amount. So far, so good. Time to move on to the unit tests. Again, the pattern is known and no surprises are expected. You have MockedProvider from Apollo Client, and you are well aware that even the mocked client will take you through the loading stage of the component lifecycle – you just need to wait a bit for the data to be rendered. Here it goes:

it('should fetch and display the last transaction', async () => {
  const mocks = [{
    request: { query: gql(getLastTransaction) },
    result: {
      data: {
        getLastTransaction: { 
          amount: '$42'
        }
      }
    }
  }];
  const wrapper = mount(
    <MockedProvider mocks={mocks} addTypename={false}>
      <LastTransaction />
    </MockedProvider>
  );
  expect(wrapper.find('Loader').length).toBe(1);
  await wait(0); // a tiny helper that resolves on the next tick, letting the mocked query settle
  expect(wrapper.find('Loader').length).toBe(0);
  expect(wrapper.find('span').at(0).text()).toContain('$42');
});

This is where it gets bumpy. Suddenly your test fails, with no data shown on the page.

Maybe your request is wrong and isn’t being matched? You convert result to a function and expect it to be called. It is, so the request is matched, and the Loader is gone too. Time to switch to real debugging mode – and discover that the data is undefined in the main component! How come? All other similar tests are passing, and quick googling reveals nothing interesting or relevant…
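
For reference, that check looks roughly like this (assuming Jest; MockedProvider accepts a function in place of the plain result object):

const resultFn = jest.fn(() => ({
  data: { getLastTransaction: { amount: '$42' } }
}));
const mocks = [{
  request: { query: gql(getLastTransaction) },
  result: resultFn
}];
// ...mount the component with MockedProvider as before, wait, and then:
expect(resultFn).toHaveBeenCalled();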

It appears that the caveat is in the generated GraphQL query statement, which “conveniently” contains all the fields available on the getLastTransaction query. This includes fields that your new component is not using – and thus you did not bother to define in your mock data. However, Apollo Client still does its job of making sure that the data you get back matches your request. Note that this includes null values for the nullable fields – that’s what you supposedly requested, so Apollo Client is obliged to provide them back.

In our case, the getLastTransaction query included a merchant field, and once you add it to the mock, the test works like a charm:

    result: {
      data: {
        getLastTransaction: {
          merchant: 'Amazon',
          amount: '$42'
        }
      }
    }

Another potential caveat with the generated queries lies in future schema changes. Some day more fields may be added to the schema of the query you are using – for example, your backend team may realize that it makes sense to add a timestamp of the last transaction. The generated queries will be updated, and your unit tests will start failing again – this time with no frontend changes at all.

Repeatable build for Lambda Layers with Yarn Workspaces

I have a pretty big monorepo, managed solely by Yarn Workspaces (no Lerna). One of the packages (“workspaces”) contains a set of third-party NodeJS libraries that we use as a shared layer for our Lambda functions, collected as dependencies in that package’s package.json. The build script for this package is supposed to collect all the dependencies in a zip file that will later be published by Terraform. Unfortunately, Yarn cannot build a single workspace from the monorepo, so initially we opted to use NPM directly – copy package.json to a build folder, run “npm install --production” there, and zip the resulting node_modules tree.

My main problem with this approach (besides mixing the build tools) was that the build was not repeatable – each time we ran npm install we could get a newer compatible version of any dependent package, since the versions are “locked” by Yarn in the top-level yarn.lock file and NPM (obviously) is not aware of it. So I decided to dive deeper and see how this could be solved in a better way.

It appears that while Yarn hoists all the dependencies to the node_modules of the top-level workspace, you can explicitly opt out of this behavior for some dependencies – or, in my case, for all the dependencies of a given workspace.

Yarn Workspaces configuration before:

"workspaces": [
  "packages/*"
]

Yarn Workspaces configuration after the change, assuming Lambda Layer dependencies are collected under common-lambda workspace:

"workspaces": {
  "packages": [
    "packages/*"
  ],
  "nohoist": [
    "common-lambda/**"
  ]
}

Note that the nohoist array should contain the workspace name (including the namespace when applicable) and not the workspace folder.

After this change, packages/common-lambda/node_modules will contain the proper versions of all the dependencies to be packaged as a Lambda Layer. Those dependencies will be updated automatically on yarn install, and the node_modules folder can be packaged directly.
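
For completeness, here is a rough sketch of that packaging step as a script inside packages/common-lambda/package.json (the script name and exact paths are illustrative, not prescriptive). Note that for the NodeJS runtimes AWS expects the layer contents to sit under a nodejs/ prefix inside the archive:

"scripts": {
  "build": "rm -rf dist && mkdir -p dist/nodejs && cp -R node_modules dist/nodejs/ && cd dist && zip -qr layer.zip nodejs"
}

The resulting dist/layer.zip is then what Terraform can publish as the new layer version.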