Morning Routine

Thursday, 25 January 2024

Cold shower

Have you been like me, working until late in the evening or even pushing it into the night, and do you want to get back to a schedule that is more in line with the people around you, or simply better for your health? Then I have the following morning routine for you, which has worked out great for me. It really helps me stay productive and energized during the entire day.

Waking Up: Alarm at 7:15 AM

Every day, including the weekends, set your alarm at the same time. For me 7:15 AM is ideal, as it gives me time to do my cardio exercises and spend time with my kids before they go to school (at 8:15 AM).

Knowing your alarm will always go off at 7:15 AM is a good trigger to go to bed on time. I also set an alarm in the evening at 11 PM to make sure I get enough sleep (7-8 hours). In case you weren’t already aware: getting enough sleep is crucial for our wellbeing. (Check out Why We Sleep.)

I know there are plenty of people who have to wake up even earlier every day, and they don’t get enough praise. I’m not sure if I’m simply not a morning person, but I found it quite challenging to get up early, especially when I don’t really have to (e.g. on Sundays).

When the alarm goes off, get up, don’t snooze, and simply don’t think about it. Don’t try to listen to how you feel that day; just get out of bed.

Energizing with Cardio: 15 Minutes to Kickstart Your Day

Immediately after waking up, engage in a 15-minute cardio workout. This could include jogging in place, jumping jacks, brisk walking, or any activity that increases your heart rate. The benefits of morning cardio are numerous:

  • Boosts Metabolism: Jump-starts your metabolism, helping you burn more calories throughout the day.
  • Enhances Mood: Releases endorphins, natural mood lifters, promoting a positive start to the day.
  • Increases Energy: Improves blood circulation, providing more energy and alertness.

There are also plenty of cardio workout videos on YouTube. Recently I have used the following ones, but I will add more to include some variation during the week:

I give myself one day rest from cardio on Saturdays, and on Sundays I go for a run outside.

Strength Training: Kettlebell Swings

Following your cardio session, transition into strength training with kettlebell swings. This exercise is a full-body workout, targeting several muscle groups:

  • Core Strength: Kettlebell swings engage your core, enhancing stability and overall strength.
  • Improved Posture: Strengthens back muscles, which is essential for good posture.
  • Flexibility and Balance: Enhances both flexibility and balance, reducing the risk of injury.

For more background information on why kettlebells work so well, check out Tim Ferriss’s blog (author of The 4-Hour Body).

Cooling Down and Refreshing with a Cold Shower

After allowing your body to cool down for about 30 minutes, it’s time for a cold shower. The idea of cold showers might be daunting, but they offer significant benefits, as explained by neuroscientist Andrew Huberman. According to Huberman’s research:

  • Enhances Circulation: Cold water causes blood to move to your organs, improving circulation.
  • Boosts Immunity: Increases white blood cell count, strengthening the immune system.
  • Improves Mental Resilience: Builds mental toughness and resilience, teaching your body to adapt to stressful situations.
  • Speeds Up Recovery: Helps in reducing muscle soreness and speeds up the recovery process.

I know it is quite the hype and trend to take cold showers or ice baths, but the research seems to back it up. So try it out and stick with it. Who doesn’t want to have a prolonged dopamine increase?

Conclusion: Embracing a Holistic Approach

By incorporating a mix of cardio, strength training, and the invigorating practice of cold showers, as recommended by experts like Andrew Huberman, you’re not just starting your day; you’re elevating your entire lifestyle. This routine is more than just a series of tasks; it’s a commitment to your physical and mental well-being. Embrace this journey and watch as you transform into a healthier, more resilient version of yourself. Remember, consistency is key, and the best day to start is today!

tags: lifestyle, health

Running your Startup on Kubernetes ($90 per month)

Thursday, 06 April 2023

Minimum Viable Kubernetes

When you have made the decision to start a startup and begin working on your MVP (Minimum Viable Product), at some point you need to deploy and run your software somewhere to get your product into the hands of actual users. With their feedback you can iterate on your product to find that precious product-market fit and become the next unicorn (or at the very least start making money instead of losing it).

While your MVP is almost guaranteed to change drastically over time (even without pivots), a strong foundation will help you iterate faster and scale your product at a later stage without major rewrites. You primarily want to spend your time developing product features and squashing bugs, not so much on devops and infrastructure (while still keeping running costs to a minimum).

In this blog post I’d like to present what I consider a great foundation, with templates to bootstrap your project, running on Google Cloud and Kubernetes (K8s). I know there is an ongoing discussion and heated debate about Cloud vs Dedicated servers, and about why you should or shouldn’t use Kubernetes for your business. Take for example this recent post from 37signals, discussing their journey on and off the cloud, and on and off Kubernetes. As always, it depends on your situation and your expertise, and I’m aware that I’m biased, having used Google Cloud and Kubernetes a lot, but I hope that my experience and the tools and resources I’ll share here will help you hit the ground running and create a successful company.

Minimum Viable Kubernetes (MVK)

I’m assuming here that you have heard of k8s and that you know that it is an open-source orchestration tool for containerized applications, originally developed at Google. In case you need an introduction or a refresher, I would refer you to their excellent documentation.

Like the minimum set of features you define for your MVP, I like to start defining the minimum set of features you should expect to get from your k8s cluster, in other words, your Minimum Viable Kubernetes (MVK):

  • Flexibility: Run any containerized (Docker) application or database. If your startup requires a high level of customization, then Kubernetes is great.
  • Scalability: Scaling your application up and down based on CPU-usage. If your startup expects to grow rapidly, you can easily horizontally scale your load.
  • Load balancing: Load-balance across pods and services
  • Self-healing and Auto-recovery: High availability is a nice bonus, but one of my main goals in life is to sleep at night, so we want self-healing that automatically restarts containers and nodes that fail.
  • Easy deployment: K8s comes with a great command line interface (CLI) to manage deployments, but I’ll later introduce a great CLI wrapper (Bedrock-CLI) that also helps in building (Docker) and provisioning (Terraform) your infrastructure and applications.
  • Future proofing: Kubernetes has become the de facto standard for container orchestration.
  • Maintainable over time: Yes, you can install and run Kubernetes everywhere, but k8s is quite complex with many running components (control plane nodes, compute nodes, scheduler, proxy networking, load balancers, kubelets, etc). Unless you have a dedicated team of devops people (which you don’t, and if you have, congrats on securing your Series A investment!), I would stay away from installing and managing k8s on dedicated servers and instead choose a Cloud managed solution. This supports auto-upgrading to newer versions and making sure you run with the latest security fixes. In the next section I’ll discuss why Google Cloud’s GKE is the best option.
  • Low cost: We always like to keep costs at a minimum too (traded off against gained features). Running in the Cloud is often synonymous with spending big bucks, but I’ll show you that a production ready cluster can be had for $90 a month (or even $50 if you cheap out on compute resources).
  • Supporting multiple environments: Last but not least, we’d like to support different environments (e.g., staging and production). Of course you can spin up a separate k8s cluster for each environment, but we already listed low cost as a minimum feature, so I’ll show you later how you can manage multiple environments with Namespaces and Gateway load balancers in a single cluster.

So now that we have defined our MVK, where running in the Cloud is a must, let’s see why the Google Cloud managed solution (GKE) is the way to go.


Google Kubernetes Engine (GKE)

Google Cloud Platform (GCP) offers a managed k8s solution called Google Kubernetes Engine (GKE). As noted earlier, Google is the birthplace of Kubernetes and has over 15 years of experience running massive workloads. That doesn’t mean Google is the only Cloud provider with a managed solution, however. Amazon Web Services (AWS) and Microsoft Azure both introduced managed clusters, as did DigitalOcean and a long tail of other providers, adding to the immense growth and popularity of Kubernetes in the industry. Kubernetes has overtaken other container orchestration tools like Mesos and Docker Swarm, and is a cloud-agnostic alternative to vendor lock-in services like AWS ECS. A push towards hybrid-cloud and multi-cloud environments further fueled the need for k8s, allowing you to run the same container everywhere.

I don’t have much experience with Microsoft’s AKS (Azure Kubernetes Service) or any with Digital Ocean’s managed Kubernetes, but I have extensively used both EKS (AWS) and GKE (GCP). I know that choosing a Cloud provider often comes down to your personal preference and sticking to what you already know and have used before. When it comes to managed k8s however, I believe GKE is the clear winner for three main reasons:

  1. Usability: The AWS UI console for EKS is nowhere near as clean as GKE. Giving users access to your cluster is painful and logging out of the box is terrible. GKE also has better support and flawless version upgrades. All in all AWS EKS gives me the feeling that they try to push you to use ECS instead. Comparing EKS and ECS is a whole topic on its own, but some limitations of ECS are:
    • You can only mount EFS volumes, which are not meant for running stateful applications like databases, whereas EKS has EBS support.
    • No configMaps.
    • No init containers
    • No post-start or pre-stop hooks
    • AWS vendor lock-in
  2. Competitive Advantage: GKE has features that are not available on other platforms, such as GKE Autopilot, which is a fully managed Kubernetes experience that eliminates the need for cluster management tasks such as node maintenance and upgrades. GCP has a strong focus on containerization which is evident in its portfolio of container-related services, including Cloud Run, Cloud Build, and Anthos. GCP also offers a range of integrated tools for monitoring, logging, and managing Kubernetes clusters, making it easier to manage and maintain Kubernetes workloads.
  3. Price: While both GKE and EKS have a management fee of $0.10 per cluster per hour (~$72 per month), your first cluster in GKE is free of management charge. This is also the reason why we will create multiple environments in one GKE cluster, to avoid this fee (and additional node compute resources). GCP offers sustained use discounts, which can lead to significant cost savings for long-running workloads.

So for starters I would recommend GKE over EKS; GCP’s pricing and integrated tools make it a more attractive option. If you are somehow determined to run on AWS, then I would still recommend running k8s over ECS, and migrating to GKE later ;) Both cloud providers also come with the additional benefit of access to a massive suite of other managed cloud services you can easily integrate with, like BigQuery, PubSub, VertexAI, and many more. And before you start spending your own money on these services, don’t forget to apply for free credits, or, in case you secured some funding, you might be eligible for the startups cloud program, which can cover up to $100,000 USD in Google Cloud credits in your first year.

One feature I’d like to highlight, which you get out of the box in the Cloud but which takes some effort to set up yourself when running your own dedicated solution, is automated backups of your persistent (database) disks. You can provision your cluster (as we will see later) with a disk policy attached to your persistent volumes that takes (hourly) incremental snapshots. So from the get-go you are safeguarded against data corruption, storage failures and coding/human mistakes like dropping the wrong collection (or, for maximum damage, the entire database).
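
As a rough sketch of what such a policy looks like with the gcloud CLI (the policy name, disk name, region and retention below are my own assumptions, not Bedrock defaults; Bedrock provisions this via Terraform):

$ gcloud compute resource-policies create snapshot-schedule hourly-snapshots \
    --region=europe-west4 \
    --hourly-schedule=1 \
    --start-time=00:00 \
    --max-retention-days=7

# Attach the policy to the persistent disk backing MongoDB (disk name assumed)
$ gcloud compute disks add-resource-policies mongo-disk \
    --resource-policies=hourly-snapshots \
    --zone=europe-west4-a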

Before diving into the tools to set up your own GKE cluster, I’d briefly like to mention one other platform that is often used to build MVPs: the Platform as a Service (PaaS) Heroku. While it is super easy to get started with Heroku and get up and running quickly, you will also quickly ramp up costs when you need more compute resources (and there no longer is a free tier). You can create your own estimates here, but I ended up with an estimate that is at least 5 times higher than for the same resources in Google Cloud. Besides the cost argument, I believe it is worth your time to learn k8s and GKE (even if you are a complete beginner) rather than completely abstracting away your infrastructure and limiting your flexibility and scalability in the future.

Next up, I’ll provide a project template to set up your own GKE cluster with Bedrock.io (Terraform + Node.js + MongoDB).


Project Template (Bedrock.io)

For creating your MVP and running it on GKE I’d like to introduce Bedrock.io, a platform template that was open-sourced over two years ago. Disclaimer: I’m one of the contributors, and it is the result of over a decade of iterating on projects and platforms. This makes Bedrock a battle-tested collection of components, automation and patterns that allows you to rapidly build modern software solutions, tying together Node.js, MongoDB & React.

I would say Bedrock is ideally suited for startups and a strong foundation to build on. On the bedrock.io website you can find a great post on how to deploy a production-ready Kubernetes Node+React platform in under 15 minutes. I will recap some of those steps here, but also introduce some changes (that we are still planning to land in Bedrock). For your convenience I also included the Bedrock “From 0 to production” video (by Dominiek) below, which is slightly outdated (from 2 years ago) but still nicely captures all the steps involved:

All you need to get started is getting the bedrock-cli and creating your own bedrock project as follows:

$ curl -s https://install.bedrock.io | bash
$ bedrock create

Your bedrock project will be a Mono Repo that includes the following parts:

  • deployment/ - K8s (GKE) & Terraform deployment automation and playbooks.
  • services/api - A Node.js API, enabled with authentication middleware, OpenAPI, Mongoose ORM and other best practices.
  • services/web - A React Single Page App (SPA) that can interact with that API. Includes React Router, authentication screens, placeholder, API portal, dashboard and a repository of components and helper functions.
  • Documentation for all aspects of your new platform (Github markdown)
  • CI system. If besides Continuous Integration (CI) you also want to enable Continuous Deployment (CD), then you can extend the default Github actions with a GKE deploy workflow.

Currently the Bedrock template is designed to set up a separate GCP project (and hence a separate GKE cluster) for each environment (e.g., staging and production). This is great for isolating the environments from each other, but for our startup we’d like to start with only one cluster to keep the costs down (and avoid the cluster management fee; remember that your first GKE cluster is free of the $0.10 an hour charge). In the next sections I’ll walk you through the following changes you have to make in order to support multiple environments on one cluster:

  1. Namespaces: Production, Staging, Data and Infra namespaces with limits and resource quotas.
  2. Gateway: Using a Gateway with HTTP routes instead of Ingresses with VPC native loadbalancing.
  3. Environment configuration: Updating the config.json for each environment, including the GCR image prefix and cluster details.

We plan to add a startup starter template to Bedrock itself too (as an option during CLI project creation), but until then, I created an example project (repo) for you as a reference. You can check out the project by running the following, and replacing all mentions of the project name beatlevic across the repo with the name of your own GCP project:

$ git clone https://github.com/beatlevic/bedrock.git

Or you can bedrock create a new project and make the following changes yourself.

Namespaces

Kubernetes namespaces are a way to logically partition a Kubernetes cluster into multiple virtual clusters. Each namespace provides a separate scope for the resources in the cluster, including pods, services, and other objects. Namespaces can be used for a variety of purposes, such as:

  • Resource isolation: Namespaces can be used to isolate resources between different teams or projects. In our case we will use separate namespaces for each environment, but also for shared data and infra namespaces.
  • Access control: Namespaces can be used to control access to resources in the cluster. For example, you can use namespaces to restrict access to certain pods or services based on user or role.
  • Resource quotas: Namespaces can be used to set resource quotas for the resources in the cluster. This can help prevent one team or project from monopolizing resources in the cluster.
  • Multitenancy: Namespaces can be used to support multitenancy in the cluster. For example, you can create a separate namespace for each tenant or customer, which can be very important for a SaaS startup.

Kubernetes includes a default namespace that is used if no other namespace is specified. You can create additional namespaces using the Kubernetes API or the command line interface. In the Bedrock k8s template you will find four namespace definitions that are created when you bootstrap the cluster. Resources in K8s are defined in YAML files, like the following definition for the production namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: production
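
The limits and resource quotas mentioned earlier can be enforced per namespace with a ResourceQuota. A minimal sketch for the staging namespace (the values are illustrative assumptions, not Bedrock defaults):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 2Gi
    limits.cpu: "2"
    limits.memory: 4Gi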

To make sure all the production resources are running in the production namespace, you need to specify the namespace for each resource in the metadata.namespace field. For example the API deployment for production starts with the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: production # <= Added environment namespace
## ...

We can now also share a single Database (MongoDB) deployment between environments (each with their own database, i.e., bedrock_staging and bedrock_production) by running a Mongo deployment and Mongo service in the data namespace. We can then access the Mongo service from other namespaces (staging and production) by appending .data as a suffix to the service name. So the MONGO_URI in the api-deployment environment variables becomes: mongodb://mongo.data:27017/bedrock_production.
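
As a sketch, the relevant part of the api deployment manifest would then look something like this (the exact structure in the Bedrock template may differ):

# Fragment of the production api deployment spec
containers:
  - name: api
    env:
      - name: MONGO_URI
        value: mongodb://mongo.data:27017/bedrock_production # <= bedrock_staging for staging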

Finally the infra namespace will contain infrastructure resources, i.e., Gateway and HTTPRoutes, for load balancing and making our services accessible to the public internet.

Gateway and HTTPRoutes

By default, Bedrock uses GKE Ingresses for routing HTTP(S) traffic to applications running in a cluster. We could create separate Ingresses for each environment, but each Ingress comes with its own load balancer that we have to pay for ($20 a month). Running a single Ingress, however, will not work, because by default an Ingress does not have cross-namespace support. Luckily we can nowadays use a Gateway instead, which evolves the Ingress resource and does have cross-namespace support (and other improvements that I won’t go into here).

To use a Kubernetes Gateway, we first need to define the Gateway object in our Kubernetes configuration. You can then define an HTTPRoute object to specify how incoming traffic should be routed to your services.

The Gateway resource is defined as follows (in the infra namespace):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: external-http
  namespace: infra  # <= Gateway is deployed in infra namespace
spec:
  gatewayClassName: gke-l7-gxlb # <= Global external HTTP(S) load balancer
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
  addresses:
    - type: NamedAddress
      value: external-gateway # <= Name of GCP global external IP address, defined with Terraform

Then within each namespace you create HTTPRoutes that bind to the Gateway to route traffic from, define which Services to route to, and specify rules that define what traffic the HTTPRoute matches. Take for example the following bedrock-api HTTPRoute that routes traffic from bedrock-api.beatlevic.dev to the api service in the staging namespace:

kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: bedrock-api
  namespace: staging # <= HTTPRoute and api are deployed in the staging namespace
spec:
  parentRefs:
    - name: external-http
      namespace: infra # <= Gateway is deployed in the infra namespace
  hostnames:
    - bedrock-api.beatlevic.dev
  rules:
    - backendRefs:
        - name: api
          port: 80
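
Once the Gateway and HTTPRoutes are applied, you can verify with kubectl that the Gateway received its external address and that the route was accepted:

$ kubectl get gateway external-http -n infra
$ kubectl describe httproute bedrock-api -n staging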

Environment Configuration

One final (small) change we need to make is to the environment configuration of the staging environment. Each environment has its own config.json file to specify the GCP project and GKE cluster name. We just need to use the same values here as we use in the production config, with one additional variable: setting the gcrPrefix to staging-. The reason for this is that when we build and deploy a service using the bedrock CLI (bedrock cloud deploy production api), it by default pushes to the Google Container Registry (GCR) and derives the registry for the service as follows: gcr.io/<PROJECT>/<REPO>-services-<SERVICE>, e.g., gcr.io/beatlevic/bedrock-services-api for the API service. We’d like to build separate images for production and staging, so the gcrPrefix adds an additional prefix to the registry name, e.g., gcr.io/beatlevic/staging-bedrock-services-api for the staging API Docker image.

Staging config.json:

{
  "gcloud": {
    "envName": "staging",
    "project": "beatlevic",
    "gcrPrefix": "staging-",
    "dropDeploymentPostfix": true,
    "computeZone": "europe-west4-a",
    "kubernetes": {
      "clusterName": "cluster-1"
    },
    "label": "app"
  }
}
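
With this configuration in place, deploying the staging API should then roughly look like this (building and pushing the prefixed image described above):

$ bedrock cloud deploy staging api
# Builds and pushes gcr.io/beatlevic/staging-bedrock-services-api
# and rolls out the api deployment in the staging namespace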

Deployment

Bootstrap your Project

Bootstrapping your project only requires you to follow 3 steps:

  1. Create your own GCP project. Or you can use an existing one. In our example we use project beatlevic.
  2. Clone and modify the example repo to your liking, or start with creating a bedrock project on the command line with bedrock create, and adding the namespace, gateway and environment configuration changes.
  3. Provision and bootstrap your cluster, services and other resources with one single command:
$ bedrock cloud bootstrap production <gcloud-project-name>

GKE Workloads

When the bootstrap command has run successfully and you have also deployed the staging services (bedrock cloud deploy staging), you should see the following workloads in the gcloud console (or by running kubectl get pods):

Bedrock GKE workloads

You can see three services (api, api-cli and web) for each environment, plus mongo (and mongo-backup, for daily MongoDB dumps to bucket storage) in the data namespace.
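
Note that because the workloads are spread across namespaces, kubectl needs to be told which namespace to list (or to list them all):

$ kubectl get pods --all-namespaces
$ kubectl get pods -n production
$ kubectl get pods -n data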

Cloudflare DNS

We prefer to use Cloudflare DNS to proxy traffic from your domain to the Gateway IP. You can get your Gateway’s global IP by running:

$ gcloud compute addresses list
NAME              ADDRESS/RANGE   TYPE      PURPOSE  NETWORK  REGION  SUBNET  STATUS
external-gateway  34.110.231.132  EXTERNAL                                    IN_USE

Next, you enter the IP for each of the (sub)domains that you have; these need to be the same hostnames that you use in your HTTPRoutes. So in my case, for the beatlevic.dev domain:

Cloudflare DNS

You can check if the api is running successfully by requesting or opening the api endpoint in your browser:

$ curl http://bedrock-api.beatlevic.dev
{
  "environment": "staging",
  "version": "0.0.1",
  "openapiPath": "/openapi.json",
  "servedAt": "2023-03-20T09:49:51.520Z"
}

Bedrock Dashboard

With everything running you can now check the web service (dashboard) by opening bedrock.beatlevic.dev. There you should be greeted with a login/signup form.

Bedrock login screen


You can log in with the user/password combination defined in the api-cli deployment (note: I changed the credentials for my own deployment), which sets up fixtures (including users) on its first run.

Bedrock products screen

The Bedrock blog post goes a bit deeper into how you can make UI and API changes, and how to run things locally. However, it doesn’t yet cover the generator and scaffolding features (for both code and documentation) that we introduced over time, which deserve another blog post.


Cost breakdown

Finally, I’ll conclude with the cost breakdown of running a single production-ready GKE cluster for multiple environments. The production config.json defines the cluster size and machine type that will be passed into the Terraform templates to provision your GKE cluster. I opted for machine type n2d-standard-2, which provides 2 CPUs and 8 GB of RAM and costs $54 per month in the europe-west region, including a sustained usage discount. It’s not only the compute resources you have to pay for, but also the load balancers, traffic, reserved IPs and storage. GCP billing reports provide a great and detailed overview of your costs, so let’s have a look at the past 30 days:

Daily cost breakdown for the past 30 days

As you can see the total is €82.64, which translates to approximately $90 a month.

You could bring this number down even further by using half the compute resources and selecting, for example, machine type n1-standard-1, reducing the costs by $28 and bringing the total down to $62. If that works for you initially, then you can always increase the resources later (and the cluster will already add more nodes when you start requesting more resources), but I prefer to start with nodes that have a bit more memory, as we are also running a (shared) MongoDB instance. Running in a European region is also slightly more expensive than in the US, so you can save a couple of percent there.

You might be tempted to run on Spot instances, which are even cheaper, but those are only for batch jobs and fault-tolerant workloads, and you want some guarantees that your production system is available. It can however be viable to expand with node pools running on Spot instances if you require more batchable compute resources over time.
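
If you do end up needing that extra batchable capacity, a separate Spot node pool can be added to the existing cluster along these lines (the pool name, size and autoscaling limits are assumptions):

$ gcloud container node-pools create batch-pool \
    --cluster=cluster-1 \
    --zone=europe-west4-a \
    --machine-type=n2d-standard-2 \
    --spot \
    --num-nodes=0 \
    --enable-autoscaling --min-nodes=0 --max-nodes=3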

But there you have it. Running your Startup MVP on Kubernetes for $90 per month, a solid foundation, ready to scale and take over the world!


Resources

  1. Bedrock CLI: github.com/bedrockio/bedrock-cli
  2. Project template: github.com/beatlevic/bedrock

Goodbye breakfast: 6 months of Intermittent Fasting

Tuesday, 24 November 2020

Intermittent Fasting 16/8

DISCLAIMER: I’m not an MD, so please read this blog post only as an interesting starting point for your own research and always check with your own doctor or dietician if you want to try this at home. You are responsible for your own health.


For the past 6 months I have been doing intermittent fasting (IF) by eating daily only during an 8 hour window: between noon and 8pm. On top of that I had three water-only fasts where I didn’t eat anything for multiple days (4-5) in a row.

That’s madness you might say! Why would you starve yourself? Breakfast is the most important meal of the day and you are skipping it!

Well, I currently believe that it would be madness NOT to fast, and have both scientific and 6 months of anecdotal evidence to back that up. When I was just a few weeks into intermittent fasting, I was already so positively surprised by the initial results that I wanted to tell everybody about my “discovery”, especially because I believed I could also explain why and how it works after reading into the physiology and research behind it. I decided to first see if I could stick with it for a couple of months and then write about my experiences. So here I am, 6 months later, ready to tell you all about my journey and “why” intermittent fasting is so interesting.

Benefits

Before we dive in, what’s in it for you? What kind of benefits are we talking about? There are the following immediate known and lasting benefits that I experienced:

  • Weight loss
  • Higher levels of energy
  • Feeling stronger (due to increase in human growth hormone)
  • Better focus
  • Decrease in hay fever symptoms (to be fair, I could have been lucky with a mild season)

And there are (potential) long term benefits (1, 2, 3):

  • Cellular repair (Autophagy)
  • Decreased insulin resistance
  • Decreased incidence of diseases, including cancers, obesity, neurological disorders and cardiovascular disease.
  • Increased stress resistance
  • Increased longevity and quality of life

Fasting sounds like a miracle drug doesn’t it? You don’t even have to pay money for it! That’s also probably why you won’t see any fasting ads on your timeline or tv commercials (e.g., “Stop buying our cornflakes and just skip breakfast now!”). It is essentially free and available for you to try out.

Without further ado, let’s explore intermittent fasting and why it works.

Intermittent Fasting

People have been actively fasting, i.e., having periods of consciously not eating, since ancient times (4), and it has, unwillingly, been part of the eating pattern of our ancestors when food wasn’t always around (e.g. hunting on an empty stomach), although strictly speaking it is called starvation if you don’t know when you will get your next meal. It just shows that our bodies have evolved to handle feast and famine. It’s being exposed to stress, variability, volatility and randomness (up to a point and not continuously) that makes us stronger (i.e., antifragile).

Recently, intermittent fasting has become a more popular form of fasting. It can be defined as an eating pattern in which you cycle between periods of eating and fasting, where you stretch each fasting period long enough to force your body to switch from burning glucose (sugar) and glycogen (stored sugar) to burning fat. This is called metabolic flexibility, where your body makes use of whatever fuel is available. As a bonus, it seems that ketosis (i.e., the metabolic state of running on fat for fuel) is the main driver for burning fat in the abdominal region: belly fat!

So how long do you have to NOT eat to switch to fat burning? Apparently, restricting energy intake for 10 to 14 hours results in depletion of liver glycogen stores (1, 5), after which fat, in the form of fatty acids, is freed to form ketones that fuel your body (as opposed to glucose). The more fat adapted you are, the quicker your body will switch to fat burning, something you become more adapted to as a result of prolonged intermittent fasting.

Given the required minimum of 10 to 14 hours of fasting to start producing ketones, you have different patterns for intermittent fasting you could follow:

  • 16/8: A daily window of 8 hours, often from noon to 8pm (so no breakfast), for eating and 16 hours of fasting (during the night and morning).
  • 5:2: 5 days eating, 2 days fasting.
  • Alternate day: Alternate days of eating and fasting
  • One Meal a Day (OMAD): Sticking to one meal a day, often dinner, and fasting the rest of the day.

I chose 16/8, because it fits nicely with having kids that are not on a fasting schedule (nor should they ever be when they are young and still growing), having lunch and dinner together. I also like the consistency of following the same schedule every day, apart from sporadic multi-day periods of water-only fasts (more on that later).

Aren’t you also burning up your muscles during fasting? Nope. Your body is naturally preserving your muscles by increasing human growth hormone (HGH), which also helps building muscles after the fasting period as HGH levels remain high.

So do all the benefits come from fat burning and the increase in human growth hormone? Actually, those account for only part of the benefits. The third and arguably the most interesting process during fasting is called autophagy, which literally translates to “self eating”, an apt description for the cellular repair and rejuvenation that will happen in your body.

Autophagy

Your body continuously needs amino acids, the building blocks for new cells, and when you are not eating you are not taking in new amino acids (proteins). The body already recycles your old and damaged cells to harvest these building blocks, but during fasting it has to work harder to get enough of this material. It does this by ramping up your immune system to “scavenge” all the nooks and crannies of your body for cells to break down. Cells that would otherwise be “good enough yet mediocre” are now also recycled.

This is the only process known to rejuvenate neural pathways as you get older, and it helps safeguard and protect you against neurological and auto-immune disorders (2).

The importance of autophagy has also been clearly demonstrated by Japanese cell biologist Yoshinori Ohsumi, who won, in 2016, the Nobel Prize in Medicine for his research on this very topic, showing how autophagy helps slow down the aging process (6).

Key concepts

Now that we have covered what intermittent fasting is, how it works and how it benefits you, going over some of the key concepts (the metabolic switch to fat burning, the increase in human growth hormone and autophagy), I’d like to move on to sharing my experience of putting intermittent fasting into practice.

My 6 Month Journey

During the first Coronavirus lockdown in April (in the Netherlands), I spent most of my time homeschooling my three kids and working for rekall.ai, while neglecting sporting activities and not eating healthily on a consistent basis (e.g., more snacks). So when the kids were allowed to go back to school again in May, I stepped on the scale and found myself nearing 100 kg. For me, being 1.98 m tall (6’6”), that meant I was borderline overweight according to my BMI (>25). I had never seen myself weigh more than 100 kg (220 lb) and didn’t want to see that happen, so it was time for action!

I set a weight goal to lose 8 kg in 6 weeks and weigh no more than 90 kg (200 lb) on my birthday (June 26th). In order to get there I wanted to follow a low-carb Paleo diet (caveman diet), which I had followed 10 years prior with great results. While doing some online research and catching up on YouTube with low-carb and keto diets, I stumbled upon intermittent fasting videos (7, 8, 9). As you know by reading this far, the benefits of IF sounded amazing, so I decided, under the medical supervision of my wife, who is an actual MD, to go all in.

Weight loss results

In the following annotated graph you can view my weight over the course of the past 6 months. I’ll provide you with more context in the next sections.

First 2 weeks

I started May 13th weighing 97.9 kg (A). To keep track of my eating window I set two alarms, one at 12.30pm labeled ‘lunch’ and the other at 8pm ‘no more eating’. For my exercise routine I started to play tennis on Monday mornings, and I tried to run 6-7 km twice a week.

I switched to a low-carb diet (Paleo): eating more meat, salads, fruits (primarily berries), vegetables and nuts. No longer eating bread, pasta, rice and oatmeal.

After one week I had already lost 2 kg, and another 1 kg after the second week. I found it very easy to stick to the 8-hour eating window and I was not experiencing hunger sensations in the morning or late evenings. Probably because I was already used to skipping breakfast quite often, and because a low-carb diet also helps lower your insulin spikes and cravings for more sugar. With lower insulin levels, as a result of lower overall blood sugar, you are also quicker to switch to fat burning!

Water-only Fasting

With this great start, I was feeling bullish about the changes and progress I had made, but I wanted to push fasting a bit harder. So I decided to try water-only fasting, i.e., eating nothing for a couple of days and only consuming water and some minerals (salt for electrolytes). In theory, your body should simply switch to fat burning after 12 hours, increase your level of human growth hormone and increase your adrenaline and metabolism.

So what about water-only fasting in practice? If you had asked me a year ago, I would have guessed you would continuously feel very hungry and tired. Now I can tell you from experience that it is nothing like that, and that I continued to have plenty of energy throughout the 5 days that I fasted (B-C). Yes, you will feel a bit hungry around the times you would normally eat, but that feeling passes quickly. I believe being on IF together with a low-carb diet for two weeks helped me adjust quickly to fasting over a longer time period.

Tip: Drink lots and lots of water to stay hydrated. It also helps suppress hunger feelings, giving you a feeling of satiety. Next to water, you are also allowed to drink calorie free beverages like tea and coffee, as those won’t break your fast.

I experienced it as quite a powerful and liberating feeling that you can do without food for so long. Having said that, it is both a reminder of how privileged and comfortable we are here in the West, as well as of how being too comfortable can lead to diseases of affluence, with worldwide obesity having nearly tripled since 1975, up to 1.9 billion overweight adults in 2016. A little bit of fasting discomfort might go a long way in helping to reduce obesity.

It was also quite a joyful experience to eat again after 5 days. I broke my fast by eating a strawberry (see image below) that tasted delightful. You really start to appreciate food more with increased flavour sensations.

During the 5-day fast I lost 3 kg, of which I gained only 0.5 kg back in the days after recovering from the fast. So at this point, 3 weeks in, I had already lost a total of 6 kg, down to 92 kg.

HIIT Workout

I continued reading more about intermittent fasting and found resources (e.g., 10) pointing out that working out during your fasting window improves fat burning, and that the best type of workout is High-Intensity Interval Training (HIIT). The reason it works so well in combination with intermittent fasting is that it also stimulates human growth hormone (HGH) secretion, which, as we know by now, protects and grows your muscles. So I incorporated a brutal 20-minute HIIT ladder workout once a week into my exercise schedule.

If you don’t see yourself HIIT-ing, then know that walking is also a great exercise for fat burning that complements intermittent fasting.

Birthday goal after 6 weeks

On June 14th, almost two weeks before my self-imposed deadline, I had already hit my <90 kg (200 lb) goal, weighing 89.8 kg. Instead of calling it a day, I decided to move the goalposts and set a new target weight of <88 kg. To actually hit that target, I embarked on my second water-only fast (D-E) in the final week before my birthday. And lo and behold, on June 26th (F), 6 weeks after I started this new lifestyle at 97.9 kg, I weighed 87.7 kg (193 lb). I had lost over 10 kg (22 lb)!

Not only did I lose a lot of weight, I was also feeling stronger and more focused in general, and had higher levels of energy throughout the day: I was doing more chores around the house, writing more blog posts, had no more after-lunch dips, was no longer tired in the evening or going to bed early, and found it easier to do more push-ups and plank exercises. I also experienced more calm and tranquility in my mind, making it easier to escape thought loops. All in all, quite a list of improvements!

Next 6 weeks

Would I be able to stay at my new weight, or would I yo-yo back up? And would I stick to intermittent fasting? I took it as an ongoing goal to stay under 90 kg and decided to be less restrictive with my eating window on weekends and holidays.

In my weight graph you can see the results of a two-week camping trip in July (G-H) and a camping weekend in August (I). Yes, I would gain some weight over these periods, but each time I returned back under my 90 kg baseline. Three months in, on August 13th, I weighed 89.8 kg.

Last 3 months

For the last 3 months you can see that my weight fluctuated with low variability between 89 and 90 kg. At the end (during the second Coronavirus lockdown) I was going a little bit over 90 kg (by 1 kg), so I did my third 4-day water-only fast (K-L) to finish my 6-month stretch strongly, and ended at 89.2 kg.

While intermittent fasting on its own can provide you with all the fasting benefits, my plan going forward is to keep doing water-only fasts every 3-4 months, as they reset your body and force you to pay attention to your diet again after the fast. A great reminder to stay on track.

End of first water fast

This picture was taken right after breaking my first 5-day water-only fast, eating a delicious strawberry.

Wrap up

Of course, over the past 6 months I not only changed my pattern of eating, but also changed my diet to a low-carb Paleo diet and started exercising more. So which benefits can I directly attribute to fasting? Given I had prior experience with low-carb diets, I can say that while in both cases I experienced weight loss, I clearly feel a positive difference in energy levels and focus with intermittent fasting. Not to mention all the potential long-term benefits I set myself up for with intermittent fasting.

Fasting and all the related body physiology is such a vast topic, and it has been a challenge to condense it into this blog post without going into too much detail. Hopefully the scientific evidence I pointed out, together with my experience over the past 6 months of putting intermittent fasting into practice, gives you enough reasons to look into intermittent fasting yourself and try it out. It is low risk, but with lots of potential upside.

To be honest, the only reasons why I think you should not fast are if you have a medical condition and your doctor tells you not to, or if you don’t want to break out of our culture of eating all the time (hint: not a good reason). Changing a habit can be difficult, but just hang in there and give it a couple of weeks. And if you get tired of explaining to others why you are skipping breakfast or have not eaten for several days, just forward them this blog post.

I would very much like to hear from you about your experience with fasting, or whether this post gets you started (or not). If you have any suggestions or corrections, just let me know.

Breakfast is dead, long live intermittent fasting!

Reddit Discussion

References

  1. Effects of Intermittent Fasting on Health, Aging, and Disease, The New England Journal of Medicine
  2. Short-term fasting induces profound neuronal autophagy, Autophagy Journal
  3. 10 Evidence-Based Health Benefits of Intermittent Fasting, Healthline.com
  4. Fasting – A History Part I, The Fasting Method
  5. Fuel metabolism in starvation, Medicine, Biology - Annual review of nutrition
  6. 2016 Nobel Prize in Physiology or Medicine for discoveries of mechanisms for autophagy
  7. Video: Joe Rogan - Dr. Rhonda Patrick on the Carnivore Diet, Dr. Rhonda Patrick
  8. Video: Amazing New Study Reveals Miracle Benefits Of Fasting, Dr. Sten Ekberg
  9. Video: How to do Intermittent Fasting: Complete Guide, Thomas DeLauer
  10. Video: 5 Reasons You should ALWAYS Workout During a Fast [Burn more fat], Thomas DeLauer

Special thanks to my wife Femke Stevens for supporting my journey and Mark Voortman for proofreading!


Docker on macOS without noisy fans

Monday, 26 October 2020

Docker

TL;DR

  • Running Docker on macOS results in noisy CPU fans and low performance (build times)
  • Apple Silicon probably not the answer
  • The solution is using remote Linux Docker host (with setup instructions)

Noisy CPU fans

As a developer I have been a big fan of Apple hardware and software ever since I got my first MacBook back in 2008. macOS is a UNIX operating system running on x86 hardware (changing to ARM soon, but more on that later) that makes developing for Linux production servers a smooth experience, utilizing the same UNIX tooling. While my development stack has changed over the years and nowadays includes Docker, Kubernetes and bedrock.io, macOS still continues to serve me well. It provides these new CLI tools and templates on a stable OS with little to no issues, a nice UI with great HiDPI scaling (rocking an LG 5K monitor over here), and of course I’ve become so used to the interface that switching back to Windows or a Linux desktop would mess up my optimized workflows and habits.

One thing, however, that could use improvement is Docker Desktop (for Mac): increasing Docker container performance, speeding up docker build times and reducing CPU usage when running “idle”. The reason it is slow and performance hungry is succinctly explained in this Stack Overflow answer:

Docker needs a plain Linux kernel to run. Unfortunately, Mac OS and Windows cannot provide this. Therefore, there is a client on Mac OS to run Docker. In addition to this, there is an abstraction layer between Mac OS kernel and applications (Docker containers) and the filesystems are not the same.

In other words, there is a lot of extra (hyper)virtualization and filesystem overhead going on. Of course a relative performance hit (compared to running on Linux directly) is something I could live with, but my main annoyance presents itself loudly when I’m building (and deploying) a lot of Docker images: it spins up my MacBook Pro’s CPU fans to audible levels, in stark contrast to the absolute silence at which my laptop runs almost everything else (I guess I’m spoiled).

While you could argue that you should not build and push Docker images to staging or production from your local development machine and should just incorporate it into your CI/CD pipeline, I prefer having this manual control, which is also nicely supported by the bedrock.io build and deploy commands. This point probably warrants a whole dedicated blog post, but if you are like me, and also run a lot of docker build commands locally, then you will be interested in the solution I found to my main problem: the lack of macOS Docker performance (long build times) and noisy CPU fans.

Apple Silicon is not the solution

Being an Apple enthusiast myself, you can imagine why I’m excited about the upcoming transition from Intel CPUs to Apple Silicon, which is planned to make its debut in the MacBook Pro and iMac lineup later this year (rumors point to a November event). So at first I thought that the answer to the lackluster Docker performance on the Mac would simply be waiting for the new ARM hardware release and buying a new MacBook Pro. However, this might not hold true, as Apple Silicon will be an ARM SoC, which is a different architecture than x86_64. And x86_64 is still my deployment target for the foreseeable future. In other words, I still need to build x86_64 images, and this will require emulation and virtualization that some expect to carry a 2x to 5x performance hit.

It is also worth noting that Craig Federighi, Apple’s senior vice president of Software Engineering, said the following during an interview after the initial WWDC Apple Silicon announcement:

“Virtualization on the new Macs won’t support X86 at all” (source)

Craig even explicitly called out Docker containers being built for ARM and being able to run them on ARM instances in AWS, but what about building your x86 images? I know you can now build for multiple architectures, including ARM, with Docker Desktop (post), but what will the performance hit be on Apple Silicon?

All in all, upcoming Apple Silicon doesn’t seem like an immediate win for my specific problem with Docker performance. Also, I like to skip 1st gen hardware and wait on the sidelines a bit longer before I migrate my production workflows to new hardware running on ARM.

Luckily I did find a solution that serves me right away and might also help with the transition to Apple Silicon in the future. Docker remote to the rescue!

Remote Docker host Solution

Instead of beefing up my local development machine now or with upcoming Apple ARM hardware, I looked into off-loading all the Docker activities onto a remote machine, and as it turns out, it is trivially easy to set up access to a remote Docker host. I kinda knew about this from back when I was using Docker Swarm and connecting to different machines, but I somehow never thought about it as a setup for my “local” Docker environment.

Needless to say, one does not set up remote access without a remote machine, so I’m assuming you have a remote Linux machine running somewhere. I personally use and recommend dedicated or cloud servers from Hetzner, that is, if you are located in Europe. I’m using the AX51-NVMe Dedicated Root Server (Ryzen 7 3700X CPU, 64GB RAM, 1TB NVMe SSD), which seems comically cheap at only 59 euros a month. Anyway, let’s get rolling.

1. Install Docker Engine on remote host

First, let’s install Docker on your remote Linux server (I’m using the Ubuntu install instructions, but you can find instructions for other distros on that page as well):

 ## Update the apt package index and install packages
 ## to allow apt to use a repository over HTTPS:
 $ sudo apt-get update

 $ sudo apt-get install \
     apt-transport-https \
     ca-certificates \
     curl \
     gnupg-agent \
     software-properties-common

 ## Add Docker’s official GPG key:
 $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

 ## Use the following command to set up the stable repository
 $ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

 ## Install docker engine
 $ sudo apt-get update
 $ sudo apt-get install docker-ce docker-ce-cli containerd.io
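
You can verify the installation on the remote host with the usual hello-world container:

 ## Verify the Docker Engine installation
 $ sudo docker run hello-world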

2. Create Docker Context

Next up, we need to access the remote Docker host and make it our default “local” engine. For this we leverage Docker Contexts in the following way. First we create a context that will hold the connection path (e.g. ssh://coen@whatever.com) to the remote Docker host:

 $ docker context create remote --docker host=ssh://coen@whatever.com
 remote
 Successfully created context “remote”

 $ docker context ls
 NAME       TYPE    DESCRIPTION              DOCKER ENDPOINT    KUBERNETES ENDPOINT    ORCHESTRATOR
 default *  moby    Current DOCKER_HOST...   unix:///var/run/docker.sock               swarm
 remote     moby                             ssh://coen@whatever.com

And the final step is to make remote your default context:

 $ docker context use remote
 remote
 Current context is now “remote”

3. Using your remote Docker host

With the previous 2 steps you are ready to start enjoying a silent local development machine and faster build times when building Docker images! Any time you now run docker build, all the relevant (and changed) files that are required for the build are copied from your local computer to the remote Docker host in the background, and the build of the image starts on the remote host. The resulting image is also located on the remote host, which you can list with docker images, as all docker commands use the default remote context.
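
You can also target a context explicitly for a single command without switching the default, which is handy if you occasionally still want to talk to the local engine:

 ## Run one-off commands against a specific context
 $ docker --context default ps
 $ docker --context remote images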

Benefits

To show an example of the Docker build speed improvements, let’s have a look at the following time measurements for a clean build of bedrock’s web service component, a React SPA built with Webpack:

  +---------+-------------------+--------------------+------------+---------------+
  | Context | Yarn install time | Webpack build time | Total time | Relative time |
  +---------+-------------------+--------------------+------------+---------------+
  | Local   | 75.82s            | 82.05s             | 157.87s    | 296% (~3x)    |
  +---------+-------------------+--------------------+------------+---------------+
  | Remote  | 18.19s            | 35.11s             |  53.30s    | 100% (1x)     |
  +---------+-------------------+--------------------+------------+---------------+

Of course a 3x improvement doesn’t come as a surprise when you are basically comparing a MacBook Pro with 2 (virtual) CPUs allocated to Docker against an 8-core dedicated Linux machine (with hyperthreading and higher internet bandwidth), even though not everything runs multi-threaded. But it is a big improvement that you can quickly and easily get for yourself, and the benefits quickly add up over time. Not to mention the complete silence with which you gain the faster build times.

As a bonus, you can now stop running Docker Desktop entirely, and no longer see a Docker process “idling” in the background that would otherwise still be using around 20% CPU all the time (doing nothing).

Being able to build X86_64 images on a remote machine, incidentally also makes it more viable to migrate my development environment to Apple Silicon in the future, not depending on Apple ARM to build and run my X86_64 images.

Finally, of course you can also docker run containers on your remote host. For example running MongoDB:

 $ docker run --name mongo -d -p 27017:27017 -v /root/data:/data/db mongo:4.2.8
 # Note: /root/data is located on your remote machine

In order to access and use the running MongoDB container locally, make sure to SSH tunnel to your remote machine and forward the relevant ports. For this you can update your SSH config file (.ssh/config):

Host coentunnel
  HostName whatever.com
  User coen
  ForwardAgent yes
  ServerAliveInterval 60
  ServerAliveCountMax 10
  LocalForward 27017 localhost:27017 # Mongo
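
With that entry in place, opening the tunnel (with port forwarding, but without running a remote command) is a single command that you can leave running in a separate terminal:

 $ ssh -N coentunnel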

A big word of caution here: make sure you firewall your ports properly on your remote server, because Docker is known to bypass ufw firewall rules!
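
One simple way to reduce the exposure, as a sketch, is to publish container ports on the remote host’s loopback interface only; the SSH tunnel above will still reach them, but the ports are never opened on the public interface:

 ## Bind the published port to localhost on the remote host
 $ docker run --name mongo -d -p 127.0.0.1:27017:27017 -v /root/data:/data/db mongo:4.2.8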

Downsides

Using a remote Docker host works really well for me, but there are a couple of downsides to this approach:

The first one is “added complexity & dependency”, because you have now added an extra machine to your development environment that you need to keep up-to-date and secure. Especially if you also use it to docker run containers for local development.

Secondly, if you use docker-compose for local development (often used to inject your live code changes), then you cannot map your local drive into the Docker containers, as the mounted volumes will point to folders on your remote machine. You can find more information about how to deploy to remote Docker hosts with docker-compose here.

Last but not least, remote servers are not free, so there is an additional cost involved. However, as pointed out earlier, Hetzner is a very cheap server provider you can use, or you can start with any other small cloud instance (e.g. AWS, Google, etc). I guess that a lot of developers actually already run a Linux box somewhere (often for testing) that they can use as their remote docker host too.

Wrap up

As an Apple enthusiast, I don’t see myself switching to a local Linux development computer anytime soon, but I was kind of annoyed with the lack of macOS Docker performance and noisy CPU fans when building images.

Having a hard time believing that the upcoming switch to Apple Silicon will alleviate this issue (still hoping for the best, but preparing for the worst), I was happy to find a solution by introducing a remote Docker host, which was trivially easy to set up and use with Docker Contexts. Sometimes a small change can have a big impact.

IMHO the benefits I described in this article more than make up for the downsides. Hopefully this may benefit you too. If you have any suggestions for improvements, then please let me know!

Jekyll and Algolia search integration

Sunday, 16 August 2020

Algolia

Recently I decided to add search functionality to my BeatleTech site (indeed, the one you are visiting right now). Not because my readers need to filter through an overwhelming number of articles (they don’t), but simply because I thought it would be a cool feature that would bring some nice interactivity to the site and spark my ambition to write more blog posts going forward.

In this article I will explain why I picked Algolia Search and how it was integrated with this Jekyll generated static site, including some interesting improvements.

While tinkering a bit with the design and layout of this site during a rainy day at a Dutch campsite, I was looking for an easy Jekyll plugin to bring search to my website. Although I have a lot of experience working directly with Elasticsearch, I didn’t want to go down the route of building and deploying everything from scratch. While certainly a nice exercise, it would take too much time and maintenance down the line, which wouldn’t be worth it (yet) given the traffic statistics. So, like I said, I was looking for something easy and quick to get up and running within a day.

After some Google search queries, I quickly stumbled upon some people recommending Algolia at the Jekyll talk discussion board.

Digging into Algolia, I found the following compelling reasons to integrate with Algolia Search:

  • Known brand. It turns out I was already quite familiar (and satisfied) with Algolia Search as a consumer of Hacker News Search which is powered by Algolia.
  • Free tier. They recently (July 1st) introduced more customer-friendly pricing with a free starting tier, as long as you show the “Search by Algolia” badge next to your search results. Pricing also looks reasonable for when I need to scale up.
  • Great documentation. Their Getting started is clear, complete and up-to-date.
  • Open source. The Jekyll Algolia Plugin for indexing Jekyll posts.
  • UI Template. Template with UI component using instantsearch.js
  • Example project. Github project jekyll-algolia-example

How to integrate Algolia and Jekyll?

To integrate with Jekyll we first need to install and run the jekyll-algolia-plugin to push the content of our Jekyll website to our Algolia index. Secondly, we need to update our HTML with templating and instantsearch.js.

1. Pushing content to your Algolia index

This is a simple three-step process, as laid out in the README of the jekyll-algolia-plugin repository. First add the jekyll-algolia gem to your Gemfile, after which you run bundle install to fetch all the dependencies:

  # Gemfile

  group :jekyll_plugins do
    gem 'jekyll-algolia', '~> 1.0'
  end

Next, add your application_id, index_name and search_only_api_key to the Jekyll _config.yml file:

  # _config.yml

  algolia:
    application_id: 'your_application_id'
    index_name: 'your_indexname'
    search_only_api_key: '2b61f303d605176b556cb377d5ef3acd'

Finally, get your private Algolia admin key (which you can find in your Algolia dashboard) and run the following to execute the indexing:

  ALGOLIA_API_KEY='your_admin_api_key' bundle exec jekyll algolia
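
Not something the plugin requires, but a small habit worth mentioning: keep the admin key out of your shell history by reading it into the environment instead of typing it inline (a minimal sketch; the plugin picks the key up from the ALGOLIA_API_KEY environment variable, just like in the command above):

  # Prompt for the admin key without echoing it, then export it for this shell session (bash)
  read -s -p "Admin API key: " ALGOLIA_API_KEY && export ALGOLIA_API_KEY
  bundle exec jekyll algolia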

2. Adding instantsearch.js to the front-end

For the front-end part I followed the excellent Algolia community tutorial. Instead of repeating all the documented steps here, I’ll only highlight the relevant changes I made.

The integration consists of two parts:

  • A search-hits-wrapper div element where we load the search results. These results are located front and center under the navigation bar (pushing the rest of the content down).
  • The instantsearch.js dependency, template configuration and styling. All of which is located in the _includes/algolia.html file, which can be viewed in full in the source code of this site.

I made the following changes compared to the community tutorial:

  • Hide search results by default (style="display:none") and don’t fire off an empty query on page load. By default, an empty query returns all articles; to prevent this I added a searchFunction to the instantsearch options:
  const search = instantsearch({
    appId: "K4MUG7LHCA",
    apiKey: "2b61f303d605176b556cb377d5ef3acd", // public read-only key
    indexName: "prod_beatletech",
    searchFunction: function (helper) {
      var searchResults = document.getElementById("search-hits-wrapper");
      if (helper.state.query === "" && searchResults.style.display === "none") {
        return;
      }
      searchResults.style.display = helper.state.query ? "block" : "none";
      helper.search();
    },
  });
  • Don’t fire off a query on every keystroke. While the default of triggering a search query on every keystroke is great in terms of responsiveness, it will also make you burn through your free 10k search requests quickly. To trade off some query responsiveness for fewer API requests, I added the following queryHook with a 500ms delay:
  let timerId; // debounce timer shared across keystrokes

  search.addWidget(
    instantsearch.widgets.searchBox({
      container: "#search-searchbar",
      placeholder: "Search into posts...",
      poweredBy: false,
      autofocus: false,
      showLoadingIndicator: true,
      queryHook(query, refine) {
        // Only run the actual search 500ms after the last keystroke
        clearTimeout(timerId);
        timerId = setTimeout(() => refine(query), 500);
      },
    })
  );
  • Show the “Search by Algolia” badge. If you want to make use of the free plan, Algolia asks in exchange that you display a “Search by Algolia” logo next to your search results. You can use the searchBox Boolean option poweredBy, or, if you want more flexibility (as I did), you can find different versions of their logo here and add it to the search-hits-wrapper div.
  • Turn off autofocus. Set the searchBox option autofocus to false if you don’t want the search input to autofocus. While I initially liked the autofocus, because the user can immediately type their search query, it turns out that mobile devices automatically zoom in on the focused search input field. So I recommend turning it off.

Experience so far

I really liked the whole integration process, which was smooth and not much work. I mean, I even went as far as to write all about it here. 🙂

One additional benefit of Algolia, which I haven’t listed yet, is gaining statistics on your site’s search queries, with weekly email updates and an interactive dashboard, helping you figure out what your readers and followers are looking for.

I am using the slightly older v2 of instantsearch.js, so at some point I will want to update to the latest version, which should decrease the JavaScript library size. Running PageSpeed Insights still gives a very comfortable 96/100 score, so there is no immediate need, but less is more when it comes to JS dependencies.

If my search query volume increases above the free 10k a month, then I’m happy to pay for this service. I do have one feature request for the Algolia team regarding the paid service: add a monthly spending limit with alerting, to make sure you won’t get a surprise overage bill.

Anyway, so far a big thumbs up for Algolia. 👍

HN Discussion

Chief Architect at Rekall.ai

Monday, 02 September 2019

For the past year I have been working as Tech Lead at Air France - KLM, in the ODS (Operations Decision Support) department. My team was responsible for the on-premise Amsterdam Data Lake, developing ETL pipelines with Spark streaming jobs in Scala, including data modeling of the source system (events) and coordinating with the Data Science team.

The on-premise Hadoop cluster was based on Hortonworks HDP, including Spark, HBase, Kafka and Hive. There was a lot of work involved in configuring and managing the on-premise cluster, and we would spend more time than we would like on infrastructure related issues. Based on my previous cloud experiences I started advocating for a push to Azure Cloud (as KLM already had Azure Active Directory and Office 365) for the Data Lake, and started working on the required Architecture design and proposal.

Even though I really enjoyed getting to know the enormous organization (with two distinct cultures) and navigating my way to the right people to make the move to the cloud happen, all the way up to CTO and VP level, I came across another opportunity outside of KLM to work in an environment that is more startup-like, more agile, and faster changing, with more direct personal impact on the course of the company.

This new opportunity, freelancing as Chief Architect at Rekall.ai to work on data-driven solutions powered by Blockchain and AI, is actually where I started today, and I’m excited to be reunited with some of my former colleagues from bottlenose.com! I’m looking forward to working with an elite team of highly experienced Developers, Designers and Biz Devs. It’s time to go full throttle!

Rekall.ai

The post-Persgroep era

Monday, 06 August 2018

Last week I finished my job as Lead Data Engineer at the Persgroep in Amsterdam, where I worked for the past year. I had a great time crafting ETL pipelines, writing Scala and Python for Spark (AWS EMR), scheduling tasks (DAGs) with Airflow, and scaling with Redshift (minus some boring, but important, GDPR work). But most of all I liked the (Kanban) team, which grew during my time from just 3 members to 15, and I enjoyed coaching and helping out the new recruits, as well as providing technical support and architectural advice to other teams.

So why leave my esteemed colleagues when everything is fine and dandy? Can Christian van Thillo really do without me? (Better known by insiders as Christiaan van de Thillo.) Well, I have this tendency to shake things up once in a while, and I felt the need to explore new startup opportunities. So here we are: I cleared my schedule for the next half year, and will be diving into Strong-AI, Genomics, Crypto and Healthcare to find interesting (and doable) business angles. First I’d like to explore the latest research and follow a couple of Coursera courses, while simultaneously hacking on some neural nets for crypto value predictions. In other words, playing around until I hit something concrete and tangible. I have a couple of friends who also have some spare time to join in on the fun, and I’m keeping an ear to the ground for any other opportunities in these domains.

So what was the first thing I did last week with my free time? I updated my BeatleTech website, the site you are actually reading right now! One of my TODOs was to serve everything over https, now that Chrome by default marks your website as not secure when it is served over http (news). This was relatively easy to fix thanks to Let’s Encrypt and Certbot. One caveat: Let’s Encrypt will verify your website over IPv6 if you have an AAAA DNS record, so make sure it points to the correct IPv6 address (mine didn’t :/ so I dropped the incorrect AAAA record).
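
For reference, the Certbot part is basically a one-liner (a minimal sketch, assuming an nginx-served site; the domain names are just placeholders, and Certbot also ships plugins for Apache and standalone setups):

  # Obtain a certificate, install it into the nginx config and redirect http to https
  sudo certbot --nginx --redirect -d example.com -d www.example.com

  # Verify that automatic renewal will work
  sudo certbot renew --dry-run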

Secondly, I was inspired by a Hacker News discussion on improving your portfolio website to include a section about my home office setup. I have spent many hours perfecting my work setup, so it was only logical to also write about it. You can find it here and in the top menu.

Finally, after cleaning up my HTML, upgrading Jekyll, minimizing assets and moving them to S3 (source), I’m concluding my stint of BeatleTech improvements with this blog post. Next up: research and play time!

Wish me luck.

Othot raises $1.7 Million

Monday, 02 May 2016

Great news for Othot, as they just secured $1.7 Million in their first round of outside financing. For more details and a shoutout to Dr. Mark Voortman, check out MercuryNews.

tags: othot, funding
