Goodbye breakfast: 6 months of Intermittent Fasting

Tuesday, 24 November 2020

Intermittent Fasting 16/8

DISCLAIMER: I’m not an MD, so please read this blog post only as an interesting starting point for your own research and always check with your own doctor or dietician if you want to try this at home. You are responsible for your own health.

For the past 6 months I have been doing intermittent fasting (IF) by eating only during a daily 8 hour window: between noon and 8pm. On top of that, I did three water-only fasts, during which I didn’t eat anything for 4-5 days in a row.

That’s madness you might say! Why would you starve yourself? Breakfast is the most important meal of the day and you are skipping it!

Well, I currently believe that it would be madness NOT to fast, and I have both scientific evidence and 6 months of anecdotal evidence to back that up. When I was just a few weeks into intermittent fasting, I was already so positively surprised by the initial results that I wanted to tell everybody about my “discovery”, especially because, after reading into the physiology and research behind it, I believed I could also explain why and how it works. I decided to first see if I could stick with it for a couple of months and then write about my experiences. So here I am, 6 months later, ready to tell you all about my journey and why intermittent fasting is so interesting.


Before we dive in, what’s in it for you? What kind of benefits are we talking about? These are the immediate and lasting benefits that I experienced:

  • Weight loss
  • Higher levels of energy
  • Feeling stronger (due to increase in human growth hormone)
  • Better focus
  • Decrease in hay fever symptoms (to be fair, I could have been lucky with a mild season)

And there are (potential) long term benefits (1, 2, 3):

  • Cellular repair (autophagy)
  • Decreased insulin resistance
  • Decreased incidence of diseases, including cancers, obesity, neurological disorders and cardiovascular disease.
  • Increased stress resistance
  • Increased longevity and quality of life

Fasting sounds like a miracle drug doesn’t it? You don’t even have to pay money for it! That’s also probably why you won’t see any fasting ads on your timeline or tv commercials (e.g., “Stop buying our cornflakes and just skip breakfast now!”). It is essentially free and available for you to try out.

Without further ado, let’s explore intermittent fasting and why it works.

Intermittent Fasting

People have been actively fasting, i.e., consciously not eating for periods of time, since ancient times (4), and fasting was also an involuntary part of our ancestors’ eating pattern when food wasn’t always around (e.g. hunting on an empty stomach), although strictly speaking you would call it starvation if you don’t know when you will get your next meal. It just shows that our bodies have evolved to handle feast and famine. It’s being exposed to stress, variability, volatility and randomness (up to a point and not continuously) that makes us stronger (i.e., antifragile).

Recently, intermittent fasting has become a more popular form of fasting. It can be defined as an eating pattern in which you cycle between periods of eating and fasting, stretching each fasting period long enough to force your body to switch from burning glucose (sugar) and glycogen (stored sugar) to burning fat. This is called metabolic flexibility: your body makes use of whatever fuel is available. As a bonus, ketosis (i.e., the metabolic state of running on fat for fuel) seems to be the main driver for fat burning in the abdominal region: belly fat!

So how long do you have to NOT eat to switch to fat burning? Apparently, restricting energy intake for 10 to 14 hours depletes liver glycogen stores (1, 5), after which fat (fatty acids) is freed to form ketones that fuel your body (as opposed to glucose). The more fat adapted you are, the quicker your body switches to fat burning, and you become more fat adapted through prolonged intermittent fasting.

Given the required minimum of 10 to 14 hours of fasting to start producing ketones, you have different patterns for intermittent fasting you could follow:

  • 16/8: A daily window of 8 hours, often from noon to 8pm (so no breakfast), for eating and 16 hours of fasting (during the night and morning).
  • 5:2: 5 days eating, 2 days fasting.
  • Alternate day: Alternate days of eating and fasting
  • One Meal a Day (OMAD): Sticking to one meal a day, often dinner, and fasting the rest of the day.

I chose 16/8 because it fits nicely with having kids that are not on a fasting schedule (nor should they ever be when they are young and still growing): we have lunch and dinner together. I also like the consistency of following the same schedule every day, apart from sporadic multi-day water-only fasts (more on that later).

Aren’t you also burning up your muscles during fasting? Nope. Your body naturally preserves your muscles by increasing human growth hormone (HGH), which also helps build muscle after the fasting period, as HGH levels remain high.

So all the benefits come from fat burning and the increase in human growth hormone? Actually, those account for only part of the benefits. The third and arguably most interesting process during fasting is called autophagy, which literally translates to “self-eating”, an apt description for the cellular repair and rejuvenation that happens in your body.


Your body continuously needs amino acids, the building blocks for new cells, and when you are not eating you are not taking in new amino acids (proteins). The body already recycles your old and damaged cells to harvest these building blocks, but during fasting it has to work harder to get enough of this material. It does this by ramping up your immune system to “scavenge” all the nooks and crannies of your body for cells to break down. Cells that would otherwise be “good enough yet mediocre” are now also recycled.

This is the only known process that rejuvenates neural pathways as you get older, safeguarding and protecting you against neurological and auto-immune disorders (2).

The importance of autophagy has also been clearly demonstrated by Japanese cell biologist Yoshinori Ohsumi, who won the 2016 Nobel Prize in Physiology or Medicine for his research on this very topic, showing how autophagy helps slow down the aging process (6).

Key concepts

We have now covered what intermittent fasting is, how it works and how it benefits you, by going over some of the key concepts: the metabolic switch to fat burning, the increase in human growth hormone, and autophagy. I’d like to move on to sharing my experience of putting intermittent fasting into practice.

My 6 Month Journey

During the first Coronavirus lockdown in April (in the Netherlands), I spent most of my time homeschooling my three kids and working, while neglecting sporting activities and not eating healthy consistently (e.g., more snacks). So when the kids were allowed to go back to school again in May, I stepped on the scale and found myself nearing 100 kg. For me, being 1.98m tall (6’6”), that meant I was borderline overweight according to my BMI (>25). I had never seen myself weigh more than 100 kg (220 lb) and didn’t want to see that happen, so it was time for action!

I set a weight goal to lose 8 kg in 6 weeks and weigh no more than 90 kg (200 lb) on my birthday (June 26th). In order to get there I wanted to follow a low-carb Paleo diet (Caveman diet), which I had followed 10 years prior with great results. While doing some online research and catching up on YouTube with low-carb and keto diets, I stumbled upon intermittent fasting videos (7, 8, 9). As you know by reading this far, the benefits of IF sounded amazing, so I decided, under the medical supervision of my wife, who is an actual MD, to go all in.

Weight loss results

In the following annotated graph you can view my weight over the course of the past 6 months. I’ll provide you with more context in the next sections.

First 2 weeks

I started May 13th weighing 97.9 kg (A). To keep track of my eating window I set two alarms, one at 12.30pm labeled ‘lunch’ and the other at 8pm ‘no more eating’. For my exercise routine I started to play tennis on Monday mornings, and I tried to run 6-7 km twice a week.

I switched to a low-carb diet (Paleo): eating more meat, salads, fruits (primarily berries), vegetables and nuts. No longer eating bread, pasta, rice and oatmeal.

After one week I had already lost 2 kg, and another 1 kg after the second week. I found it very easy to stick to the 8 hour eating window, and I was not experiencing hunger in the morning or late evenings. Probably because I was already used to skipping breakfast quite often, and because a low-carb diet also helps lower your insulin spikes and cravings for more sugar. With lower insulin levels, as a result of lower overall blood sugar, you also switch to fat burning more quickly!

Water-only Fasting

With this great start, I was feeling bullish about the changes and progression I had made, but I wanted to push fasting a bit harder. So I decided to try water-only fasting, i.e., eating nothing for a couple of days and only consuming water and some minerals (salt for electrolytes). In theory, your body should just switch to fat burning after 12 hours, increase your level of human growth hormone and increase your adrenaline and metabolism.

So what about water-only fasting in practice? If you had asked me a year ago, I would have guessed you would continuously feel very hungry and tired. Now I can tell you from experience that it is nothing like that, and that I had plenty of energy throughout the 5 days that I fasted (B-C). Yes, you will feel a bit hungry around the times you would normally eat, but that feeling passes quickly. I believe being on IF together with a low-carb diet for two weeks helped me adjust quickly to fasting over a longer time period.

Tip: Drink lots and lots of water to stay hydrated. It also helps suppress hunger, giving you a feeling of satiety. Next to water, you are also allowed to drink calorie-free beverages like tea and coffee, as those won’t break your fast.

I experienced it as quite a powerful and liberating feeling that you can do without food for so long. Having said that, it is both a reminder of how privileged and comfortable we are here in the West, and of how being too comfortable can lead to diseases of affluence: worldwide obesity has nearly tripled since 1975, up to 1.9 billion overweight adults in 2016. A little bit of fasting discomfort might go a long way in helping to reduce obesity.

It was also quite a joyful experience to eat again after 5 days. I broke my fast by eating a strawberry (see image below) that tasted delightful. You really start to appreciate food more with increased flavour sensations.

During the 5 day fast, I lost 3 kg, of which I gained only 0.5 kg back in the days after the fast. So at this point, 3 weeks in, I had already lost a total of 6 kg, down to 92 kg.

HIIT Workout

I continued reading more into intermittent fasting and found resources (e.g., 10) pointing out that working out during your fasting window improves fat burning, and that the best type of workout is High-Intensity Interval Training (HIIT). The reason it works so well in combination with intermittent fasting is that it also stimulates human growth hormone (HGH) secretion, which, as we know by now, protects and grows your muscles. So I incorporated a brutal 20 minute HIIT ladder workout once a week into my exercise schedule.

If you don’t see yourself HIIT-ing, then know that walking is also a great exercise for fat burning that complements intermittent fasting.

Birthday goal after 6 weeks

On June 14th, almost two weeks before my self-imposed deadline, I had already hit my <90 kg (200 lb) goal, weighing 89.8 kg. Instead of calling it a day, I decided to move the goalposts and set a new target weight of <88 kg. In order to actually hit that target, I embarked on my second water-only fast (D-E) in the final week before my birthday. And lo and behold, on June 26th (F), 6 weeks after I started this new lifestyle at a weight of 97.9 kg, I weighed 87.7 kg (193 lb). I had lost over 10 kg (22 lb)!

Not only did I lose a lot of weight, I was also feeling stronger and more focused in general, and had higher levels of energy throughout the day: I was doing more chores around the house, writing more blog posts, had no more after-lunch dips, was no longer tired in the evening or going to bed early, and found it easier to do more push-ups and plank exercises. I also experienced more calm and tranquility in my mind, making it easier to escape thought loops. All in all, quite a list of improvements!

Next 6 weeks

Would I be able to stay at my new weight, or would I yo-yo back up? And would I stick to intermittent fasting? I took it as an ongoing goal to stay under 90 kg and decided to be less restrictive with my eating window on weekends and holidays.

In my weight graph you can see the results from a two week camping trip in July (G-H) and a camping weekend in August (I). Yes, I would gain some weight over these periods, but every time I would return back under my 90 kg baseline. 3 months in, on August 13th, I weighed 89.8 kg.

Last 3 months

For the last 3 months you can see that my weight fluctuated with low variability between 89 and 90 kg. At the end (during the second Coronavirus lockdown) I went a little bit over 90 kg (by 1 kg), so I did my third water-only fast, this time 4 days (K-L), to finish my 6 month stretch strongly, ending at 89.2 kg.

While intermittent fasting on its own can provide you with all the fasting benefits, my plan going forward is to keep doing water-only fasts every 3-4 months as it resets your body and forces you to pay attention to your diet again after the fast. A great reminder to stay on track.

End of first water fast

This picture was taken right after breaking my first 5 day water-only fast, eating a delicious strawberry.

Wrap up

Of course, over the past 6 months I not only changed my pattern of eating, but also switched to a low-carb Paleo diet and started exercising more. So which benefits can I directly attribute to fasting? Given my prior experience with low-carb diets, I can say that while I experienced weight loss in both cases, I clearly feel a positive difference in energy levels and focus with intermittent fasting. Not to mention all the potential long-term benefits I set myself up for with intermittent fasting.

Fasting and all the related body physiology is such a vast topic, and it has been a challenge to condense it into this blog post without going into too much detail. Hopefully, the scientific evidence I pointed out, together with my experience over the past 6 months of putting intermittent fasting into practice, gives you enough reasons to look into intermittent fasting yourself and try it out. It is low risk, with lots of potential upside.

To be honest, the only reasons why I think you should not fast are if you have a medical condition and your doctor tells you not to, or if you don’t want to break out of our culture of eating all the time (hint: not a good reason). Changing a habit can be difficult, but just hang in there and give it a couple of weeks. And if you get tired of explaining to others why you are skipping breakfast or have not eaten for several days, just forward them this blog post.

I would very much like to hear from you about your experience with fasting, or whether this post gets you started (or not). If you have any suggestions or corrections, just let me know.

Breakfast is dead, long live intermittent fasting!

Reddit Discussion


  1. Effects of Intermittent Fasting on Health, Aging, and Disease, The New England Journal of Medicine
  2. Short-term fasting induces profound neuronal autophagy, Autophagy Journal
  3. 10 Evidence-Based Health Benefits of Intermittent Fasting
  4. Fasting – A History Part I, The Fasting Method
  5. Fuel metabolism in starvation, Annual Review of Nutrition
  6. 2016 Nobel Prize in Physiology or Medicine for discoveries of mechanisms for autophagy
  7. Video: Joe Rogan - Dr. Rhonda Patrick on the Carnivore Diet, Dr. Rhonda Patrick
  8. Video: Amazing New Study Reveals Miracle Benefits Of Fasting, Dr. Sten Ekberg
  9. Video: How to do Intermittent Fasting: Complete Guide, Thomas DeLauer
  10. Video: 5 Reasons You should ALWAYS Workout During a Fast [Burn more fat], Thomas DeLauer

Special thanks to my wife Femke Stevens for supporting my journey and Mark Voortman for proofreading!

Docker on macOS without noisy fans

Monday, 26 October 2020



  • Running Docker on macOS results in noisy CPU fans and low performance (build times)
  • Apple Silicon probably not the answer
  • The solution is using a remote Linux Docker host (with setup instructions)

Noisy CPU fans

As a developer, I’ve been a big fan of Apple hardware and software ever since I got my first MacBook back in 2008. macOS is a UNIX operating system running on X86 hardware (changing to ARM soon, but more on that later) that makes developing for Linux production servers a smooth experience, utilizing the same UNIX tooling. While my development stack has changed over the years and nowadays includes Docker and Kubernetes, macOS still continues to serve me well. It provides these new CLI tools and templates on a stable OS with little to no issues, a nice UI with great HiDPI scaling (rocking an LG 5K monitor over here), and of course I’ve become so used to the interface that switching back to Windows or Linux Desktop would mess up my optimized workflows and habits.

One thing, however, that could use improvement is Docker Desktop (for Mac): Docker container performance could be higher, docker build times faster, and CPU usage lower when running “idle”. The reason it is slow and performance hungry is succinctly explained in this Stack Overflow answer:

Docker needs a plain Linux kernel to run. Unfortunately, Mac OS and Windows cannot provide this. Therefore, there is a client on Mac OS to run Docker. In addition to this, there is an abstraction layer between Mac OS kernel and applications (Docker containers) and the filesystems are not the same.

In other words, there is a lot of extra (hyper)virtualization and filesystem overhead going on. Of course, a relative performance hit (compared to running on Linux directly) is something I could live with, but my main annoyance presents itself loudly when I’m building (and deploying) a lot of Docker images: it spins up my MacBook Pro’s CPU fans to audible levels, in stark contrast to the absolute silence in which my laptop runs almost everything else (I guess I’m spoiled).

While you could argue that you should not build and push Docker images to staging or production from your local development machine and should just incorporate that into your CI/CD pipeline, I prefer having this manual control, which is also nicely supported by the build and deploy commands. This point probably warrants a dedicated blog post, but if you are like me and also run a lot of docker build commands locally, then you will be interested in the solution I found to my main problem: the lack of macOS Docker performance (long build times) and noisy CPU fans.

Apple Silicon is not the solution

As an Apple enthusiast, I’m naturally excited about the upcoming transition from Intel CPUs to Apple Silicon, which is planned to make its debut in the MacBook Pro and iMac lineup later this year (rumors point to a November event). So at first I thought that the answer to the lackluster Docker performance on the Mac would simply be to wait for the new ARM hardware release and buy a new MacBook Pro. However, this might not hold true, as Apple Silicon will be an ARM SoC, which is a different architecture than X86_64. And X86_64 is still my deployment target for the foreseeable future. In other words, I still need to build X86_64 images, and this will require emulation and virtualization that some expect to carry a 2x to 5x performance hit.

It is also worth noting that Craig Federighi, Apple’s senior vice president of Software Engineering, said the following during an interview after the initial WWDC Apple Silicon announcement:

“Virtualization on the new Macs won’t support X86 at all” (source)

Craig even explicitly called out Docker containers being built for ARM and run on ARM instances in AWS, but what about building your X86 images? I know you can now build for multi-arch, including ARM, with Docker Desktop (post), but what will the performance hit be on Apple Silicon?

All in all, upcoming Apple Silicon doesn’t seem like an immediate win for my specific problem with Docker performance. Also, I like to skip 1st gen hardware and wait on the sidelines a bit longer before I migrate my production workflows to new hardware running on ARM.

Luckily I did find a solution that serves me right away and might also help with the transition to Apple Silicon in the future. Docker remote to the rescue!

Remote Docker host Solution

Instead of beefing up my local development machine now or with upcoming Apple ARM hardware, I looked into off-loading all the Docker activity onto a remote machine, and as it turns out, it is trivially easy to set up access to a remote Docker host. I kind of knew about this from back when I was using Docker Swarm and connecting to different machines, but I somehow never thought of it as a setup for my “local” Docker environment.

Needless to say, one does not set up remote access without a remote machine, so I’m assuming you have a remote Linux machine running somewhere. I personally use and recommend dedicated or cloud servers from Hetzner (that is, if you are located in Europe). I’m using the AX51-NVMe Dedicated Root Server (Ryzen 7 3700X CPU, 64GB RAM, 1TB NVMe SSD), which seems comically cheap at only 59 euros a month. Anyway, let’s get rolling.

1. Install Docker Engine on remote host

First, let’s install Docker on your remote Linux server (I’m using the Ubuntu install instructions, but you can find instructions for other distros on that page as well):

 ## Update the apt package index and install packages
 ## to allow apt to use a repository over HTTPS:
 $ sudo apt-get update

 $ sudo apt-get install \
     apt-transport-https \
     ca-certificates \
     curl \
     gnupg-agent \
     software-properties-common

 ## Add Docker's official GPG key:
 $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

 ## Use the following command to set up the stable repository
 $ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

 ## Install docker engine
 $ sudo apt-get update
 $ sudo apt-get install docker-ce docker-ce-cli containerd.io

2. Create Docker Context

Next up, we need to access the remote Docker host and make it our default “local” engine. For this we leverage Docker Contexts in the following way. First we create a context that holds the connection path (e.g. ssh://user@remotehost, a placeholder for your own server address) to the remote Docker host:

 $ docker context create remote --docker "host=ssh://user@remotehost"
 Successfully created context "remote"

 $ docker context ls
 NAME       TYPE    DESCRIPTION              DOCKER ENDPOINT               ORCHESTRATOR
 default *  moby    Current DOCKER_HOST...   unix:///var/run/docker.sock   swarm
 remote     moby                             ssh://user@remotehost

And the final step is to make remote your default context:

 $ docker context use remote
 Current context is now "remote"
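You can sanity-check the connection with a quick command against the remote engine. The user@remotehost address below is a placeholder for your own server; the DOCKER_HOST environment variable (supported since Docker 18.09) is an alternative way to target a remote host for a single shell session, without creating or switching contexts:

```shell
 ## Verify that the remote engine is reachable over SSH
 $ docker --context remote version

 ## Alternative: point just this shell at the remote host
 $ export DOCKER_HOST=ssh://user@remotehost
 $ docker info
```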

3. Using your remote Docker host

With the previous 2 steps you are ready to start enjoying a silent local development machine when you are building docker images with an increased speed in build time! Any time you now run docker build, all the relevant (and changed) files that are required for the build are copied from your local computer to the remote Docker host in the background, and the docker build starts the build of the image on the remote host. The resulting image is also located on the remote host, which you can list with docker images, as all docker commands use the default remote context.
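Since the build context is now uploaded over SSH on every build, it pays to keep it small. A .dockerignore file next to your Dockerfile excludes files that the build doesn’t need; the entries below are just an example for a typical Node.js project:

```
 # .dockerignore (example)
 .git
 node_modules
 dist
 *.log
```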


To show you an example of the Docker build speed improvements, let’s have a look at the following time measurements for a clean build of the web service component of bedrock, a React SPA built with Webpack:

  | Context | Yarn install time | Webpack build time | Total time | Relative time |
  | Local   | 75.82s            | 82.05s             | 157.87s    | 296% (~3x)    |
  | Remote  | 18.19s            | 35.11s             |  53.30s    | 100% (1x)     |

Of course, a 3x improvement doesn’t come as a surprise when you are basically comparing a MacBook Pro with 2 (virtual) CPUs allocated to Docker against an 8 core dedicated Linux machine (with hyperthreading and higher internet bandwidth), even though not everything runs multi-threaded. But it is a big improvement that you can get for yourself quickly and easily, and the benefits add up over time. Not to mention the complete silence in which you gain the faster build times.

As a bonus, you can now stop running Docker Desktop entirely, and no longer see a Docker process “idling” in the background that would otherwise still be using around 20% CPU all the time (doing nothing).

Being able to build X86_64 images on a remote machine incidentally also makes it more viable to migrate my development environment to Apple Silicon in the future, as I won’t depend on Apple ARM hardware to build and run my X86_64 images.

Finally, of course you can also docker run containers on your remote host. For example running MongoDB:

 $ docker run --name mongo -d -p 27017:27017 -v /root/data:/data/db mongo:4.2.8
 # Note: /root/data is located on your remote machine

In order to access and use the running MongoDB container locally, make sure to SSH tunnel to your remote machine and forward the relevant ports. For this you can update your SSH config file (~/.ssh/config):

Host coentunnel
  # Replace with your server's hostname or IP
  HostName your-remote-host
  User coen
  ForwardAgent yes
  ServerAliveInterval 60
  ServerAliveCountMax 10
  # Forward the MongoDB port
  LocalForward 27017 localhost:27017
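With this entry in place, opening the tunnel is a single command (assuming the coentunnel alias from the config above, and a MongoDB client installed locally):

```shell
 ## Open the tunnel in the background (-N: no remote command, -f: fork)
 $ ssh -f -N coentunnel

 ## The remote MongoDB container is now reachable via localhost
 $ mongo --host localhost --port 27017
```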

A big word of caution here: make sure you firewall your ports properly on your remote server, because Docker is known to bypass ufw firewall rules!
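One simple mitigation, next to proper firewall rules, is to publish container ports on the loopback interface only, so they are never reachable from the public interface in the first place and remain accessible only through your SSH tunnel:

```shell
 ## Binding to 127.0.0.1 keeps the port off the public interface,
 ## regardless of the iptables rules Docker adds
 $ docker run --name mongo -d -p 127.0.0.1:27017:27017 -v /root/data:/data/db mongo:4.2.8
```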


Using a remote Docker host works really well for me, but there are a couple of downsides to this approach:

The first one would be “added complexity & dependency”: you have now added an extra machine to your development environment that you need to keep up-to-date and secure, especially if you also use it to docker run containers for local development.

Secondly, if you use docker-compose for local development (often used to inject your live code changes), then you cannot map your local drive into the docker containers, as the mounted volumes will be pointing to folders on your remote machine. You can find more information about how to deploy on remote Docker hosts with docker-compose here.

Last but not least, remote servers are not free, so there is an additional cost involved. However, as pointed out earlier, Hetzner is a very affordable server provider, or you can start with any other small cloud instance (e.g. AWS, Google Cloud, etc.). I guess a lot of developers actually already run a Linux box somewhere (often for testing) that they can use as their remote Docker host too.

Wrap up

As an Apple enthusiast, I don’t see myself switching to a local Linux development computer anytime soon, but I was kind of annoyed with the lack of macOS Docker performance and noisy CPU fans when building images.

Having a hard time believing that the upcoming switch to Apple Silicon will alleviate this issue (still hoping for the best, but preparing for the worst), I was happy to find a solution by introducing a remote Docker host, which was trivially easy to set up and use with Docker Contexts. Sometimes a small change can have a big impact.

IMHO the benefits I described in this article more than make up for the downsides. Hopefully this may benefit you too. If you have any suggestions for improvements, then please let me know!

Jekyll and Algolia search integration

Sunday, 16 August 2020


Recently I decided to add search functionality to my BeatleTech site (indeed, the one you are visiting right now). Not because I needed my readers to filter through an overwhelming number of articles (which I have not written), but simply because I thought it would be a cool feature that would bring some nice interactivity to the site and spark my ambition to write more blog posts going forward.

In this article I will explain why I picked Algolia Search and how it was integrated with this Jekyll generated static site, including some interesting improvements.

While tinkering a bit with the design and layout of this site during a rainy day at a Dutch campsite, I was looking for an easy Jekyll plugin to bring search to my website. Although I have a lot of experience working directly with Elasticsearch, I didn’t want to go down the route of building and deploying everything from scratch. While certainly a nice exercise, it would definitely take too much time and maintenance down the line, which wouldn’t be worth it (yet) given the traffic statistics. So, like I said, I was looking for something easy and quick to get up and running within a day.

After some Google search queries, I quickly stumbled upon people recommending Algolia on the Jekyll Talk discussion board.

Digging into Algolia, I found the following compelling reasons to integrate with Algolia Search:

  • Known brand. It turns out I was already quite familiar (and satisfied) with Algolia Search as a consumer of Hacker News Search which is powered by Algolia.
  • Free tier. They recently (July 1st) introduced more customer-friendly pricing with a starting free tier, as long as you show the “Search by Algolia” logo next to your search results. Pricing is also reasonable for when I need to scale up.
  • Great documentation. Their Getting started is clear, complete and up-to-date.
  • Open source. Jekyll Algolia Plugin for indexing Jekyll posts
  • UI Template. Template with UI component using instantsearch.js
  • Example project. Github project jekyll-algolia-example

How to integrate Algolia and Jekyll?

To integrate with Jekyll, we first need to install and run the jekyll-algolia plugin to push the content of our Jekyll website to our Algolia index. Secondly, we need to update our HTML with templating and instantsearch.js.

1. Pushing content to your Algolia index

This is a simple three step process, as laid out in the README of the jekyll-algolia-plugin repository. First, add the jekyll-algolia gem to your Gemfile, then run bundle install to fetch all the dependencies:

  # Gemfile

  group :jekyll_plugins do
    gem 'jekyll-algolia', '~> 1.0'
  end

Next, add your application_id, index_name and search_only_api_key under an algolia key in the Jekyll _config.yml file:

  # _config.yml

  algolia:
    application_id: 'your_application_id'
    index_name: 'your_indexname'
    search_only_api_key: '2b61f303d605176b556cb377d5ef3acd'

Finally, get your private Algolia admin key (which you can find in your Algolia dashboard) and run the following to execute the indexing:

  ALGOLIA_API_KEY='your_admin_api_key' bundle exec jekyll algolia

2. Adding instantsearch.js to the front-end

For the front-end part I followed the excellent Algolia community tutorial. Instead of repeating all the documented steps here, I’ll only highlight the relative changes I made.

The integration consists of two parts:

  • A search-hits-wrapper div element where we load the search results. These results are located front and center under the navigation bar (pushing the rest of the content down).
  • The instantsearch.js dependency, template configuration and styling. All of which is located in the _includes/algolia.html file, which can be viewed in full in the source code of this site.

I made the following changes compared to the community tutorial:

  • Hide search results by default (style="display:none") and don’t fire off the initial empty query. By default an empty query is fired on page load, returning all articles; to mitigate this I added a searchFunction to the instantsearch options:
  const search = instantsearch({
    appId: "K4MUG7LHCA",
    apiKey: "2b61f303d605176b556cb377d5ef3acd", // public read-only key
    indexName: "prod_beatletech",
    searchFunction: function (helper) {
      var searchResults = document.getElementById("search-hits-wrapper");
      // Skip the initial empty query while the results are still hidden
      if (helper.state.query === "" && searchResults.style.display === "none") {
        return;
      }
      // Show the results wrapper only when there is an actual query
      searchResults.style.display = helper.state.query ? "block" : "none";
      helper.search();
    }
  });
  • Don’t fire off a query on every keystroke. While the default of triggering a search query on every keystroke is great for responsiveness, it will also quickly burn through your free 10k search requests. To trade off some query responsiveness for fewer API requests, I added the following queryHook with a 500ms delay to the searchBox widget options:
  let timerId;
  search.addWidget(
    instantsearch.widgets.searchBox({
      container: "#search-searchbar",
      placeholder: "Search into posts...",
      poweredBy: false,
      autofocus: false,
      showLoadingIndicator: true,
      queryHook(query, refine) {
        clearTimeout(timerId); // debounce: reset the 500ms timer on each keystroke
        timerId = setTimeout(() => refine(query), 500);
      }
    })
  );
  • Show the “Search by Algolia” badge. To make use of the free plan, Algolia asks in exchange that you display a “Search by Algolia” logo next to your search results. You can use the searchBox Boolean option poweredBy, or, if you want more flexibility (as I did), you can find different versions of their logo here and add one to the search-hits-wrapper div.
  • Turn off autofocus. Set the searchBox option autofocus to false if you don’t want the search input to grab focus on page load. While I liked the autofocus at first, because the user can immediately type their search query, it turns out that on mobile devices it automatically zooms in on the search input field. So I recommend turning it off.
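The queryHook delay described above is essentially a debounce. As a standalone sketch in plain JavaScript (the debounce helper below is my own illustration, not part of instantsearch.js):

```javascript
// Generic debounce: collapse a burst of calls into a single call that
// fires only after `delayMs` of silence since the last call.
function debounce(fn, delayMs) {
  let timerId;
  return (...args) => {
    clearTimeout(timerId); // each new call cancels the pending one
    timerId = setTimeout(() => fn(...args), delayMs);
  };
}
```

With such a helper, quickly typing a six-letter query results in a single refine call instead of six, which is exactly the behavior the 500ms queryHook buys you against the 10k request budget.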

Experience so far

I really liked the whole integration process; it was smooth and not much work. I mean, I even went as far as to write all about it here. 🙂

One additional benefit of Algolia, which I haven’t listed yet, is gaining statistics on your site’s search queries through weekly email updates and an interactive dashboard, helping you figure out what your readers and followers are looking for.

I am using the slightly older v2 of instantsearch.js, so at some point I will want to update to the latest version, which will decrease the JavaScript library size. Running PageSpeed Insights I still get a very comfortable 96/100 score, so there is no immediate need, but less is more when it comes to JS dependencies.

If my search query volume increases above the free 10k a month, then I’m happy to pay for this service. I do have one feature request for the Algolia team for the paid service: add a monthly payment limit with alerting, to make sure you won’t get a surprise overage bill.

Anyway, so far a big thumbs up for Algolia. 👍

HN Discussion

Chief Architect at

Monday, 02 September 2019

For the past year I have been working as Tech Lead at Air France - KLM, in the ODS (Operations Decision Support) department. My team was responsible for the on-premise Amsterdam Data Lake, developing ETL pipelines with Spark streaming jobs in Scala, including data modeling of the source system (events) and coordinating with the Data Science team.

The on-premise Hadoop cluster was based on Hortonworks HDP, including Spark, HBase, Kafka and Hive. There was a lot of work involved in configuring and managing the on-premise cluster, and we would spend more time than we would like on infrastructure related issues. Based on my previous cloud experiences I started advocating for a push to Azure Cloud (as KLM already had Azure Active Directory and Office 365) for the Data Lake, and started working on the required Architecture design and proposal.

I really enjoyed learning to navigate this enormous organization (with two distinct cultures) to get in contact with the right people to make Cloud happen, all the way up to CTO and VP levels. Even so, I came across another opportunity outside of KLM to work in an environment that was more startup-like, more agile, and faster-changing, with more direct personal impact on the course of the company.

This new opportunity, freelancing as Chief Architect at to work on data-driven solutions powered by Blockchain and AI, is actually where I started today, and I’m excited to be reunited with some of my former colleagues from! I’m looking forward to working with an elite team of highly experienced Developers, Designers and Biz Devs. It’s time to go full throttle!

The post-Persgroep era

Monday, 06 August 2018

Last week I finished my job as Lead Data Engineer at the Persgroep in Amsterdam, where I worked for the past year. I had a great time crafting ETL pipelines, writing Scala and Python for Spark (AWS EMR), scheduling tasks (DAGs) with Airflow, and scaling with Redshift (minus some boring, but important, GDPR work). But most of all I liked the (Kanban) team, which grew during my time from just 3 members to 15, and I enjoyed coaching and helping out the new recruits, as well as providing technical support and architectural advice to other teams.

So why leave my esteemed colleagues, when everything is fine and dandy? Can Christian van Thillo really do without me? (Better known by insiders as Christiaan van de Thillo). Well, I have this tendency to shake things up once in a while, and I felt the need to explore new startup opportunities. So here we are: I cleared my schedule for the next half year, and will be diving into Strong-AI, Genomics, Crypto and Healthcare to find interesting (and doable) business angles. First I’d like to explore the latest research and follow a couple of Coursera courses, while simultaneously hacking on some neural nets for crypto value predictions. In other words, playing around until I hit something concrete and tangible. I have a couple of friends with spare time available to join in on the fun, and I’m also keeping an ear to the ground for any other opportunities in these domains.

So what was the first thing I did last week with my free time? I updated my BeatleTech website, the site you are actually reading right now! One of my TODOs was to serve everything over https, now that Chrome by default marks your website as not secure when it is served over http (news). This was relatively easy to fix thanks to Let’s Encrypt and Certbot. One gotcha: Let’s Encrypt will verify your website over IPv6 if you have an AAAA DNS record, so make sure it points to the correct IPv6 address (mine didn’t :/ so I dropped the incorrect AAAA record).

Secondly I was inspired by a Hacker News discussion on improving your portfolio website, to include a section about my home office setup. I have spent many hours perfecting my work setup, so it was only logical to also write about it. You can find it here and in the top menu.

Finally after cleaning up my HTML, upgrading Jekyll, minimizing assets and moving them to S3 (source), I’m concluding my stint on BeatleTech improvements with this blog post. Next up, research and play time!

Wish me luck.

Othot raises $1.7 Million

Monday, 02 May 2016

Great news for Othot as they just secured $1.7 Million in their first round of outside financing. For more details and a shoutout for Dr. Mark Voortman check out MercuryNews.

tags: othot, funding

O Data Scientist, Where Art Thou?

Thursday, 10 September 2015

Note: This is a repost of my original blog post at OThot, where I also work as Senior Data Scientist.

By now you have probably heard about the “Big Data” hype in one form [1] or another [2], or read about how other companies are achieving success harnessing their data [3]. With all the attention for Big Data and the accompanied field of Data Science to make sense of the data, it would be no surprise if you want your company to benefit as well from one or more Data Scientists generating actionable insights from your ever-growing sea of data.

When you are at this stage with your company and start to search for some great Data Scientists, you will quickly find out that these people are in short supply. Now why is that? And why is it potentially dangerous to simply promote or retrain one of your company’s developers into a Data Scientist role when your search keeps turning up empty?

The reason why Data Scientists are scarce is twofold. First off, the aforementioned Big Data hype, with the amount of data outgrowing the number of data analysts [4], is creating a high demand for Data Scientists. With many Data Science positions opening up, there is also the troublesome side effect of developers starting to market themselves as Data Scientists while having little to none of the required expertise.

Secondly, and somewhat problematically, there is the required skill set for a Data Scientist. Why problematic? Well, this is expressed clearly in the by now classic illustration of the Data Science skill set, the Data Science Venn Diagram [5]:

Data Science Venn Diagram

The three main skills (indicated by the primary colors) are hacking skills, math and stats knowledge, and substantive expertise. What this implies is that you need to be a programmer and statistician, while also having a lot of experience in these fields and in the relevant problem domain and business context. Given that each of these skills on their own already poses a challenge when you want to find a great candidate, then searching for all of them combined in one person can send you on a wild goose chase.

Earlier I mentioned the danger of turning developers (with hacking skills) into Data Scientists, which if you look at the diagram, might put you in the Danger Zone! Reason being that if you have hackers without substantive Math and Statistics knowledge you “[..] give people the ability to create what appears to be a legitimate analysis without any understanding of how they got there or what they have created” [5]. This could give rise to flat-out wrong business decisions based on wrong interpretations of the data.

This is not meant to say that you cannot turn a developer into a Data Scientist, but rather that you have to be aware that you also need to teach them the required math and statistics background. Or, the other way around, you need to make sure that you are teaching your statisticians better development skills. Nowadays there are many resources on the internet for learning Data Science [6]. These will get you started, but it will take a lot of time and practice to gain enough experience for the desired level of proficiency.

We believe there might be a better alternative to growing your own in-house Data Science team and that is to have OThot be of service. OThot can either take the Data Science challenge off your hands completely or complement your Data Scientists with substantial expertise in all required skills. So don’t hesitate to reach out if you want to know more about what OThot has to offer!


[1] Big Data, Big Hype?

[2] Big Data Is A Big Problem That’s Getting Bigger

[3] Who is ready for some big data success stories

[4] Growth of Data vs Growth of Data Analysts

[5] The Data Science Venn Diagram

[6] How to actually learn data science

New Job at ThinkTopic

Monday, 01 June 2015

Two months ago a good friend of mine (Jeff Rose), a former engineer at DeepMind (acquired by Google), presented me with an opportunity to come work at ThinkTopic: building intelligent tools for a big publisher, involving image search, collaborative filtering, an intelligent catalog, and much more, by applying Deep Learning techniques with Neural Networks. Additionally, most development work would be done in Clojure (a Lisp dialect on the JVM), which for me really is the icing on the cake, being a longtime Clojure and functional programming fan.

So after almost 4 years of working as Chief Architect and Data Scientist at Bottlenose, I decided to take on this new challenge to start working for ThinkTopic, of which today is my first day.

I had a great time at Bottlenose working with a great group of people. It has been amazing to see the company grow to where it is right now, and the potential of where it could go. Getting the KPMG investment has been a great milestone and together with all the work over the years the idea of no longer being part of Bottlenose feels a bit weird. Bottlenose has given me a lot in terms of experience, joy and friendships.

Having said that, I’m keeping my desk at the Hackers and Founders building in Amsterdam, so I’m not going cold turkey on my former colleagues (only switching rooms).

I’m excited to go on this new ThinkTopic adventure, which for today means diving back into Convolutional Neural Networks. May the fun and challenge begin.