Hi! 👋 I’m Henrique, a 20-year-old Computer Engineering student and technology enthusiast in Lisbon. I’m also working on decentralizing the web with IPFS. On this homepage, you can find some of my posts. For all posts, check the all page.

Just an FYI: if you’re interested, I recently added a knowledge base section to the website, where I automatically deploy the notes I write with Notable. I actually have some ideas of things to add to these notes when I have time. It’s quite simple: just set up a GitHub hook, pull the notes repo, convert the notes to Hugo’s format and et voilà!

Jan-Lukas Else 24 Feb 2020 08:49

I left Twitter one month ago and didn’t miss it a single time. Reading stories about “Brand Blockers” (Medium paywall) just gives me the feeling that this was the right decision. Regarding the “Brand Blockers”: Instead of trying to block hundreds of thousands brands, maybe just block Twitter? Twitter …

I’ve actually thought about doing this multiple times. Not with Twitter, actually, but with Facebook for now. The only thing stopping me is a few university groups that only exist on Facebook and are sometimes useful. However, I already uninstalled it from my phone. Sometimes I want to show someone something and then remember I don’t have FB installed, but it doesn’t matter anyway. Usually it’s not anything that important.

Analyzing my shows and movie habits

Today, I decided to re-add a watches page, but this time it isn’t built from hundreds of posts; instead, it uses the data I get directly from Trakt’s API. I built a small tool called trakt-collector that collects your history and saves it in JSON format.

The Trakt API gives you a lot of information about every episode and every movie: the title, the rating, the description, the channel where it aired, when it first aired, the countries where it aired, and so on. I don’t actually need all that information, but it never hurts to store it.
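For a sense of what collecting that history looks like, here is a hedged sketch assuming Trakt API v2 (which requires `trakt-api-version` and `trakt-api-key` headers); the client ID and OAuth token are placeholders, and this is not the actual trakt-collector code.

```javascript
// Placeholder credential — a real client ID comes from a Trakt app registration.
const CLIENT_ID = 'your-trakt-client-id';

// Headers Trakt API v2 expects on every request.
function traktHeaders(clientId) {
  return {
    'Content-Type': 'application/json',
    'trakt-api-version': '2',
    'trakt-api-key': clientId,
  };
}

// Fetch one page of the authenticated user's watch history
// (the /sync/history endpoint needs an OAuth bearer token).
async function fetchHistory(token, page = 1, limit = 100) {
  const res = await fetch(
    `https://api.trakt.tv/sync/history?page=${page}&limit=${limit}`,
    { headers: { ...traktHeaders(CLIENT_ID), Authorization: `Bearer ${token}` } }
  );
  if (!res.ok) throw new Error(`Trakt request failed: ${res.status}`);
  return res.json();
}
```

Paging through until an empty page comes back and writing each batch to disk gives you the JSON history dump.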

I’m wondering if there’s any interesting software for building git-based wikis (or at least wikis with flat-file storage) that supports Markdown and ACLs for private and public posts, and is extensible enough to let me customize it the way I want. I have a bunch of notes I’d like to make public just because they might be helpful to someone else. I thought about integrating them into this website, but I’d love to have a way to keep private posts, as well as to easily link between pages. The most interesting option I found was DokuWiki, which has tons of plugins. However, it’s PHP and I’m not sure how well maintained it is nowadays. It looks like wiki software is dying… besides the big beasts.

After wondering whether or not I should keep the check-ins and watches on my website, I decided not to. I know… I change my mind quite a lot. But here are some reasons why:

  • I don’t believe I need one post for each check-in and each watch, since I don’t actually add any content besides the fact that I’ve been somewhere or watched something.
  • Check-ins? Privacy! Which is important. I like to collect data about myself, but not to make everything public.
  • I had more than 6000 pages on this website (?).

However, I plan on recreating a watches page similar to my bookshelf where I “showcase” the movies and the series I’ve watched. I’m creating a bunch of scripts to collect my Trakt history. I plan on doing something similar for the check-ins, but as a map.

Along the same lines, I’m planning on removing the read logs but keeping the bookshelf. I’m just not removing them now because the bookshelf is working and I prefer to deal with that afterwards.

Jan-Lukas Else 17 Feb 2020 18:38

Opinions can change over time. And since I often post opinions on my blog, I’ve added a feature to my blog theme that displays a warning message above posts that are over one year old (example). I have been blogging for some time now. There were times when blogging was my escape to deal with …

Completely agree with all you’ve said. “But still, I don’t want to delete most of what I published back then just like that.” That is something I’ve come to learn. My website went through many phases where I removed a lot of the posts I had there. But now? Now I’ve restored everything.

Just wondering if it’s worth keeping my watch logs and check-ins on my website. I know it took some work to set that up. But is it worth it? Is it worth it for you, readers?

I like having that data accessible, and I still can by just using the APIs and backing up the data myself. It can be useful and it can have many uses. But is it worth having on this website?

For maintenance purposes, it’s a bit harder, but not impossible. As for you, readers: I’d love your opinion on this. I’ve been thinking about removing Swarm check-ins because of privacy issues.

About the watches: maybe it’d be nice to have a page listing the series and movies I’ve watched but not as logs.

Henrique Dias 16 Feb 2020 11:06

So I just made a few changes to my website and I hope it didn’t break anything like feeds and such. Here’s a small changelog of the changes: Stopped using Hugo categories for post types (replies, notes, articles, etc) and started using sections, i.e., I now put a note under the /note path. So this …

After publishing the post to which I’m replying, @jlelse contacted me and I noticed that the import directive in Caddy can be used to import files:

import allows you to use configuration from another file or a reusable snippet. It gets replaced with the contents of that file or snippet.

So I decided to build the redirects file using Hugo itself. First of all, I needed to import a lot of redirects as aliases because I had them in a separate file; this way it’s much better. After that, I needed to add a new output format to Hugo’s config:

disableAliases: true

outputFormats:
  redir:
    mediaType: text/plain
    baseName: redirects
    isPlainText: true
    notAlternative: true

outputs:
  home:
    - redir

Then, I created a layouts/index.redir.txt file with the following content:

{{- range $p := .Site.Pages -}}
{{ range .Aliases }}
{{  . | printf "%-70s" }}	{{ $p.RelPermalink -}}
{{ end -}}
{{- end -}}

This is mostly what you can see in this commit of the official Hugo docs for their Netlify redirects. With this, my Hugo website does not build any HTML aliases (disableAliases), but creates a file at the root called redirects.txt, which you can see here. I could block access to it through Caddy, but there’s no reason I should. Is there?

In Caddyland, I just added this snippet:

hacdias.com {
  root /the/public/path/

  redir 301 {
    import /the/public/path/redirects.txt
  }
}

And voilà! It works! But now you ask: what if we change the redirects file and we don’t want any downtime? Just configure your Micropub endpoint, or whatever software you’re using on the backend, to do a config hot reload by executing the following command:

pkill -USR1 caddy

There it is! 301 redirects working flawlessly!

So I just made a few changes to my website and I hope it didn’t break anything like feeds and such. Here’s a small changelog of the changes:

  • Stopped using Hugo categories for post types (replies, notes, articles, etc) and started using sections, i.e., I now put a note under the /note path. This also changed the URLs, hopefully for the better, and now it’s easier to restrict access or remove something if I want.
  • I added ~2000 redirect rules. Does anyone know if Caddy allows me to import redirect rules from another file? My Caddyfile is getting huge.
  • Started using partialCached in some places which improved the build time a tiny bit.
  • Moved the Articles page from /blog to /articles, which I had wanted to do for a while.
  • Added a contact page.
  • Updated the more page with more links!
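As an illustration of the partialCached change above: partialCached works like partial but caches the rendered result, with any extra arguments acting as the cache key. The partial names here are hypothetical, not the site’s actual templates.

```
{{/* Render and cache once per build — good for partials that don't
     depend on the current page: */}}
{{ partialCached "sidebar.html" . }}

{{/* Cache one variant per section by passing extra arguments,
     which become the cache key: */}}
{{ partialCached "related.html" . .Section }}
```

Since the partial is rendered once instead of once per page, this can shave a noticeable amount off build times for large sites.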

And… that’s it, I think. I’d also love to use this website as a “knowledge base”, so I’ll probably create a section for that later. I always want to somehow organize the knowledge I acquire, but I just have tons of files from university and other stuff lying around without any organization. I really loved this braindump from Jethro.

Most likely you didn’t notice, but yesterday I created a new page on this website called pins. On that page, you can find links to some pinned resources on the IPFS network that I like to keep pinned for archiving purposes. There are so many old gems on the Internet!

OwnYourTrakt

For quite some time, I have been getting more and more into the IndieWeb world and trying to own my own data. I have started publishing more to my website and using it as a place to store most of my public data, i.e., data I already published on other social media and platforms.

It now holds my web interactions, such as replies, likes and reposts, as well as my reading log. Since the beginning, I also wanted to use this website as a place to store my watch logs. By “watch” I mean watching movies and TV series.

Why are houses in Portugal so cold in general? Ugh… it seems we don’t know how to build houses. Everywhere we go in Europe, we can see warm houses. Ours are cold in the winter and some can even be cold enough in summer to be uncomfortable.

Jan-Lukas Else 10 Feb 2020 18:36

I promised and people already asked, so here is the first part of the documentation about how I enabled ActivityPub support on my Hugo-based blog: The first step to enable ActivityPub support, was to get Hugo to generate ActivityStreams representations for posts and the ActivityPub actor. I did this …

I was looking at your single template and I was wondering: shouldn’t content_html be just content? According to the spec, there’s no content_html property, and content can contain HTML by default.

Jan-Lukas Else 07 Feb 2020 10:48

I think I finally got ActivityPub support for this blog working. On Mastodon, you can search for @[email protected] and @[email protected] to follow the English and German blog. You should also be able to search for the URL of any post and reply to it. But remember that it’s only possible for the reply to …

I’d love to know more about your implementation. Is it this repo (https://codeberg.org/jlelse/jsonpub)?

I just set up a media endpoint based on BunnyCDN, inspired by @jlelse’s post. So far, it’s working really well.

For now, I’m not actually using it to serve many of the images on the website, even though I could. However, I’m using it to store the webmention authors’ photos. They were being served directly by webmention.io, but I think it’s better to serve them myself.

The media endpoint works well: it receives an object and stores it on BunnyCDN. However, I want to add some customization options, such as resizing and compression for images via query parameters, as well as some defaults so I don’t always need to specify them.
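The storage step of such an endpoint can be sketched like this, assuming BunnyCDN’s storage API (a plain HTTP PUT to storage.bunnycdn.com with an AccessKey header); the zone name, paths and environment variable are placeholders.

```javascript
// Placeholder storage zone and credential.
const STORAGE_ZONE = 'my-zone';
const ACCESS_KEY = process.env.BUNNY_ACCESS_KEY;

// Build the storage URL for a file, normalizing leading slashes.
function storageUrl(zone, path) {
  return `https://storage.bunnycdn.com/${zone}/${path.replace(/^\/+/, '')}`;
}

// Upload a buffer to the storage zone; the CDN then serves it
// from the zone's public pull URL.
async function upload(path, buffer, contentType) {
  const res = await fetch(storageUrl(STORAGE_ZONE, path), {
    method: 'PUT',
    headers: { AccessKey: ACCESS_KEY, 'Content-Type': contentType },
    body: buffer,
  });
  if (!res.ok) throw new Error(`upload failed: ${res.status}`);
  return res;
}
```

Resizing and compression would then slot in before the `upload` call, driven by the query parameters.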

Just solved the deadlock! I’m currently using the p-limit package to limit the number of concurrent actions on the website source. Basically, inside a function wrapped by that limiter, I was awaiting another function that also required the limiter to run! Of course, that creates a never-ending deadlock. Fixed now!
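The pattern can be illustrated with a minimal stand-in limiter (not the actual p-limit implementation, just the same idea with concurrency 1):

```javascript
// Tiny concurrency limiter: runs at most `concurrency` tasks at once,
// queueing the rest — a stand-in for the p-limit package.
function createLimit(concurrency) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= concurrency || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    fn().then(resolve, reject).finally(() => {
      active--;
      next();
    });
  };
  return (fn) =>
    new Promise((resolve, reject) => {
      queue.push({ fn, resolve, reject });
      next();
    });
}

const limit = createLimit(1);

// Deadlock: the outer task holds the only slot while awaiting an inner
// task that also needs the slot — the inner task can never start.
// const broken = () => limit(async () => await limit(async () => 'never'));

// Fix: call the inner function directly. The outer task already holds
// the limiter, so the work stays serialized.
const inner = async () => 'done';
const fixed = () => limit(async () => await inner());

fixed().then(console.log); // prints "done"
```

The broken variant never resolves, which is exactly the never-ending deadlock described above.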

On second thought: I don’t actually like the structure of the internal code I use to process all of this. Maybe I should rearrange some things to make them… better.