meta

Analyzing my show and movie habits

Today, I decided to re-add a watches page, but this time it isn’t built from hundreds of posts; instead, it’s built from the data I get directly from Trakt’s API. I built a small tool called trakt-collector that collects your history and saves it in JSON format.

The Trakt API gives you a lot of information about every episode and every movie: the title, the rating, the description, the channel where it aired, when it first aired, the countries where it aired, and so on. I don’t actually need all that information, but it never hurts to store it.
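
For reference, pulling the history is basically one authenticated, paginated call to Trakt’s /sync/history endpoint. Here’s a rough TypeScript sketch of the kind of request trakt-collector makes — it is not its actual code, and the client ID and token are placeholders:

// Rough sketch, not the actual trakt-collector code: fetch the watch
// history from Trakt and dump it to a JSON file.
import { writeFile } from 'node:fs/promises';

const CLIENT_ID = 'your-trakt-client-id'; // placeholder
const ACCESS_TOKEN = 'your-oauth-token';  // placeholder

async function fetchHistory(page: number, limit = 100): Promise<unknown[]> {
  const res = await fetch(`https://api.trakt.tv/sync/history?page=${page}&limit=${limit}`, {
    headers: {
      'Content-Type': 'application/json',
      'trakt-api-version': '2',
      'trakt-api-key': CLIENT_ID,
      Authorization: `Bearer ${ACCESS_TOKEN}`,
    },
  });
  if (!res.ok) throw new Error(`Trakt returned ${res.status}`);
  return (await res.json()) as unknown[];
}

// Collect every page until an empty one comes back, then save it all as JSON.
const history: unknown[] = [];
for (let page = 1; ; page++) {
  const items = await fetchHistory(page);
  if (items.length === 0) break;
  history.push(...items);
}
await writeFile('history.json', JSON.stringify(history, null, 2));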

I’m wondering if there’s any interesting software for building Git-based wikis (or at least wikis with flat-file storage) that supports Markdown and ACLs for private and public posts, and is extensible enough to let me customize it the way I want. I have a bunch of notes I’d like to make public just because they might be helpful to someone else. I thought about integrating it into this website, but I’d love to have a way to keep private posts, as well as to easily link between pages. The most interesting option I found was DokuWiki, which has tons of plugins. However, it’s PHP and I’m not sure how well maintained it is nowadays. It looks like wiki software is dying… apart from the big beasts.

Just wondering if it’s worth keeping my watches log and check-ins on my website. I know it took some work to set up. But is it worth it? Is it worth it for you, readers?

I like having that data accessible, and I still can by just using the APIs and backing up the data myself. It can be useful in many ways. But is it worth having on this website?

For maintenance purposes, it’s a bit harder but not impossible. As for you, readers: I’d love your opinion on this. I’ve been thinking about removing Swarm check-ins because of privacy issues.

About the watches: maybe it’d be nice to have a page listing the series and movies I’ve watched but not as logs.


After publishing the post I’m replying to, @jlelse contacted me and I noticed that Caddy’s import directive can be used to import files:

import allows you to use configuration from another file or a reusable snippet. It gets replaced with the contents of that file or snippet.

So I decided to build the redirects file using Hugo itself. First of all, I needed to import a lot of redirects as aliases, since I had them in a separate file before; this way it’s much better. After that, I needed to add a new output format to Hugo’s config:

disableAliases: true

outputFormats:
  redir:
    mediaType: text/plain
    baseName: redirects
    isPlainText: true
    notAlternative: true

outputs:
  home:
    - redir

Then, I created a layouts/index.redir.txt file with the following content:

{{- range $p := .Site.Pages -}}
{{ range .Aliases }}
{{  . | printf "%-70s" }}	{{ $p.RelPermalink -}}
{{ end -}}
{{- end -}}

This is mostly what you can see in this commit of the official Hugo docs for their Netlify redirects. With this, my Hugo website does not build any HTML aliases (disableAliases), but creates a file at the root called redirects.txt, which you can see here. I could just block access to it through Caddy, but there’s no reason I should do so. Is there?
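
For illustration, each line of the generated file is just the alias padded to a fixed width, followed by the new permalink; Caddy’s import then splices those lines straight into the redir block. These example paths are made up:

/blog/old-post/                        /articles/old-post/
/2018/some-note/                       /notes/some-note/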

In Caddyland, I just added this snippet:

hacdias.com {
  root /the/public/path/

  redir 301 {
    import /the/public/path/redirects.txt
  }
}

And voilà! It works! But now you ask: what if we change the redirects file and we don’t want any downtime? Just configure your Micropub endpoint, or whatever software you’re using on the backend, to do a config hot reload by executing the following command:

pkill -USR1 caddy

There it is! 301 redirects working flawlessly!

So I just made a few changes to my website and I hope it didn’t break anything like feeds and such. Here’s a small changelog:

  • Stopped using Hugo categories for post types (replies, notes, articles, etc.) and started using sections, i.e., I now put a note under the /note path. This also changed the URLs, hopefully for the better, and now it’s easier to restrict access or remove something if I want.
  • Added about 2,000 redirect rules. Does anyone know if Caddy allows me to import redirect rules from another file? My Caddyfile is getting huge.
  • Started using partialCached in some places which improved the build time a tiny bit.
  • Moved the Articles page from /blog to /articles which I already wanted to do for a while.
  • Added a contact page.
  • Updated the more page with more links!

And… that’s it, I think. I’d also love to use this website as a “knowledge base”, so I’ll probably create a section for that later. I always want to somehow organize the knowledge I pick up, but I just have tons of files from university and other stuff lying around without any organization. I really loved this braindump from Jethro.

OwnYourTrakt

For quite some time, I have been getting more and more into the IndieWeb world and trying to own my own data. I have started publishing more to my website and using it as a place to store most of my public data, i.e., data I already published on other social media and platforms.

It now holds my web interactions, such as replies, likes and reposts, as well as my reading log. Since the beginning, I have also wanted to use this website as a place to store my watch logs; by watches, I mean movies and TV series.

I just set up a media endpoint based on BunnyCDN, inspired by @jlelse’s post. So far, it’s working really well.

For now, I’m not actually using it to post many of the images on the website, even though I could. However, I’m using it to store the webmention authors’ photos. They were being served directly by webmention.io, but I think it’s better to serve them myself.

The media endpoint works well: it receives an object and stores it on BunnyCDN. However, I want to add some customization options, such as resizing and compressing images through query parameters, as well as some defaults so I don’t always need to specify them.
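
For the curious: the storage step itself is tiny. The sketch below is only an assumption that it talks to Bunny’s Storage API directly (a plain HTTP PUT with an AccessKey header); the zone name, key and hostnames are placeholders, not my actual setup:

// Sketch of the storage step only, with placeholder zone name and key.
// It assumes Bunny's Storage API (HTTP PUT with an AccessKey header);
// check their docs before copying this.
const STORAGE_ZONE = 'my-zone';            // placeholder
const STORAGE_KEY = 'my-storage-password'; // placeholder

async function store(fileName: string, body: Uint8Array): Promise<string> {
  const res = await fetch(`https://storage.bunnycdn.com/${STORAGE_ZONE}/${fileName}`, {
    method: 'PUT',
    headers: { AccessKey: STORAGE_KEY, 'Content-Type': 'application/octet-stream' },
    body,
  });
  if (!res.ok) throw new Error(`upload failed with ${res.status}`);
  // The file is then served through the zone's pull zone hostname.
  return `https://my-zone.b-cdn.net/${fileName}`; // placeholder hostname
}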

Just solved the deadlock! I’m currently using the p-limit package to limit the number of concurrent actions made on the website source. Basically, inside a function wrapped by that limit, I was waiting for another function that also required the limit in order to complete! Of course, that creates a never-ending deadlock. Fixed now!
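
To make the bug concrete, here’s a minimal sketch of the pattern; the function names are made up and this isn’t my actual code:

import pLimit from 'p-limit';

// Only one action on the website source at a time.
const limit = pLimit(1);

async function rebuildSite(): Promise<void> {
  // ...run Hugo, commit, etc.
}

async function saveEntry(): Promise<void> {
  // BUG: saveEntry already holds the single slot, and rebuildSite() is
  // queued behind it, so this await never resolves.
  await limit(() => rebuildSite());
}

// This call deadlocks: the outer limit() holds the slot while the inner
// limit() waits for it. The fix is to never wait, inside a limited
// function, for another limited function.
void limit(() => saveEntry());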

On second thought: I don’t actually like the structure of the internal code I use to process all of this. Maybe I should rearrange some things to make them… better.

Owning my reading log

As Tom once said, it is now time to own my own reading log. Why? Besides all the reasons mentioned in Tom’s post, I also got bored of Goodreads and ended up not using it as much as I should have.

With university, work and… life… I stopped reading as much as I did before. But it’s now time to get back to some reading. Even if it’s not that much, I need to read something. I must do it.

Just made a few updates to my website:

  • Added a more page, inspired by @jlelse’s.
  • Updated my now page to actually include what I’m doing right now.
  • Updated the highlight theme to swapoff. It’s a really pretty syntax highlighting theme provided by Chroma, the library Hugo uses. The best thing is that, with the invert() CSS filter, it keeps looking good. This way, I have good syntax highlighting on both the light and dark themes.
  • Added the category of the post right beside the publish date.
  • Now I’m only showing notes, articles and replies on the homepage.
  • I now have an all page dedicated to showing every post category.

I just noticed that the website sometimes flickers in dark mode (from the light to the dark theme). I know this is caused by the fact that I’m using JavaScript to pick the theme based on the OS preference, plus the manual override users can apply with the option at the bottom of the page.

One option to remove this problem would be to just follow the user’s OS preference (either dark or light), with no manual override. Unfortunately, that wouldn’t let users pick a different theme if they prefer to read websites in a different light… no pun intended.
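
For context, the logic is roughly the sketch below. The “theme” localStorage key and the data-theme attribute are made up, not what this site actually uses; the flicker comes from this kind of code running after the page has already been painted with the default theme:

// Sketch of the theme-picking logic; key and attribute names are assumptions.
const stored = localStorage.getItem('theme');                  // manual override, if any
const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
const theme = stored ?? (prefersDark ? 'dark' : 'light');      // OS preference as fallback
document.documentElement.setAttribute('data-theme', theme);
// Running this inline in <head>, before the body renders, is one common way
// to avoid the flash.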

I just added a little form to the webmentions section of the posts so you can send your webmention manually if it didn’t reach my website. Or you can use the ‘Write a comment’ button to create a comment (which can be anonymous) through the Comment Parade service.

For the curious ones out there, the code is simple. Please remember that I’m using Go templates with Hugo:

<form action="https://webmention.io/hacdias.com/webmention" method="post">
  <input name="source" placeholder="Have you written a response? Paste its URL here!" type="url" required>
  <input name="target" value="{{ .Permalink }}" type="hidden">
  <input value="Send Webmention" type="submit">
</form>

<form method="get" action="https://quill.p3k.io/" target="_blank">
  <input type="hidden" name="dontask" value="1">
  <input type="hidden" name="me" value="https://commentpara.de/">
  <input type="hidden" name="reply" value="{{ .Permalink }}">
  <input type="submit" value="Write a comment">
</form>

Jan-Lukas Else 22 Jan 2020 10:28

I don’t show webmention content at all. 😅 Instead I just show a link to the “interaction”. That removes a lot of complexity with parsing, storing etc. but probably isn’t as intuitive: it requires opening the “interactions” section below the post and visiting the link.

I enjoy showing the webmention and its context (a reply to what? a repost of what? a like of what?) because, as you know, content on the Internet is ephemeral, and if I don’t store it, I have no assurance that it will remain available. And that’s the main reason why I show the webmentions.

In any case, it’s not the webmentions that worry me, but the post contexts that I show on replies, likes and reposts… Need to decide on that: either remove the pictures, or store them.

Just made a few updates to my website:

  • Removed Tachyons.
  • New roomy header: took some inspiration from @jlelse’s and from a previous version of my website.
  • Added two new pages that have no information yet: now and use. You can find them on the header.
  • I now have a blogroll that you can find here.
  • If you look at the bottom of the page, you now have links for all the categories too.

This may seem like a small update. Maybe some won’t like the new design as much as the previous one, but I assure you: it’s at least 70 KB smaller! At least! I didn’t actually measure it 😋

I may make some updates in the future, but for now I have some interesting plans for the now page!

I wanted this website to have a cleaner look and to make an about page instead of dumping all of that on the front page. However, I don’t feel like the current All page looks that good, so… I don’t know.

In addition, I’d like to refactor my header to have more room for links, instead of having them in the footer. Some of them may stay there, but I’d like to bring Bookmarks to the top, along with two other future pages I want to make (Now and Uses).

Just exported this website’s old comments from Disqus and closed my account. Once I figure out the structure I want to store my webmentions/comments in, I will work on importing them. There aren’t that many… maybe 20? I could do it rather manually.

Oopsie doopsie. I just found a bug in my webmention endpoint where I was not writing the webmentions to the correct place. Right now, I’m storing them as plain webmentions from webmention.io. I should probably think about a simpler way to store them so they’re easier to parse later in the theme.

Right now, I have an index that records, for each permalink on my blog, which likes and “others” it has. I show the likes as mere “heads” on the post, and then the “others”, which might be reposts or retweets. The truth is that any of those can contain content.
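
In TypeScript terms, the index is roughly shaped like this; the field names are illustrative, and the real structure is exactly what I still have to settle on:

// Rough shape of the index, keyed by the permalink of the post on my blog.
// Field names are made up, not the real ones.
interface MentionAuthor {
  name: string;
  url: string;
  photo?: string; // the photos now served through the media endpoint
}

interface MentionEntry {
  source: string;        // URL of the like, repost, reply…
  author: MentionAuthor;
  content?: string;      // any of these can carry content
}

type MentionIndex = Record<string, {
  likes: MentionEntry[];  // shown as mere "heads"
  others: MentionEntry[]; // reposts, retweets, etc.
}>;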

What do I need to store? That’s the important question.