Diffstat (limited to 'content')
99 files changed, 10998 insertions, 0 deletions
diff --git a/content/_index.md b/content/_index.md new file mode 100644 index 0000000..ec0ccd7 --- /dev/null +++ b/content/_index.md @@ -0,0 +1,94 @@
---
title: About Me
---

Freedom-focused hacker. I love everything about free software, and spend much
time contributing to it myself. I occasionally write blog posts to help other
people get started with free software, show the issues with proprietary
software, or just talk about something I find fun and interesting.

Pretty much all of my projects can be found on
[git.tyil.nl](https://git.tyil.nl), though many of them are also mirrored on
other code forges.

I can be reached through various means of communication, all of them usable
through completely free software. Scroll down to the **Channels** section for
more information.

## Contact

### PGP

My public PGP key is available [from my own site][pubkey], or from a public key
server such as [pgp.mit.edu][pubkey-mit]. The fingerprint is:

    1660 F6A2 DFA7 5347 322A 4DC0 7A6A C285 E2D9 8827

You can also fetch my PGP key using
[WKD (Web Key Directory)](/post/2020/05/30/setting-up-pgp-wkd/):

    gpg --locate-key p.spek@tyil.nl

[pubkey]: /pubkey.txt
[pubkey-mit]: http://pgp.mit.edu/pks/lookup?op=vindex&search=0x7A6AC285E2D98827

### Channels

#### Email

Email contact goes via [p.spek@tyil.nl][mail]. Be sure to at least sign all
mail you send me. Even better would be encrypted mail using my [PGP
key][pubkey].

I do not read my mailboxes very often, so please do not expect a timely
response. If you require a response as soon as possible, please find me on IRC
instead.

#### Fediverse

I host my own Misskey instance to interact with the wider Fediverse.

- [`@tyil@fedi.tyil.nl`](https://fedi.tyil.nl/@tyil)

#### IRC

I am active on various IRC networks, most often under the nick `tyil`.
All of these are connected from the same client, so you can pick any of them
if you wish to have a real-time chat with me.

- [DareNET](https://darenet.org)
- [Libera](https://libera.chat)
- [OFTC](https://www.oftc.net/)
- [Rizon](https://rizon.net)

#### Matrix

As the years have gone by, I've been losing faith in Matrix more and more. I
still have an account, and I would be happy if it ever got good, but I
personally am not counting on that to happen anymore.

- `@tyil:matrix.org`

#### XMPP

If IRC is not your thing, I can be reached for personal chats on XMPP too.

- `tyil@disroot.org`
- `tyil@chat.tyil.nl`

## Other links

- [Sourcehut account][git-srht]
- [GitLab account][git-gl]
- [GitHub account][git-gh]

## RSS

If you'd like to stay up-to-date with my posts, you can subscribe to the [RSS
feed](/posts/index.xml).

[git-gh]: https://github.com/tyil
[git-gl]: https://gitlab.com/tyil
[git-srht]: https://sr.ht/~tyil/
[mail]: mailto:p.spek@tyil.nl
[pubkey]: /pubkey.txt

diff --git a/content/http-404.md b/content/http-404.md new file mode 100644 index 0000000..a0fd734 --- /dev/null +++ b/content/http-404.md @@ -0,0 +1,6 @@
---
title: "HTTP 404: File Not Found"
url: http-404.html
---

The file you were looking for could not be found.

diff --git a/content/posts/2016/2016-10-01-on-pastebin.md b/content/posts/2016/2016-10-01-on-pastebin.md new file mode 100644 index 0000000..cb54542 --- /dev/null +++ b/content/posts/2016/2016-10-01-on-pastebin.md @@ -0,0 +1,80 @@
---
date: 2016-10-01
title: On Pastebin
tags:
- Pastebin
- Security
- Cloudflare
- Privacy
---

Pastebin offers itself as a gratis paste service. Although it is probably the
most well-known option out there, it is certainly not the best.

## The security issue
Pastebin has a couple of issues that harm the visitor's security. These alone
are bad enough practice that no one should use their service at all.
### Cloudflare
Cloudflare is a [MITM][mitm]. It completely breaks the secure chain of TLS on
the web, and should not be used. Any service still using Cloudflare should be
shunned. There is [another article][cloudflare] on this site with more
information on this specific issue. In addition, Cloudflare can be considered a
privacy issue for the same reasons, as detailed below.

### Advertisements
Another security issue with Pastebin is its advertisements. While it can be
argued that "they need to make money somehow", using ads always seems like the
worst possible solution, especially given the way they're served. The past
couple of years have shown that advertisements on the web are easily abused to
serve malware to good netizens who decided not to block all ads.

A rant on the state of ads might be appropriate, but this article is
specifically about Pastebin, so I will keep it at "third-party advertisements
are a security risk, avoid sites that use them".

## The privacy issue
Apart from its security issues, Pastebin also poses some privacy issues. As
stated above, it makes use of Cloudflare. This means that whenever you visit
the site, Cloudflare takes note of it. Cloudflare may even decide that you need
to perform some additional tasks in order to be allowed to reach the resource.
This doesn't happen to most users, but if you're using any anonymization
practices, it will happen almost every time you visit a site behind Cloudflare.

If this "additional step" is required, you will also inform another third
party, Google. This is done via the new reCAPTCHA system, which informs Google
of almost every detail of your browser and the behaviour used to solve the
puzzle. Incredibly useful for fingerprinting you across multiple locations.
### Then there is Tor
But if you're using an anonymization proxy such as Tor, even if you do not
care about the Cloudflare issue, and you solve the "security check" presented
to you, Pastebin still refuses to offer you its service. If they are going to
refuse you service, they should tell you up front, not after you have already
informed two other harmful parties of your attempt to access the resource.

Actually, they should not. They should simply not require you to give up your
privacy, and serve you the content you were looking for. Blocking resources to
a certain group of users is simply censorship, and should not be the status quo
on the free internet.

## Alternatives
Luckily, there are plenty of alternatives that do not treat their users with
such disrespect. I ask anyone who is still using Pastebin to stop, and use any
of the alternatives instead.

* [0bin.net](https://0bin.net/)
* [cry.nu][crynu] (works like termbin: `nc cry.nu 9999 < file`)
* [ix.io][ix]
* [p.tyil.nl][tyilnl] (works like termbin: `nc p.tyil.nl 9999 < file`)

[cloudflare]: /articles/on-cloudflare/
[crynu]: https://cry.nu
[hastebin]: http://hastebin.com
[ix]: http://ix.io/
[mitm]: https://en.wikipedia.org/wiki/Man-in-the-middle_attack
[termbin]: http://termbin.com
[tyilnl]: /

diff --git a/content/posts/2016/2016-10-01-on-systemd.md b/content/posts/2016/2016-10-01-on-systemd.md new file mode 100644 index 0000000..9bd46d8 --- /dev/null +++ b/content/posts/2016/2016-10-01-on-systemd.md @@ -0,0 +1,286 @@
---
date: 2016-10-01
title: On Systemd
tags:
- Systemd
- Security
- GNU+Linux
---

Systemd once presented itself as the next-generation init system for
GNU+Linux. When the project started it seemed to be headed in a good direction.
Unfortunately, it quickly became clear that systemd's goal was not only to
bring you a quick, new init system. It planned to do so much more.
This was part of the plan: since init systems were generally considered to be
in a bad state overall, systemd was quickly accepted by most mainstream
GNU+Linux distributions. What was at first only an init system became so much
more: systemd-logind was made to manage TTYs, systemd-resolved was added to act
as a caching DNS server. Even networking was added with systemd-networkd to
manage network interfaces.

**DISCLAIMER**: Systemd is a fast-moving project, so some information here may
become outdated. If you find any information that is no longer correct, please
contact me. You can find my contact details [on my homepage][tyil].

## Technical issues
### Security
From experience, we have seen that systemd's creator, Lennart Poettering, will
try to assimilate any functionality he can find and add it into systemd. This
gives systemd a very large attack surface, adding to and magnifying security
attack vectors. An init system should be exactly the opposite. To compound this
issue, we have bugs like [the user-level DoS][systemd-dos], which seem to
indicate that the software is hardly tested, or written by programmers who
don't follow best practices.

### POSIX
POSIX compliance. Systemd developers seem to detest it. Their common argument
against retaining POSIX compliance is that "systemd must break POSIX compliance
in order to further the development of GNU+Linux userland utilities". While
this may be true in some sense, it is a very bad idea to ignore POSIX
altogether.

POSIX is one of the reasons that most applications running on GNU+Linux and
other Unix-like systems are very portable. It's a standard that most OSes and
distros try to meet, making it easy to port software.

[natermeer on Reddit][reddit-natermeer] said
> POSIX has almost no relevance anymore.
>
> [...]
>
> If you care about portability you care about it running on OS X and Windows
> as well as your favorite \*nix system.
> POSIX gains you nothing here. A lot
> of the APIs from many of these systems will resemble POSIX closely, but if
> you don't take system-specific differences into account you are not going
> to accomplish much.

> I really doubt that any Init system from any Unix system uses only POSIX
> interfaces, except maybe NetBSD. All of them are going to use scripts and
> services that are going to be running commands that use kernel-specific
> features at some point. Maybe a init will compile and can be executed on
> pure POSIX api, but that is a FAR FAR cry from actually having a booted and
> running system.

Which was replied to by [aidanjt][reddit-aidanjt]
> Wrong, both OS X and Windows have POSIX support, although Window's is emulated,
> OS X certainly is not, it's fully POSIX compliant. and b) POSIX doesn't have to
> work identically everywhere, it only has to be more or less the same in most
> places and downstream can easily patch around OS-specific quirks. Even
> GNU/Linux and a bunch of the BSDs are merely regarded as 'mostly' POSIX
> compliant, after all. But if you ignore POSIX entirely, there's ZERO hope of
> portability.
>
> Actually sysvinit is very portable, init.c only has 1 single Linux header which
> has been #ifdef'ed, to handle the three-finger-salute. You see, init really
> isn't that complicated a programme, you tell the kernel to load it after it's
> done it's thing, init starts, and loads distro scripts which starts userspace
> programmes to carry on booting. No special voodoo magic is really required.
> POSIX is to thank for that. POSIX doesn't need to be the only library eva, it
> only needs to handle most of the things you can't do without, without having to
> directly poke at kernel-specific interfaces.
>
> This is why with POSIX, we can take a piece of software written for a PPC AIX
> mainframe, and make it work on x86 Linux without a complete rewrite, usually
> with only trivial changes.
### Dependencies and unportability
Another common issue with systemd is that applications have started to
needlessly depend on it, forcing systemd onto users who do not wish to use it
for the reasons outlined here or elsewhere, or who are simply unable to use it.
Because systemd complies with no cross-platform standard and uses many features
only available in recent Linux versions, it's either very hard or impossible to
implement systemd in some circumstances.

The list of features it requires is no small one either, as you can see in the
list [posted by ohet][reddit-ohet]:

- `/dev/char`
- `/dev/disk/by-label`
- `/dev/disk/by-uuid`
- `/dev/random`
- `/dev/rtc`
- `/dev/tty0`
- `/proc/$PID/cgroup`
- `/proc/${PID}/cmdline`
- `/proc/${PID}/comm`
- `/proc/${PID}/fd`
- `/proc/${PID}/root`
- `/proc/${PID}/stat`
- `/proc/cmdline`
- `/sys/class/dmi/id`
- `/sys/class/tty/console/active`
- `BTRFS_IOC_DEFRAG`
- `CLONE_xxx`
- `F_SETPIPE_SZ`
- `IP_TRANSPORT`
- `KDSKBMODE`
- `O_CLOEXEC`
- `PR_CAPBSET_DROP`
- `PR_GET_SECUREBITS`
- `PR_SET_NAME`
- `PR_SET_PDEATHSIG`
- `RLIMIT_RTPRIO`
- `RLIMIT_RTTIME`
- `SCHED_RESET_ON_FORK`
- `SOCK_CLOEXEC`
- `TIOCLINUX`
- `TIOCNXCL`
- `TIOCVHANGUP`
- `VT_ACTIVATE`
- `\033[3J`
- `audit`
- `autofs4`
- `capabilities`
- `cgroups`
- `fanotify`
- `inotify`
- `ionice`
- `namespaces`
- `oom score adjust`
- `openat()` and friends
- `selinux`
- `settimeofday()` and its semantics
- `udev`
- `waitid()`
- numerous GNU APIs like `asprintf`

This made [Gnome][gnome] unavailable for a long time to BSD users and GNU+Linux
users who wanted to remain with a sane and proven system. Utilities like
[Gummiboot][gummiboot] are now being absorbed by systemd too. It is only a
matter of time before you can no longer use this utility without a systemd init
behind it.
There are too many examples to list of software that is being assimilated, or
made unavailable by lazy or bad developers who choose to depend on systemd for
whatever reason.

### Speed
The main selling point many systemd users hail all the time is speed. They
place an unusually high amount of value on booting a couple of seconds faster.
Systemd gains this speed by using parallelization, and many think this is
unique to systemd. Luckily for those who want to stick to a more sane system,
this is false. Other init systems, such as [OpenRC][openrc], used by
[Funtoo][funtoo], and [runit][runit], used by [Voidlinux][voidlinux], both
support parallel startup of services. Both these systems use small and
effective shell scripts for this, and support startup dependencies and the
like. Systemd brings nothing new to the init world, it just advertises these
features more aggressively.

### Modularity
The UNIX principle, *make an application perform one task very well*, seems to
be very unpopular among systemd developers. This principle is one of the
reasons why UNIX-based systems have gotten so popular. Yet the systemd
developers seem to despise it, and even try to argue that systemd actually is
modular because **it compiles down to multiple binaries**. This shows a lack of
understanding, which would make most users uneasy when they consider that these
people are working on one of the most critical pieces of their OS.

The technical problem this brings is that it is very hard to use systemd with
existing tools. `journald`, for instance, doesn't just output plain text you
can easily filter through, save, or feed to a pager. It decides for you how to
represent this information, even if that might be an ineffective way to go
about it.
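The contrast is easy to demonstrate: a plain-text log can be filtered, saved, or paged with any standard tool, no special reader required. A quick sketch (the log lines and filters are made up for the example):

```sh
# Create a small syslog-style sample log (fabricated entries).
printf '%s\n' \
  'Oct  1 10:00:01 host sshd[211]: Accepted publickey for tyil' \
  'Oct  1 10:00:05 host cron[300]: (root) CMD (run-parts /etc/cron.hourly)' \
  'Oct  1 10:00:09 host sshd[212]: Failed password for invalid user admin' \
  > sample.log

# Plain text composes with grep, awk, a pager, or anything else:
grep 'sshd' sample.log                   # filter by service
awk '/Failed/ { print $NF }' sample.log  # extract a field: prints "admin"
```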
### Binary logs
Hailed by systemd users and developers as a more efficient, fast, and secure
way to store your logs, binary logging is yet another middle finger to the UNIX
principles, which state that documents intended for the user should be human
readable. Binary logs are exactly not that. This forces you to use the tools
bundled with systemd, instead of your preferred solution. It also means you
need a system with systemd in order to read your logs, which you generally need
most when the system that generated them has crashed. Thanks to systemd, those
logs are now useless unless you have another systemd available to read them.

These logs are also very fragile. It is a common "issue" to have corrupted logs
when using systemd. Corrupted is in quotes here because the systemd developers
do not recognize this as a bug. Instead, you should just rotate your logs and
hope it does not happen again.

The usual counter to this issue is that you *can* tell systemd to use another
logger. However, this does not stop `journald` from processing the logs first,
nor does it let you drop `journald` entirely. As systemd is not modular, you
will always have all the pieces installed. It should also be noted that this is
a *workaround*, not a fix for the underlying problem.

## Political issues
### Aggressively forced upon users
A point that has made many systemd opponents very wary of this huge piece of
software is the way it was introduced. Unlike most free software packages,
systemd was forced into the lives of many users by software taking hard
dependencies on it, or by it simply absorbing a critical piece of software
through the use of political power. The two most prominent pieces of software
where this has happened are [Gnome][gnome] and [`udev`][udev].

The Gnome developers made a hard dependency on systemd. This in effect made
every Gnome user suddenly require systemd. As a result, FreeBSD had to actually
drop Gnome for a while, as systemd does not run outside of GNU+Linux.
The other, `udev`, was a critical piece of software to manage devices in
GNU+Linux. Sadly, some political power was shown by Red Hat and `udev` got
absorbed into systemd. Luckily, the Gentoo folks saw this issue and tried to
resolve it. As the systemd developers dislike anything that's not systemd
itself, they stubbornly refused the patches from Gentoo which would have kept
`udev` a single component (and thus usable without systemd). In the end, the
Gentoo developers forked `udev` into [`eudev`][eudev].

### Unwillingness to cooperate
Whenever someone from outside the systemd fan groups steps up to actually
improve systemd in whatever way, the systemd devs seem to be rather
uncooperative. It is not uncommon for developers from other projects to propose
changes that would improve their own projects (and usually others as well).
This would remove a lot of the cost for the systemd maintainers of dealing with
the issues they are creating.

There are some references to the systemd developers being against changes that
might make systemd less of a problem, but these changes are usually denied with
petty excuses.

- https://lists.freedesktop.org/archives/systemd-devel/2012-June/005466.html
- https://lists.freedesktop.org/archives/systemd-devel/2012-June/005507.html

## How to avoid it
### Choosing a better OS or distribution
Nowadays, the only way to avoid it without too much trouble is by simply
choosing a better OS or distro that does not depend on systemd at all. There
are a few choices for this:

- \*BSD ([FreeBSD][freebsd], [OpenBSD][openbsd], and others)
- [Devuan][devuan]
- [Funtoo][funtoo]
- [Voidlinux][voidlinux]

It is a shame that this renders a very large chunk of the GNU+Linux world
unavailable when choosing a distro, but they have chosen laziness over a
working system. The only way to tell them at this point that they have made a
wrong decision is to simply stop using these distros.
+ +### More links + +- [Broken by design: systemd][broken-systemd] +- [Without systemd][without-systemd] +- [systemd is the best example of Suck][suckless-systemd] +- [Thoughts on the systemd root exploit][agwa-systemd-root-exploit] (In response to [CVE-2016-10156][cve-2016-10156]) +- ["systemd: Please, No, Not Like This"](https://fromthecodefront.blogspot.nl/2017/10/systemd-no.html) + +[agwa-systemd-root-exploit]: https://www.agwa.name/blog/post/thoughts_on_the_systemd_root_exploit +[broken-systemd]: http://ewontfix.com/14/ +[cve-2016-10156]: http://www.openwall.com/lists/oss-security/2017/01/24/4 +[devuan]: https://devuan.org/ +[eudev]: https://wiki.gentoo.org/wiki/Eudev +[freebsd]: https://www.freebsd.org/ +[funtoo]: http://www.funtoo.org/Welcome +[gentoo]: https://gentoo.org +[gnome]: http://www.gnome.org/ +[gummiboot]: https://en.wikipedia.org/wiki/Gummiboot_(software) +[openbsd]: https://www.openbsd.org/ +[openrc]: https://en.wikipedia.org/wiki/OpenRC +[reddit-aidanjt]: https://www.reddit.com/r/linux/comments/132gle/eli5_the_systemd_vs_initupstart_controversy/c72saay +[reddit-natermeer]: https://www.reddit.com/r/linux/comments/132gle/eli5_the_systemd_vs_initupstart_controversy/c70hrsq +[reddit-ohet]: https://www.reddit.com/r/linux/comments/132gle/eli5_the_systemd_vs_initupstart_controversy/c70cao2 +[runit]: http://smarden.org/runit/ +[suckless-systemd]: http://suckless.org/sucks/systemd +[systemd-dos]: https://github.com/systemd/systemd/blob/b8fafaf4a1cffd02389d61ed92ca7acb1b8c739c/src/core/manager.c#L1666 +[tyil]: http://tyil.work +[udev]: https://wiki.gentoo.org/wiki/Eudev +[voidlinux]: http://www.voidlinux.eu/ +[without-systemd]: http://without-systemd.org/wiki/index.php/Main_Page diff --git a/content/posts/2016/2016-10-25-setup-a-vpn-with-cjdns.md b/content/posts/2016/2016-10-25-setup-a-vpn-with-cjdns.md new file mode 100644 index 0000000..52d9237 --- /dev/null +++ b/content/posts/2016/2016-10-25-setup-a-vpn-with-cjdns.md @@ -0,0 +1,212 @@ +--- +date: 
2016-10-25
title: Setup a VPN with cjdns
tags:
- Tutorial
- VPN
- cjdns
- GNU+Linux
- FreeBSD
---

In this tutorial I will outline a simple setup for a [VPN][vpn] using
[`cjdns`][cjdns]. Cjdns will allow you to set up a secure mesh VPN which uses
IPv6 internally.

## Requirements
For this tutorial, I have used two client machines, both running Funtoo. A
FreeBSD 11 server is used as a global connection point.

You are of course able to use any other OS or distro supported by cjdns, but
you may have to adapt some steps to your environment in that case.

## Installation of the server
### Dependencies
Before you can begin, you need some dependencies. There are only two of them,
and they are available via `pkg` to make it even easier. Install them as
follows:

```
pkg install gmake node
```

### Compiling
Next up is getting the cjdns sources and compiling them, as cjdns is not
available as a prebuilt package:

```
mkdir -p ~/.local/src
cd $_
git clone https://github.com/cjdelisle/cjdns.git cjdns
cd $_
./do
```

To make the compiled binary available system-wide so we can use it with a
system service, copy it to `/usr/local/bin` and rehash to make it available as
a direct command:

```
cp cjdroute /usr/local/bin/.
hash -r
```

### Configuring
Cjdns provides a flag to generate the initial configuration. This will provide
you with some sane defaults where only a couple of small changes are needed to
make it work properly. Generate these defaults with `--genconf`:

```
(umask 177 && cjdroute --genconf > /usr/local/etc/cjdroute.conf)
```

The umask makes the command write the file with `600` permissions. This makes
sure the config file is not readable by people who shouldn't be able to read
it. Be sure to check whether the owner of the file is `root`!

Now you can start actually configuring the node to allow incoming connections.
You have to find the `authorizedPasswords` array in the `cjdroute.conf` file
and remove its contents. Then you can add your own machines to it. This guide
assumes two clients, so the config for two clients is shown here. You can add
more clients if you wish, of course.

```json
"authorizedPasswords":
[
    {"password": "aeQu6pa4Vuecai3iebah7ogeiShaeDaepha6Mae1yooThoF0oa0Eetha9oox", "user": "client_1"},
    {"password": "aiweequuthohkahx4tahLohPiezee9OhweiShoNeephe0iekai2jo9Toorah", "user": "client_2"}
]
```

If you need to generate a password, you can make use of the tool `pwgen`,
available from your local package manager. You can then generate new passwords
by running `pwgen 60 -1`. Change the `60` if you want passwords of a different
size.

### Adding a startup service
FreeBSD's rc system has deceptively easy scripts to make applications available
as services. This in turn allows you to enable a service at startup. This way
you can make sure cjdns starts whenever the server boots. You can copy the
following contents directly into `/usr/local/etc/rc.d/cjdroute`:

```sh
#!/bin/sh

# PROVIDE: cjdroute
# KEYWORD: shutdown

#
# Add the following lines to /etc/rc.conf to enable cjdroute:
#
#cjdroute_enable="YES"

. /etc/rc.subr

name="cjdroute"
rcvar="cjdroute_enable"

load_rc_config $name

: ${cjdroute_config:=/usr/local/etc/cjdroute.conf}

command="/usr/local/bin/cjdroute"
command_args=" < ${cjdroute_config}"

run_rc_command "$1"
```

Afterwards, you must enable the service in `/etc/rc.conf.local` as follows:

```
echo 'cjdroute_enable="YES"' >> /etc/rc.conf.local
```

## Installation of the clients
### Dependencies
The dependencies are still `gmake` and `node`, so simply install those on
your clients.
This guide assumes Funtoo for the clients, so installation goes as follows:

```
emerge gmake nodejs
```

### Compiling
Compilation is the same as for the server, so check back there for more
information if you have already forgotten.

### Configuring
Generating the base configuration is again done using `cjdroute --genconf`,
just like on the server. On Funtoo, config files generally reside in `/etc`
instead of `/usr/local/etc`, so adjust the path you write the configuration to
accordingly:

```
cjdroute --genconf > /etc/cjdroute.conf
```

Setting up the connections differs as well, as the clients are going to make an
outbound connection to the server, which is configured to accept inbound
connections.

You should still clean the `authorizedPasswords` array, as it comes with a
default entry that is uncommented.

Now you can set up outbound connections on the clients. You set these up in the
`connectTo` block of `cjdroute.conf`. For this example, the IP 192.168.1.1 is
used to denote the server IP. Unsurprisingly, you should change this to your
server's actual IP. You can find the `publicKey` value at the top of your
server's `cjdroute.conf` file.

On client 1, put the following in your `cjdroute.conf`:

```json
"connectTo":
{
    "192.168.1.1:9416":
    {
        "login": "client_1",
        "password": "aeQu6pa4Vuecai3iebah7ogeiShaeDaepha6Mae1yooThoF0oa0Eetha9oox",
        "publicKey": "thisIsJustForAnExampleDoNotUseThisInYourConfFile_1.k"
    }
}
```

On client 2:

```json
"connectTo":
{
    "192.168.1.1:9416":
    {
        "login": "client_2",
        "password": "aiweequuthohkahx4tahLohPiezee9OhweiShoNeephe0iekai2jo9Toorah",
        "publicKey": "thisIsJustForAnExampleDoNotUseThisInYourConfFile_1.k"
    }
}
```

That is all for configuring the nodes.

### Adding a startup service
You probably want cjdroute to run at system startup so you can immediately use
your VPN.
For OpenRC-based systems, such as Funtoo, cjdns comes with a ready-to-use
service script. To make this available to your system, copy it over to the
right directory:

```
cp ~/.local/src/cjdns/contrib/openrc/cjdns /etc/init.d/cjdroute
```

Now add the service to system startup and start the service:

```
rc-update add cjdroute default
rc-service cjdroute start
```

That should be sufficient to get cjdns up and running for an encrypted VPN. You
can find the IPs of each of your systems at the top of your `cjdroute.conf`
files, in the `ipv6` attribute.

[cjdns]: https://github.com/cjdelisle/cjdns
[vpn]: https://en.wikipedia.org/wiki/Virtual_private_network

diff --git a/content/posts/2016/2016-10-25-setup-nginx-with-lets-encrypt-ssl.md b/content/posts/2016/2016-10-25-setup-nginx-with-lets-encrypt-ssl.md new file mode 100644 index 0000000..8c7caa0 --- /dev/null +++ b/content/posts/2016/2016-10-25-setup-nginx-with-lets-encrypt-ssl.md @@ -0,0 +1,229 @@
---
date: 2016-10-25
title: Setup nginx with Let's Encrypt SSL
tags:
- Tutorial
- LetsEncrypt
- Nginx
- SSL
- Encryption
---

This is a small tutorial to set up nginx with Let's Encrypt on a FreeBSD server
to host a static site.

## Install required software
First you have to install all the packages we need in order to get this server
going:

```sh
pkg install nginx py27-certbot
```

## Configure nginx
Next is nginx. To make life easier, you should configure nginx to read all
configuration files from another directory. This allows you to store each site
as a separate configuration file in a dedicated directory. Such a setup is a
regular sight on nginx installations on GNU+Linux distributions, but not the
default on FreeBSD.
Open up `/usr/local/etc/nginx/nginx.conf` and make the contents of the `http`
block look as follows:

```nginx
http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    # default paths
    index index.html;

    # disable gzip - https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=773332
    gzip off;

    # default ssl settings
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5:!AES128:!CAMELLIA128;
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /usr/local/etc/ssl/dhparam.pem;

    # default logs
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    # default server
    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/local/www/nginx;
            index index.html index.htm;
        }

        error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;

        location = /50x.html {
            root /usr/local/www/nginx-dist;
        }
    }

    # include site-specific configs
    include sites/*.conf;
}
```

This sets default SSL settings for all server blocks that enable SSL. Note that
these are settings I use, and they are in no way guaranteed to be perfect. I
did some minor research on these settings to get an acceptable rating on
[SSL Labs][ssllabs]. However, security does not stand still, and there is a
decent chance that my settings will become outdated. If you have better
settings that result in a safer setup, please [contact me][contact].

### Setup HTTP
Due to the way `certbot` works, you need a functioning web server. Since there
is no usable cert yet, this means hosting an HTTP version first. The tutorial
assumes a static HTML website is being hosted, so the configuration is pretty
easy.
Put the following in `/usr/local/etc/nginx/sites/domain.conf`:

```nginx
# static HTTP
server {
    # listeners
    listen 80;
    server_name domain.tld www.domain.tld;

    # site path
    root /srv/www/domain/_site;

    # / handler
    location / {
        try_files $uri $uri/ =404;
    }

    # logs
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
}
```

If your site's sources do not reside in `/srv/www/domain/_site`, change the
path accordingly. This guide will continue using this path for all examples, so
be sure to modify this where needed. In the same vein, the domain `domain.tld`
will be used. Modify this to your own domain.

### Start nginx
Nginx is now configured to host a single site over HTTP. Now is the time to
enable the nginx service. Execute the following:

```sh
echo 'nginx_enable="YES"' >> /etc/rc.conf.local
```

This will enable nginx as a system service. On reboots, it will be started
automatically. You can also start it up without rebooting by running the
following:

```sh
service nginx start
```

## Configure Let's Encrypt
Nginx is now running as your web server on port 80. Now you can request Let's
Encrypt certificates using `certbot`. You can do so as follows:

```sh
certbot certonly --webroot -w /srv/www/domain/_site -d domain.tld -d www.domain.tld
```

In case you want to add any subdomains, simply add more `-d sub.domain.tld`
arguments at the end. If the DNS entries for the domains resolve properly, and
no unexpected errors occur on the Let's Encrypt side, you should see a message
congratulating you on your new certs.

If your domains do not resolve correctly, `certbot` will complain about this.
You will have to resolve your DNS issues before attempting again.

If `certbot` complains about an unexpected error on their side, wait a couple
of minutes and retry the command. It should work, eventually.
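If you would rather script that retry than re-run the command by hand, a small helper loop does the job. This is just a sketch: the attempt count and delay are arbitrary, and the commented-out `certbot` invocation is the one from above.

```sh
# Retry a command a few times, pausing between attempts.
# Usage: retry <max-attempts> <delay-seconds> <command> [args...]
retry() {
    attempts=$1
    delay=$2
    shift 2
    i=1
    while ! "$@"; do
        if [ "$i" -ge "$attempts" ]; then
            return 1
        fi
        i=$((i + 1))
        sleep "$delay"
    done
}

# Example (arbitrary choice of 5 attempts, 5 minutes apart):
# retry 5 300 certbot certonly --webroot -w /srv/www/domain/_site -d domain.tld
```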
Once `certbot` has run without errors, the required files should be available
in `/usr/local/etc/letsencrypt/live/domain.tld`.

## Configure nginx with SSL
The certificate has been issued and base nginx is running. Now is the time to
re-configure your site on nginx to host the HTTPS version of your site instead.
Open up `/usr/local/etc/nginx/sites/domain.conf` again, and make the contents
look like the following:

```nginx
# redirect HTTP to HTTPS
server {
	# listeners
	listen 80;
	server_name domain.tld *.domain.tld;

	# redirects
	return 301 https://$host$request_uri;
}

# static HTTPS
server {
	# listeners
	listen 443 ssl;
	server_name domain.tld www.domain.tld;

	# site path
	root /srv/www/domain/_site;

	# / handler
	location / {
		try_files $uri $uri/ =404;
	}

	# enable HSTS
	add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";

	# keys
	ssl_certificate /usr/local/etc/letsencrypt/live/domain.tld/fullchain.pem;
	ssl_certificate_key /usr/local/etc/letsencrypt/live/domain.tld/privkey.pem;
}
```

Do not forget to update all the paths to match your setup!

As a final step, you should generate the dhparam file. This is to avoid the
issues described on [Weak DH][weakdh].

```sh
openssl dhparam -out /usr/local/etc/ssl/dhparam.pem 4096
```

Be aware that this step can take a **very** long time. On the machine I used to
test this tutorial, with 1 core and 1 GB of RAM, it took nearly an hour to
generate this file.

### Reload nginx
The final step is to reload the nginx configuration so it hosts the SSL version
of your site, and redirects the HTTP version to the HTTPS version. To do this,
simply run

```sh
service nginx reload
```

That should be all it takes to get your site working with HTTP redirecting to
HTTPS, and HTTPS running using a gratis Let's Encrypt certificate.
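One thing this guide does not cover: Let's Encrypt certificates are only valid
for 90 days, so you will want renewal to happen automatically. A sketch of a
root crontab entry that should take care of it, assuming the `certbot` and
nginx setup from this guide (test with `certbot renew --dry-run` first):

```
# attempt renewal twice a day; certbot only replaces certificates that
# are close to expiry, and nginx is reloaded to pick up the new files
0 0,12 * * * certbot renew --post-hook "service nginx reload"
```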
[contact]: https://www.tyil.work/
[ssllabs]: https://www.ssllabs.com/ssltest/analyze.html?d=tyil.work&latest
[weakdh]: https://weakdh.org/

diff --git a/content/posts/2016/2016-10-31-freebsd-mailserver-part-1-preparations.md b/content/posts/2016/2016-10-31-freebsd-mailserver-part-1-preparations.md
new file mode 100644
index 0000000..9fc04e7
--- /dev/null
+++ b/content/posts/2016/2016-10-31-freebsd-mailserver-part-1-preparations.md
@@ -0,0 +1,140 @@
---
date: 2016-10-31 07:57:50
title: "FreeBSD email server - Part 1: Preparations"
tags:
- Tutorial
- FreeBSD
- Email
---

This tutorial is divided into multiple chapters to make it more manageable, and
to be able to better explain why certain parts are needed.

The tutorial was created out of my experience setting up my own email server. I
have read through quite a lot of documentation so you do not have to.
Nonetheless, I would recommend doing so. The email business is a tricky one,
with a lot of moving parts that have to fit into each other. Knowing exactly
how each part works will greatly help you understand why it is needed in a
proper email server. Besides that, it will make your life a lot more enjoyable
if you want to tweak some things after this tutorial.

To kick off, some preparations should be done before you start on setting up
your own email server.

## DNS setup
Some DNS setup is required for mail, most importantly the `MX` records of a
domain. Be sure you have a domain available; otherwise, get one. There are
plenty of registrars and the price is pretty low for most domains. If you want
to look hip, get a `.email` TLD for your email server.

For the DNS records themselves, make sure you have an `A` record pointing to
the server IP you're going to use. If you have an IPv6 address, set up an
`AAAA` record as well. Mail uses the `MX` DNS records. Make one with the value
`10 @`.
If you have multiple servers, you can make `MX` records for these as well, but
replace the `10` with a higher value each time (`20`, `30`, etc). These will be
used as fallbacks, in case the server pointed to by the `10` record is
unavailable.

## PostgreSQL
Next up, you will have to install and configure [PostgreSQL][postgres].
Although using a database is not required, this tutorial will make use of one.
Using a database makes administration easier and allows you to add a pretty
basic web interface for this task.

### Installation
Since the tutorial uses FreeBSD 11, you can install PostgreSQL easily by running

```
pkg install postgresql96-server
```

### Starting up
In order to start PostgreSQL, you should enable the system service for it. This
way, `service` can be used to easily manage it. In addition, it will start
automatically on boot.

```
echo 'postgresql_enable="YES"' >> /etc/rc.conf.local
service postgresql start
```

### Database initialization
Since PostgreSQL is a little different from the more popular [MySQL][mysql], I
will guide you through setting up the database as well. To begin, switch user
to `postgres`, which is the default administrative user for PostgreSQL. Then
simply open up the PostgreSQL CLI.

```
su postgres
psql
```

Once you are logged in to PostgreSQL, create a new user which will hold
ownership of the database, and make a database for this user.

```sql
CREATE USER postfix WITH PASSWORD 'incredibly-secret!';
CREATE DATABASE mail WITH OWNER postfix;
```

Once this is done, create the tables which will hold some of our configuration
data.
#### domains
```sql
CREATE TABLE domains (
	name VARCHAR(255) NOT NULL,
	PRIMARY KEY (name)
);
```

#### users
```sql
CREATE TABLE users (
	local VARCHAR(64) NOT NULL,
	domain VARCHAR(255) NOT NULL,
	password VARCHAR(128) NOT NULL,
	PRIMARY KEY (local, domain),
	FOREIGN KEY (domain) REFERENCES domains(name) ON DELETE CASCADE
);
```

#### aliases
```sql
CREATE TABLE aliases (
	domain VARCHAR(255),
	origin VARCHAR(256),
	destination VARCHAR(256),
	PRIMARY KEY (origin, destination),
	FOREIGN KEY (domain) REFERENCES domains(name) ON DELETE CASCADE
);
```

## Let's Encrypt
### Installation
Installing the [Let's Encrypt][letsencrypt] client is just as straightforward
as installing the PostgreSQL database, using `pkg`.

```
pkg install py27-certbot
```

### Getting a certificate
Requesting a certificate requires your DNS entries to resolve properly. If they
do not resolve yet, Let's Encrypt will bother you with errors. If they do
resolve correctly, use `certbot` to get your certificate.

```
certbot certonly --standalone -d domain.tld
```

## Conclusion
This should be everything required to get started on setting up your own email
server. Continue to [part 2][part-2] of this series to start setting up
Postfix.
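Before you move on: with the tables from the **Database initialization**
section in place, you can seed them with a first domain and user to verify the
schema. A sketch, run inside `psql` on the `mail` database (all values are
placeholders; the password must be a hash in the scheme Dovecot is configured
with in part 3, not plain text):

```sql
INSERT INTO domains (name) VALUES ('domain.tld');

INSERT INTO users (local, domain, password)
VALUES ('john', 'domain.tld', 'placeholder-password-hash');

INSERT INTO aliases (domain, origin, destination)
VALUES ('domain.tld', 'postmaster@domain.tld', 'john@domain.tld');
```

Thanks to the `ON DELETE CASCADE` clauses, deleting a row from `domains` later
will also remove its users and aliases.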
[freebsd]: https://www.freebsd.org/
[letsencrypt]: https://letsencrypt.org/
[mysql]: https://www.mysql.com/
[part-2]: /post/2016/10/31/freebsd-mailserver-part-2-mailing-with-postfix/
[postgres]: https://www.postgresql.org/

diff --git a/content/posts/2016/2016-10-31-freebsd-mailserver-part-2-mailing-with-postfix.md b/content/posts/2016/2016-10-31-freebsd-mailserver-part-2-mailing-with-postfix.md
new file mode 100644
index 0000000..58d822f
--- /dev/null
+++ b/content/posts/2016/2016-10-31-freebsd-mailserver-part-2-mailing-with-postfix.md
@@ -0,0 +1,316 @@
---
date: 2016-10-31
title: "FreeBSD email server - Part 2: Mailing with Postfix"
tags:
- Tutorial
- FreeBSD
- Email
- Postfix
---

Welcome to the second part of my FreeBSD email server series. In this series, I
will guide you through setting up your own email service. Be sure to have done
the preparations from [part 1][part-1] of this series.

This part will guide you through setting up email service on your machine using
[Postfix][postfix]. Basic installation is pretty straightforward, but there is
a lot to configure. If you are not sure what some configuration options do,
please read up on them. There is a lot you can do wrong with a mail server, and
doing things wrong will likely get you on a blacklist, which will make other
servers stop processing the mail you are trying to send out.

Setting up Postfix is one of the harder parts of configuring a mail server. If
you have questions after reading the full guide, please find me on IRC. You can
find details on how to do so on [my homepage][home].

## Installing Postfix
Installation procedures on FreeBSD are pretty straightforward. Unlike `certbot`
from the previous part, we will need to compile Postfix from source in order to
use PostgreSQL as a database back-end. Thanks to FreeBSD's
[ports][freebsd-ports], this is not difficult either. If this is your first
port to compile, you probably need to get the ports tree first.
You can download and extract this using the following command.

```sh
portsnap fetch extract
```

Once that has finished running, go into the directory containing the build
instructions for Postfix, and start the installation process.

```sh
cd /usr/ports/mail/postfix
make configure install
```

This will open a pop-up with a number of options you can enable or disable. The
enabled defaults are fine, but you will have to enable the `PGSQL` option. This
will allow you to use the configuration tables created in part 1.

## Enabling Postfix
Enable the Postfix service for rcinit. This allows you to use `service postfix
start` once configuration is done, and will automatically start the service on
system boot. In addition, the default mailer on FreeBSD, [sendmail][sendmail],
should be disabled so nothing is in Postfix's way when processing email
traffic.

```sh
# disable the default sendmail system
echo 'daily_clean_hoststat_enable="NO"' >> /etc/periodic.conf.local
echo 'daily_status_mail_rejects_enable="NO"' >> /etc/periodic.conf.local
echo 'daily_status_include_submit_mailq="NO"' >> /etc/periodic.conf.local
echo 'daily_submit_queuerun="NO"' >> /etc/periodic.conf.local
echo 'sendmail_enable="NONE"' >> /etc/rc.conf.local

# enable postfix
echo 'postfix_enable="YES"' >> /etc/rc.conf.local
```

## Configuring Postfix
There is a ton to configure for Postfix. This configuration happens in two
files, `main.cf` and `master.cf`. Additionally, as some data is in the
PostgreSQL database, three files describing how to query for this information
are needed. All of these files are in `/usr/local/etc/postfix`.

The guide has a comment line for most blocks. It is advised that, **if** you
decide to just copy and paste the contents, you copy that along so you have
some sort of indication of what is where.
This could help you out if you ever need to change anything later on.

### main.cf
#### Compatibility
The configuration file starts off by setting the compatibility level. If
Postfix updates the configuration scheme and deprecates certain options, you
will be notified of this in the logs.

```ini
# compatibility
compatibility_level = 2
```

#### Directory paths
These options indicate where Postfix will look for and keep certain files
required for correct operation.

```ini
# directory paths
queue_directory = /var/spool/postfix
command_directory = /usr/local/sbin
daemon_directory = /usr/local/libexec/postfix
data_directory = /var/db/postfix
```

#### Domain configuration
The domain configuration instructs the server of the domain(s) it should serve
for. Use your FQDN without sub domains for `mydomain`. You can use a sub domain
for `myhostname`, but you are not required to. The most common setting is using
a `mail` sub domain for all mail-related activities, which would result in
something like this.

```ini
# domain configuration
myhostname = mail.domain.tld
mydomain = domain.tld
myorigin = $mydomain
```

#### Listening directives
All network interfaces the server should listen on, and all domains this server
should consider itself the endpoint for, should be listed here. The defaults in
the example block are good enough, as we put some of our data in the PostgreSQL
database instead.

```ini
# listening directives
inet_interfaces = all
mydestination = $myhostname, localhost.$mydomain, localhost
```

#### Reject unknown recipients
This setting determines how to deal with messages sent to an email address
whose domain points to your server's address, but which has no actual mailbox.
A code of `550` informs the remote server that delivery is not possible and
will not become possible. This should stop the remote server from trying it
again.
```ini
# reject unknown recipients
unknown_local_recipient_reject_code = 550
```

#### Trust
```ini
# trust
mynetworks_style = host
```

#### Address extensions
This block is optional. It allows you to use email address extensions. These
are addresses with an additional character in them that will drop the email in
the non-extended address's mailbox, but allow you to quickly filter on them, as
the sent-to address contains the extension.

```ini
# address extensions
recipient_delimiter = +
```

#### Virtual domain directives
This part is where things get important. Virtual domains allow you to handle
mail for a large number of domains that are different from the actual server's
domain. This is where the database configuration comes into play. It is
important to note the `static:125` values. The `125` should map to the `UID` of
the `postfix` user account on your system.

```ini
# virtual domain directives
virtual_mailbox_base = /srv/mail
virtual_mailbox_domains = pgsql:/usr/local/etc/postfix/pgsql-virtual-domains.cf
virtual_mailbox_maps = pgsql:/usr/local/etc/postfix/pgsql-virtual-users.cf
virtual_alias_maps = pgsql:/usr/local/etc/postfix/pgsql-virtual-aliases.cf
virtual_uid_maps = static:125
virtual_gid_maps = static:125
virtual_transport = lmtp:unix:private/dovecot-lmtp
```

#### TLS setup
The TLS setup configures your server to use secure connections. The keys used
here have been generated in the previous part of this series.

```ini
# TLS setup
smtpd_tls_cert_file = /usr/local/etc/letsencrypt/live/domain.tld/fullchain.pem
smtpd_tls_key_file = /usr/local/etc/letsencrypt/live/domain.tld/privkey.pem
smtpd_use_tls = yes
smtpd_tls_auth_only = yes
```

#### SASL setup
SASL deals with the authentication of the users to your email server.
```ini
# SASL setup
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions =
	permit_sasl_authenticated,
	permit_mynetworks,
	reject_unauth_destination
smtpd_relay_restrictions =
	permit_sasl_authenticated,
	permit_mynetworks,
	reject_unauth_destination
```

#### Debugging
The debugging options are generally useful in case things break. If you have
little traffic, you could leave them on forever in case you want to debug
something later on. Once your server is working as intended, you should turn
these options off. The Postfix logs get pretty big in a short amount of time.

```ini
# debugging
debug_peer_level = 2
debugger_command =
	PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
	ddd $daemon_directory/$process_name $process_id & sleep 5
```

#### Installation time defaults
These options should not be touched, but are very important to have for your
server.

```ini
# install-time defaults
sendmail_path = /usr/local/sbin/sendmail
newaliases_path = /usr/local/bin/newaliases
mailq_path = /usr/local/bin/mailq
setgid_group = maildrop
html_directory = /usr/local/share/doc/postfix
manpage_directory = /usr/local/man
sample_directory = /usr/local/etc/postfix
readme_directory = /usr/local/share/doc/postfix
inet_protocols = ipv4
meta_directory = /usr/local/libexec/postfix
shlib_directory = /usr/local/lib/postfix
```

### master.cf
For the `master.cf` file, you can use the following configuration block.
```cfg
smtp       inet  n       -       n       -       -       smtpd
submission inet  n       -       n       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_reject_unlisted_recipient=no
  -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject
  -o milter_macro_daemon_name=ORIGINATING
pickup     unix  n       -       n       60      1       pickup
cleanup    unix  n       -       n       -       0       cleanup
qmgr       unix  n       -       n       300     1       qmgr
tlsmgr     unix  -       -       n       1000?   1       tlsmgr
rewrite    unix  -       -       n       -       -       trivial-rewrite
bounce     unix  -       -       n       -       0       bounce
defer      unix  -       -       n       -       0       bounce
trace      unix  -       -       n       -       0       bounce
verify     unix  -       -       n       -       1       verify
flush      unix  n       -       n       1000?   0       flush
proxymap   unix  -       -       n       -       -       proxymap
proxywrite unix  -       -       n       -       1       proxymap
smtp       unix  -       -       n       -       -       smtp
relay      unix  -       -       n       -       -       smtp
showq      unix  n       -       n       -       -       showq
error      unix  -       -       n       -       -       error
retry      unix  -       -       n       -       -       error
discard    unix  -       -       n       -       -       discard
local      unix  -       n       n       -       -       local
virtual    unix  -       n       n       -       -       virtual
lmtp       unix  -       -       n       -       -       lmtp
anvil      unix  -       -       n       -       1       anvil
scache     unix  -       -       n       -       1       scache
```

Note the `smtp inet` line at the top: it makes Postfix listen for incoming mail
on port 25. Without it, other servers would not be able to deliver mail to you
at all.

### SQL query files
The following three configuration files deal with the SQL queries that make
Postfix capable of getting some of its configuration from a database. You
obviously have to change the first four directives to match your database
authentication credentials; the values below assume the `postfix` database user
created in part 1.

#### pgsql-virtual-domains.cf
```ini
user = postfix
password = incredibly-secret!
hosts = 127.1
dbname = mail
query = SELECT 1 FROM domains WHERE name='%s';
```

#### pgsql-virtual-users.cf
```ini
user = postfix
password = incredibly-secret!
hosts = 127.1
dbname = mail
query = SELECT 1 FROM users WHERE local='%u' AND domain='%d';
```

#### pgsql-virtual-aliases.cf
```ini
user = postfix
password = incredibly-secret!
hosts = 127.1
dbname = mail
query = SELECT destination FROM aliases WHERE origin='%s';
```

## Conclusion
This should be enough Postfix configuration, for now.
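Before continuing, it cannot hurt to let Postfix check its own configuration
for obvious mistakes; both commands below ship with Postfix itself:

```
# warn about problems in main.cf and master.cf
postfix check

# show all settings that differ from the built-in defaults
postconf -n
```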
Next part involves Dovecot, which will enable IMAP. It will also provide the
SASL mechanism defined in this part.

[freebsd-ports]: https://www.freebsd.org/ports/
[home]: /
[part-1]: /post/2016/10/31/freebsd-mailserver-part-1-preparations/
[postfix]: http://www.postfix.org/
[sendmail]: http://www.sendmail.com/sm/open_source/

diff --git a/content/posts/2016/2016-10-31-freebsd-mailserver-part-3-dovecot-imap-sasl.md b/content/posts/2016/2016-10-31-freebsd-mailserver-part-3-dovecot-imap-sasl.md
new file mode 100644
index 0000000..0938a5e
--- /dev/null
+++ b/content/posts/2016/2016-10-31-freebsd-mailserver-part-3-dovecot-imap-sasl.md
@@ -0,0 +1,228 @@
---
date: 2016-10-31 07:57:50
title: "FreeBSD email server - Part 3: Dovecot, IMAP and SASL"
tags:
- Tutorial
- FreeBSD
- Email
- Dovecot
- IMAP
- SASL
---

Welcome to the third part of my FreeBSD email server series. In this series, I
will guide you through setting up your own email service. Be sure to read the
previous parts before trying to continue with this one, in case you have not
done so yet.

This part will guide you through setting up [Dovecot][dovecot]. This service
will deal with the SASL authentication to your email server and will make your
email boxes accessible via IMAP. While this guide does not cover POP3
functionality, Dovecot can handle this as well.

Just like the Postfix setup, Dovecot has quite a few configuration options to
set before it will work as expected in this setup. If you have questions after
reading the full guide, please find me on IRC. You can find details on how to
do so on [my homepage][home].

## Installing Dovecot
Dovecot will also be installed from the ports tree of FreeBSD. As this guide
assumes you are working through the parts in order, an explanation of acquiring
the ports tree is omitted here.

You can start the installation procedure with the following commands.
```
cd /usr/ports/mail/dovecot2
make configure install
```

Again, like with the Postfix installation, leave the default options on and add
the `PGSQL` option so Dovecot can use PostgreSQL as the database back-end.

## Enabling Dovecot
Enable the Dovecot service for rcinit.

```
echo 'dovecot_enable="YES"' >> /etc/rc.conf.local
```

## Configuring Dovecot
To start off with Dovecot configuration, copy over the sample files first.

```
cp -r /usr/local/etc/dovecot/example-config/* /usr/local/etc/dovecot/.
```

Now you can start editing a number of pesky files. The file names in the
headings are all relative to `/usr/local/etc/dovecot`.

### dovecot.conf
Here you only have to set which protocols you want to enable. Set them as
follows.

```ini
protocols = imap lmtp
```

### conf.d/10-master.conf
The `10-master.conf` configuration file indicates which sockets Dovecot should
use and provide, and as which user its processes should be run. Keep the
defaults as they are, with the exception of the following two blocks.

#### service imap-login
This will enable imaps, IMAP over SSL, and disable plain IMAP.

```ini
service imap-login {
	inet_listener imap {
		port = 0
	}

	inet_listener imaps {
		port = 993
		ssl = yes
	}
}
```

#### services
This will instruct Dovecot to provide a service for authentication and `lmtp`,
the **Local Mail Transfer Protocol**. This is required to deliver the email
files into the correct email box location in the file system.
```ini
service auth {
	unix_listener auth-userdb {
		mode = 0600
		user = postfix
		group = postfix
	}

	unix_listener /var/spool/postfix/private/auth {
		mode = 0666
		user = postfix
		group = postfix
	}

	user = dovecot
}

service lmtp {
	unix_listener /var/spool/postfix/private/dovecot-lmtp {
		mode = 0600
		user = postfix
		group = postfix
	}
}

service auth-worker {
	user = postfix
}
```

### conf.d/10-ssl.conf
Here you have to enable SSL and provide the correct paths to your SSL
certificate and key in order for Dovecot to work with them.

```ini
ssl = required
ssl_cert = </usr/local/etc/letsencrypt/live/domain.tld/fullchain.pem
ssl_key = </usr/local/etc/letsencrypt/live/domain.tld/privkey.pem
```

### conf.d/10-mail.conf
The `10-mail.conf` file instructs Dovecot which location to use for storing the
email files. `%d` expands to the domain name, while `%n` expands to the local
part of the email address.

```ini
mail_home = /srv/mail/%d/%n
mail_location = maildir:~/Maildir
```

Make sure the location set by `mail_home` exists and is owned by `postfix`!

```
mkdir -p /srv/mail
chown postfix:postfix /srv/mail
```

### conf.d/10-auth.conf
This file deals with the authentication provided by Dovecot: mostly, which
mechanisms should be supported and what mechanism should be used to get the
actual credentials to check against. Make sure the following options are set as
given.

```ini
disable_plaintext_auth = yes
auth_mechanisms = plain
```

Also, make sure `!include auth-system.conf.ext` is commented **out**. It is not
commented out by default, so you will have to do this manually. In addition,
you have to uncomment `!include auth-sql.conf.ext`.

### conf.d/auth-sql.conf.ext
This is the file included from `10-auth.conf`. It instructs Dovecot to use SQL
as the driver for the password and user back-ends.
```ini
passdb {
	driver = sql
	args = /usr/local/etc/dovecot/dovecot-sql.conf.ext
}

userdb {
	driver = prefetch
}

userdb {
	driver = sql
	args = /usr/local/etc/dovecot/dovecot-sql.conf.ext
}
```

### dovecot-sql.conf.ext
The final configuration file contains the queries which should be used to get
the required information about the users. Make sure to update the `password`
and possibly other parameters used to connect to the database. You may have to
update the `125` as well, as this has to be identical to the `UID` of
`postfix`.

As a side note, if you are following this tutorial on a machine that does
**not** support Blowfish in the default glibc, which is nearly every GNU+Linux
setup, you **cannot** use `BLF-CRYPT` as the `default_pass_scheme`. You will
have to settle for the `SHA-512` scheme instead.

```ini
driver = pgsql
connect = host=127.1 dbname=mail user=postfix password=incredibly-secret!
default_pass_scheme = BLF-CRYPT
password_query = \
	SELECT \
		local AS user, \
		password, \
		'/srv/mail/%d/%n' AS userdb_home, \
		125 AS userdb_uid, \
		125 AS userdb_gid \
	FROM users \
	WHERE local='%n' AND domain='%d';

user_query = \
	SELECT \
		'/srv/mail/%d/%n' AS home, \
		125 AS uid, \
		125 AS gid \
	FROM users \
	WHERE local='%n' AND domain='%d';
```

## Conclusion
After this part, you should be left with a functioning email server that
provides IMAP over a secure connection. While this is great in itself, for
actual use in the wild, you should set up some additional services. Therefore,
in the next part, we will deal with practices that "authenticate" your emails
as legitimate messages. Be sure to read up on it!
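Once Dovecot is running, you can check from the server itself that the IMAPS
listener answers and accepts a login. A quick sketch using `openssl` (the host
name, user and password are placeholders; use your own):

```
openssl s_client -connect mail.domain.tld:993 -quiet

# after the TLS handshake, IMAP commands can be typed directly:
a LOGIN john@domain.tld your-password
b LOGOUT
```

If the `LOGIN` fails, the Dovecot logs are the first place to look, as they
will tell you whether the SQL lookup or the password scheme is the problem.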
[dovecot]: http://dovecot.org/
[home]: /

diff --git a/content/posts/2016/2016-10-31-freebsd-mailserver-part-4-message-authentication.md b/content/posts/2016/2016-10-31-freebsd-mailserver-part-4-message-authentication.md
new file mode 100644
index 0000000..62a2799
--- /dev/null
+++ b/content/posts/2016/2016-10-31-freebsd-mailserver-part-4-message-authentication.md
@@ -0,0 +1,159 @@
---
date: 2016-10-31 20:00:38
title: "FreeBSD email server - Part 4: Message authentication"
tags:
- Tutorial
- FreeBSD
- Email
- DKIM
- SPF
---

Welcome to another part in the FreeBSD email server series. This time, we are
going to set up some mechanisms to deal with message authentication. This
practice will make other email providers accept your email messages and deliver
them properly into the inbox of the receiving user, instead of their spam box.

We will do so using three of the most common practices: [SPF][spf],
[DKIM][dkim] and [DMARC][dmarc].

## DKIM
### Installation
The tools for DKIM are easily installed using `pkg`.

```
pkg install opendkim
```

### Configuration
Write the following configuration into `/usr/local/etc/mail/opendkim.conf`.

```apache
# logging
Syslog yes

# permissions
UserID postfix
UMask 007

# general settings
AutoRestart yes
Background yes
Canonicalization relaxed/relaxed
DNSTimeout 5
Mode sv
SignatureAlgorithm rsa-sha256
SubDomains no
X-Header yes
OversignHeaders From

# tables
KeyTable /usr/local/etc/opendkim/key.table
SigningTable /usr/local/etc/opendkim/signing.table

# socket
Socket inet:8891@localhost

# domains
Domain domain.tld
KeyFile /usr/local/etc/opendkim/domain.tld.privkey
Selector mail
```

#### Postfix
Postfix needs to be instructed to sign the messages with a DKIM header using
the opendkim service. You can do so by inserting the following configuration
block somewhere around the end of `/usr/local/etc/postfix/main.cf`.
```ini
# milters
milter_protocol = 2
milter_default_action = reject
smtpd_milters =
	inet:localhost:8891
```

#### System service
OpenDKIM runs as a system service. As such, you will have to enable this
service in rcinit. This is a simple step, achieved with the given command.

```
echo 'milteropendkim_enable="YES"' >> /etc/rc.conf.local
```

Do not forget to actually start the service when you are done with the
tutorial!

### Creating and using keys
In order to use DKIM, you will need to generate some keys to sign the messages
with. You cannot use your Let's Encrypt SSL keys for this. First, create a
directory to house your domain's keys.

```
mkdir -p /usr/local/etc/opendkim/keys/domain.tld
chown -R postfix:wheel /usr/local/etc/opendkim/keys/domain.tld
```

Next up, generate your first key.

```
opendkim-genkey -D /usr/local/etc/opendkim/keys/domain.tld -b 4096 -r -s $(date +%Y%m%d) -d domain.tld
```

I tend to use the current date for the key names, so I can easily sort them by
the most recent one.

Afterwards, you will have to add a line to two separate files to instruct DKIM
to use this key for a certain domain when signing mail. These are fairly
straightforward, and can be done using a simple `echo` as well.

```
echo '*@domain.tld domain.tld' >> /usr/local/etc/opendkim/signing.table
echo "domain.tld domain.tld:$(date +%Y%m%d):/usr/local/etc/opendkim/keys/domain.tld/$(date +%Y%m%d).private" \
	>> /usr/local/etc/opendkim/key.table
```

### Adding the DNS records
You may have already noticed that `opendkim-genkey` also creates a `.txt` file
in addition to the private key. This text file contains the DNS record value
you need to add to your domain's DNS. Add the record to your DNS server, and
simply wait for it to propagate.

## SPF
SPF is simply a DNS record that shows which IPs are allowed to send email for
that domain.

### Adding the DNS records
A simple example for an SPF record is the following.
It allows mail to be sent in the domain's name from any IP listed in the `MX`
records.

```
v=spf1 mx -all
```

## DMARC
DMARC is, like SPF, a DNS record. It tells receiving servers how to deal with
messages that fail authentication, and where to report abuse of your domain.
Some of the larger email providers send out reports to the address given in the
DMARC record, so you can figure out whether someone is spamming in your
domain's name, for example.

### Adding the DNS records
A simple DMARC policy to get started with is to quarantine all emails that fail
authentication. This means the emails will go into the receiving user's spam
box. In addition, abuse reports will be sent to the address defined in the
`rua` field.

```
v=DMARC1; p=quarantine; rua=mailto:abuse@domain.tld
```

## Conclusion
These few simple measures will make receiving servers trust the authenticity of
the mails you send. In effect, your messages will be much less likely to be
marked as spam. However, you are a target of spam as well. How you can deal
with that will be covered in the next part of this series.

[dkim]: http://www.dkim.org/
[dmarc]: http://dmarc.org/
[spf]: https://en.wikipedia.org/wiki/Sender_Policy_Framework

diff --git a/content/posts/2016/2016-10-31-freebsd-mailserver-part-5-filtering-mail.md b/content/posts/2016/2016-10-31-freebsd-mailserver-part-5-filtering-mail.md
new file mode 100644
index 0000000..07f8e21
--- /dev/null
+++ b/content/posts/2016/2016-10-31-freebsd-mailserver-part-5-filtering-mail.md
@@ -0,0 +1,132 @@
---
date: 2016-10-31 20:02:19
title: "FreeBSD email server - Part 5: Filtering mail"
tags:
- Tutorial
- FreeBSD
- Email
- Postfix
- SpamAssassin
- Pigeonhole
---

Being able to send mail and not be flagged as spam is pretty awesome in itself.
But you also get hit by a lot of spam. The more you give out your email address
and domain name, the more spam you will receive over time. I welcome you to
another part of the FreeBSD email server series.
In this part, we will set up email filtering on the server side.

We will accomplish this with a couple of packages, [SpamAssassin][spamassassin]
and [Pigeonhole][pigeonhole]. The former deals with scanning the emails to
deduce whether they are spam or not. The latter filters messages. We will use
this filtering to drop emails marked as spam by SpamAssassin into the Junk
folder, instead of the inbox.

## Installing the packages
Both packages are available through FreeBSD's `pkg` utility. Install them as
follows.

```
pkg install dovecot-pigeonhole spamassassin
```

## SpamAssassin
### Enabling the service
Like most services, these have to be enabled as well. Pigeonhole is an
extension to Dovecot, and Dovecot will handle that one. SpamAssassin requires
you to configure the service as well. You can enable it and set a sane
configuration for it with the following two commands.

```
echo 'spamd_enable="YES"' >> /etc/rc.conf.local
echo 'spamd_flags="-u spamd -H /srv/mail"' >> /etc/rc.conf.local
```

### Acquiring default spam rules
SpamAssassin has to "learn" what counts as *spam* and what counts as *ham*. To
fetch a default set of rules, you should execute the updates for SpamAssassin
with the following command.

```
sa-update
```

You most likely want to run this once in a while, so it is advised to set up a
cron job for this purpose.

## Postfix
In order to have mails checked by SpamAssassin, Postfix must be instructed to
pass all email through to SpamAssassin, which will hand it back with an
`X-Spam-Flag` header attached. This header can be used by other applications to
treat the message as spam.

### master.cf
There is not much to add to the already existing Postfix configuration to
enable SpamAssassin to do its job. Just open `/usr/local/etc/postfix/master.cf`
and append the block given below.
```ini
spamassassin unix - n n - - pipe
  user=spamd argv=/usr/local/bin/spamc
  -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}
```

## Pigeonhole
Pigeonhole is an implementation of Sieve for Dovecot. It deals with filtering
messages on the server side using a set of rules, defined in a file usually
named `sieve`. This file is generally saved at
`/srv/mail/domain.tld/user/sieve`. A basic file to filter out spam is the
following example.

```sieve
require [
    "fileinto",
    "mailbox"
];

if header :contains "X-Spam-Flag" "YES" {
    fileinto :create "Junk";
    stop;
}
```

This looks for the `X-Spam-Flag` header, which is added by SpamAssassin. If it
is set to `YES`, this indicates SpamAssassin thinks the message is spam. As
such, sieve is instructed to file this message into the folder `Junk`, and to
create this folder if it does not exist yet. The `stop;` stops sieve from
processing any later rules for this message.

## Dovecot
Dovecot needs some additional configuration to work with Pigeonhole. Modify the
following files and add the contents described.

### conf.d/20-lmtp.conf
This will enable Pigeonhole in Dovecot.

```ini
protocol lmtp {
  mail_plugins = $mail_plugins sieve
}
```

### conf.d/90-plugin.conf
This configures Pigeonhole to look for a file named `sieve` in the mailbox
homedir, and execute that when delivering mail.

```ini
plugin {
  sieve = /srv/mail/%d/%n/sieve
}
```

## Conclusion
Spam is a pain, especially if you get a lot of it. The configuration added in
this part of the FreeBSD email server series should get rid of most of it. This
also concludes the series. If you have any questions or suggestions, please
contact me via any of the methods detailed on [my home page][home].

Thanks for reading along, and enjoy your very own email server!
+ +[home]: / +[pigeonhole]: http://pigeonhole.dovecot.org/ +[spamassassin]: https://spamassassin.apache.org/ diff --git a/content/posts/2016/2016-11-24-freebsd-mailserver-calendars-and-contacts.md b/content/posts/2016/2016-11-24-freebsd-mailserver-calendars-and-contacts.md new file mode 100644 index 0000000..b39120f --- /dev/null +++ b/content/posts/2016/2016-11-24-freebsd-mailserver-calendars-and-contacts.md @@ -0,0 +1,141 @@ +--- +date: 2016-11-24 08:26:09 +title: "FreeBSD email server - Part +: Calendars and contacts" +tags: +- Tutorial +- FreeBSD +- Email +- CalDAV +- CardDAV +--- + +This guide is an addition to the [FreeBSD email server series][tutorial-email]. +It is not required for your email server to operate properly, but it is often +considered a very important feature for those who want to switch from a third +party email provider to their own solution. It does build upon the completed +series, so be sure to work through that before starting on this. + +## Install required packages +``` +pkg install py27-radicale +``` + +## Configure Radicale +### /usr/local/etc/radicale/config +Open up the `/usr/local/etc/radicale/config` file, and update each `[block]`. + +#### [server] +The server is binding to `localhost` only. This way it is not accessible on +`:5232` from outside the server. Outside access will be provided through an +nginx reverse proxy instead. + +```ini +hosts = 127.1:5232 +daemon = True + +dns_lookup = True + +base_prefix = / +can_skip_base_prefix = False + +realm = Radicale - Password required +``` + +#### [encoding] +```ini +request = utf-8 +stock = utf-8 +``` + +#### [auth] +```ini +type = IMAP + +imap_hostname = localhost +imap_port = 143 +imap_ssl = False +``` + +#### [storage] +```ini +type = filesystem +filesystem_folder = /usr/local/share/radicale +``` + +#### [logging] +```ini +config = /usr/local/etc/radicale/logging +``` + +### /usr/local/etc/radicale/logging +This file is fine on the defaults in FreeBSD 11. 
This saves you a little bit of configuration.

## Configure Dovecot
### Enable imap
This option was disabled in the [IMAP server tutorial][tutorial-email],
however, if we want to authenticate using the same credentials as the mail
server, this option is needed again. Bind it to `localhost`, so it can only be
used internally. In `/usr/local/etc/dovecot/conf.d/10-master.conf`, enable the
`imap` port again:

```ini
...
service imap-login {
  inet_listener imap {
    address = 127.1
    port = 143
  }
  ...
}
...
```

## Configure nginx
To make using the service easier, you can set up [nginx][nginx] to act as a
reverse proxy. If you followed the [webserver tutorial][tutorial-webserver],
you already have the basics for this set up. I do recommend you check this out,
as I will only explain how to configure a virtual host to deal with the reverse
proxy here.

### Setup a reverse proxy
Assuming you have taken the crash course in setting up the nginx webserver, you
can set up a reverse proxy using the following config block. Note that this
block only does HTTPS, as I use HTTP only to redirect to HTTPS.
+ +```nginx +# static HTTPS +server { + # listeners + listen 443 ssl; + server_name radicale.domain.tld; + + # enable HSTS + add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload"; + + # keys + ssl_certificate /usr/local/etc/letsencrypt/live/domain.tld/fullchain.pem; + ssl_certificate_key /usr/local/etc/letsencrypt/live/domain.tld/privkey.pem; + + # / handler + location / { + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + proxy_pass http://127.1:5232; + } +} +``` + +## Enable the service at startup +``` +echo 'radicale_enable="YES"' >> /etc/rc.conf.local +``` + +## Start the server +``` +service radicale start +``` + +[nginx]: https://www.nginx.com/ +[tutorial-email]: /post/2016/10/31/freebsd-mailserver-part-1-preparations/ +[tutorial-webserver]: /post/2016/10/25/setup-nginx-with-lets-encrypt-ssl/ diff --git a/content/posts/2016/_index.md b/content/posts/2016/_index.md new file mode 100644 index 0000000..74b1787 --- /dev/null +++ b/content/posts/2016/_index.md @@ -0,0 +1,3 @@ +--- +title: 2016 +--- diff --git a/content/posts/2017/2017-09-14-how-to-git.md b/content/posts/2017/2017-09-14-how-to-git.md new file mode 100644 index 0000000..39b884e --- /dev/null +++ b/content/posts/2017/2017-09-14-how-to-git.md @@ -0,0 +1,182 @@ +--- +date: 2017-09-14 +title: "How to: git" +tags: +- Tutorial +- Git +--- + +This guide will explain how to use `git` more efficiently, and why you should +use it as such. + +## Forking +When working in a team, there's generally a remote server which is used to sync +your repositories. There are gratis services, such as [GitHub][github], +[Gitlab][gitlab], [GOGS][gogs], and others. These services also allow you to +*fork* a repository. This basically makes a copy of the entire repository for +your own use. In it, you have full control over the branches, tags, merge +process and everything else you want to do with it. 
One of the main reasons to do this is so you do not have to clutter up the main
repository with a ton of branches (these are explained later in the post). If
there are two people working in the same branch, it can help reduce conflicts,
as each developer is working on the branch in their own fork.

As such, **always** use a fork. If the service does not have a fancy button for
you to click, you can still fork manually. Simply clone their repository as
usual, set a new remote and push it there:

```
git clone git@domain.tld:them/repo.git
cd repo
git remote rename origin upstream
git remote add origin git@domain.tld:you/repo.git
git push origin master
```

The default naming convention uses `upstream` for the base of your fork, and
`origin` for your remote version of the repository. If a merge request is
accepted on the original repo, you can apply it to your fork using

```
git pull upstream master
```

## Branching
Branching is the art of using separate branches to introduce new code into your
`master` branch. Every git repository starts with a `master` branch by default.
This is the *main* branch of your repository.

Every time you want to add new code to your project, make a branch for the
feature or issue you are trying to solve. This way, you can commit freely
without having to worry about having untested or possibly broken code in the
`master` branch. If something were to come up with a higher priority, such as a
critical bug, you can simply create a new branch off of `master`, fix it and
merge that back into `master`, without having to worry about that other feature
you were working on, which is not in a releasable state yet. Once the fix is
applied, you go back to your feature branch and continue working on the cool
new stuff you wanted to implement. Now, the bug is fixed, and no code has been
released that should not have been released.
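The branch-and-fix flow described above can be sketched in a throwaway
repository. Note that the branch names and commit messages here are purely
illustrative:

```sh
# Throwaway repository to demonstrate the flow
cd "$(mktemp -d)"
git init --quiet
git -c user.name=demo -c user.email=demo@example.com \
    commit --quiet --allow-empty -m 'Initial commit'
git branch -M master

# Start a feature branch off of master
git checkout -b feature-cool-new-stuff

# A critical bug comes in: branch off of master, fix it, merge it back
git checkout master
git checkout -b fix-critical-bug
# An empty commit stands in for the real fix (edit, git add, git commit)
git -c user.name=demo -c user.email=demo@example.com \
    commit --quiet --allow-empty -m 'Fix the critical bug'
git checkout master
git merge fix-critical-bug

# Go back to the feature branch; master is fixed, and nothing from the
# unfinished feature has been released
git checkout feature-cool-new-stuff
```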
If that's not convincing enough, try some of the [Stack Overflow
posts][so-git-branch] on this very topic.

Branches can be made at your leisure, with next to no overhead on your project.
Do not be scared to play around with your code in a new branch to test
something out. You can also delete branches as quickly as you made them if you
are not satisfied with the result.

Creating branches is done using `git checkout -b new-branch`. If you need to
switch to another existing branch to change something, use
`git checkout other-branch`. Deleting a branch can be done using
`git branch -D old-branch`. You can get a list of all branches in the
repository with `git branch`. The current branch is marked with an \*.

If you start a new branch to implement a feature, be sure to always branch off
of `master`, unless you have a very compelling reason not to do so. If you are
not sure what reasons would validate branching off of another branch, you
should just branch off of `master`. If you branch off of another branch, you
will have the commit history of the other branch. This often includes commits
not accepted into master yet, which might result in commits getting into
master which should not be there (yet), or annoying merge conflicts later on.

### Merging
Using multiple branches brings along the concept of *merging* branches
together. When working in a group, this is generally done by maintainers of the
upstream repository, via a *merge request*. For some reason, certain services
have named this a *pull request* instead.
The base idea of the process is as follows:

- Pull the latest `upstream/master`
- Create a new branch
- Apply the change you want
- Issue a merge request via the service you are using
  - Generally, you want your change to be merged into their `master` branch
- Add a title and a description of your change: What does it do, and why should it be accepted
- Optionally, discuss the changes with the upstream maintainers
- Optionally, make a couple of changes to your branch, and push it again
- Upstream maintainer accepts your change

When everything worked out, the upstream repository now contains your changes.
If you pull their branch again, it will contain your code. Using the merge
request process, your code can be easily reviewed by others, and discussed if
needed.

## Committing
Whenever you have changed anything in the repository and you wish to share
these changes, you have to commit the changes. Committing in general is not
something people tend to have issues with. Simply add the changes you want to
commit using `git add` (add the `-p` switch if you want to commit only parts of
a changed file), then `git commit` and enter a descriptive message. And that is
where most annoyances come from: the commit *message*. There are no hard rules
on this forced by git itself. There are, however, some de-facto standards and
best practices which you should always follow. Even if you never intend to
share the repository with other people, having good commit messages can help
you identify a certain change when you look back into the history.

A git commit message should be short, no more than 79 characters, on the first
line. It should be readable as "this commit message will ...", where your
commit message will replace the "...". It is a de-facto standard to start your
commit message with a capital letter, and leave off a finishing period.
You do not *have* to adhere to this if you hate it, but be sure that all your
commits are consistent in how they are formatted.

If you need to explain anything beyond that, such as a rationale for the
change, or things the reviewer should pay attention to in this particular
commit, you can leave an empty line and put this message in the commit
body.

When you are using a bug tracking system, you might also want to have a footer
with additional information. On services such as [Gitlab][gitlab] and
[GitHub][github], you can close issues by adding "Closes: #1" in the commit
message footer. A full commit message with all these things might look as
follows:

```
Fix overflow issue in table rendering mechanism

An overflow issue was found in the table rendering mechanism, as explained in
CVE-0123-45678. Regression tests have been included as well.

Closes: #35
```

In order to achieve these kinds of messages, you need to be sure that your
commits can fit into this structure. This means you need to make small
commits. Having many smaller commits makes it easier to review the changes,
keep short, descriptive messages to describe each change, and revert a single
change in case it breaks something.

### Signing your commits
You can set up git to cryptographically sign each commit you make. This will
ensure that the commit you made is proven to be from you, and not someone
impersonating you. People impersonating you might try to get harmful code into
a repo where you are a trusted contributor. Having all commits signed in a
repository can contribute to verifying the integrity of the project.

Recently, [GitHub][github] has added the **Verified** tag to commits if the
commit contains a correct signature.
To enable signing of all commits, add the following configuration to your
`~/.gitconfig`:

```ini
[commit]
    gpgsign = true

[user]
    signingkey = 9ACFE193FFBC1F50
```

Of course, you will have to update the value of the `signingkey` to match
the key you want to sign your commits with.

## Closing words
I hope this post will help you in your adventures with git. It is a great tool
for working on projects together, but it gets much better when you stick to
some best practices. If you have any suggestions for this post, or any
questions after finishing it, contact me via any method listed on [my home
page][home].

[github]: https://github.com
[gitlab]: https://gitlab.com
[gogs]: https://gogs.io
[home]: https://tyil.work
[so-git-branch]: https://softwareengineering.stackexchange.com/questions/335654/git-what-issues-arise-from-working-directly-on-master

diff --git a/content/posts/2017/2017-09-28-perl6-creating-a-background-service.md b/content/posts/2017/2017-09-28-perl6-creating-a-background-service.md new file mode 100644 index 0000000..4f94bb6 --- /dev/null +++ b/content/posts/2017/2017-09-28-perl6-creating-a-background-service.md @@ -0,0 +1,157 @@ +--- +date: 2017-09-28 +title: Perl 6 - Creating a background service +tags: +- Tutorial +- Perl6 +- Programming +- Raku +--- +

I've recently made some progress on
[Shinrin](https://github.com/scriptkitties/perl6-Shinrin), a centralized
logging system in Perl 6. This has to run as a service, which means that for
most service managers it has to be able to run in the background.

{{< admonition title="Note" >}}
If you just want to get to the solution and don't care for the details, just
head straight to [the full script](#the-final-solution).
{{< / admonition >}}

## It's not possible!
After a lot of trying and talking with the folks at
[#perl6](irc://chat.freenode.net:6697/#perl6), I was told that it is not
possible to do this in pure Perl 6, as explained by people with more knowledge
of the internals than I have:

{{< quote attribution="jnthn" >}}
(jnthn suspects fork + multi-threaded VM = pain) Since fork only clones one
thread - the one that called it. So suddenly you've got an instance of the VM
missing most of its threads.
{{< / quote >}}

{{< quote attribution="geekosaur" >}}
The most common failure mode is that some thread is holding e.g. a mutex (or a
userspace lock) during the fork. The thread goes away but the lock is process
level and remains, with nothing around to know to unlock it. So then things
work until something else needs that lock and suddenly you deadlock.
{{< / quote >}}

Not much later, `jnthn` [pushed a
commit](https://github.com/perl6/doc/commit/8f9443c3ac) to update the docs to
clarify that a `fork` call through `NativeCall` will probably not give the
result you were hoping for.

## Or is it?

Luckily, the same people were able to think up a work-around, which can be
made in POSIX sh, so it's usable on any decent OS. The workaround is to let a
little shell script fork into the background, and let that run the Perl
application.

### A first example
This is fairly simple to create, as shown in this example launching `shinrind`
in the background:

```sh
#! /usr/bin/env sh

main()
{
    perl6 -Ilib bin/shinrind "$@"
}

main "$@" &
```

This works just fine if the working directory is correct. This means you need
to be in the parent directory of the program's `lib` and `bin` to make it
work.

## Improving the forking script

While that short script works fine as a proof of concept, it could use some
improvements to make it viable for real-world scenarios.
After all, it +would be annoying if you'd have to `cd` to a specific directory any time you +want to start your application. + +### Ensure you are in the directory you should be in + +So for starters, let's make sure that you can run it from anywhere on your +system. For this, you should set the working directory for the script, so you +don't have to do it manually. Because the script runs in its own subshell, the +shell you're working from remains unaffected. + +A POSIX compliant way to get the directory the script is stored in is as +follows: + +```sh +DIR=$(CDPATH="" cd -- "$(dirname -- "$0")" && pwd) +``` + +This will set `$DIR` to the path of the directory the shell script is stored +in. You can simply `cd` to that and be assured you're in the right directory. + +In Perl 6, it is expected for executable files to live in the `bin` directory +of your project repository. So you should actually be in the parent of the +directory holding your script. Furthermore, you should check the `cd` command +executed correctly, just to be safe. + +```sh +cd -- "${DIR}/.." || exit +``` + +### Disable `STDOUT` and `STDERR` + +A started service should not be polluting your interactive shell, so you should +disable (or otherwise redirect) `STDOUT` and `STDERR`. This is done in the +shell using a small bit of code behind whatever you want to redirect: + +```sh +> /dev/null 2>&1 +``` + +This will set `STDOUT` to `/dev/null`, and set `STDERR` to the same stream as +`STDOUT`, which in effect will make all output go to `/dev/null`. If you want +to log everything to a single file, you can replace `/dev/null` with another +file of your choice. If you don't want logs to be overwritten on each start, +use a `>>` instead of a single `>` at the start. 
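If you want to convince yourself of what this redirection does, the following
little experiment shows both streams ending up in the same place. It uses a
temporary file instead of `/dev/null` so the result can be inspected:

```sh
# Write one line to STDOUT and one to STDERR; STDOUT is redirected to a
# temporary file first, then STDERR is pointed at that same stream
log=$(mktemp)
{ echo 'regular output'; echo 'error output' >&2; } > "$log" 2>&1

# Both lines are now in the file
cat "$log"
```

Note that the order matters: had the redirections been reversed
(`2>&1 > "$log"`), `STDERR` would have been pointed at the terminal before
`STDOUT` was redirected, and the error line would not have ended up in the
file.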
If you want to log errors and output in different files, you can use the
following:

```sh
> /var/log/service.log 2> /var/log/service.err
```

This will put standard output in `/var/log/service.log` and errors in
`/var/log/service.err`.

### Fork just the Perl 6 program

In the initial example, I put the `&` behind the `main` call, at the bottom of
the script. While this works just fine for most simple usage, if you want to do
additional chores, like creating a pidfile after starting the Perl 6 program,
you're out of luck. If you were to only fork the Perl 6 application, you could
handle some other cases in the shell script.

### The final solution

For those eager to just get going with this, here is the complete example
script to just fork your Perl program into the background:

```sh
#! /usr/bin/env sh

readonly DIR=$(CDPATH="" cd -- "$(dirname -- "$0")" && pwd)

main()
{
    cd -- "${DIR}/.." || exit

    perl6 -Ilib bin/shinrind "$@" > /dev/null 2>&1 &
}

main "$@"
```

diff --git a/content/posts/2017/2017-11-01-hacktoberfest-2017.md b/content/posts/2017/2017-11-01-hacktoberfest-2017.md new file mode 100644 index 0000000..015f341 --- /dev/null +++ b/content/posts/2017/2017-11-01-hacktoberfest-2017.md @@ -0,0 +1,213 @@ +--- +title: Hacktoberfest 2017 +date: 2017-11-01 +tags: +- Contributions +- FreeSoftware +- Github +- Hacktoberfest +--- +

This year I actively participated in the Hacktoberfest event, which is "a
month-long celebration of open source software". Ironic, given that the
companies organising it don't have their own software stack open source.

I've found some issues to solve in [Perl 6](https://perl6.org/) projects, and
that led to trying to solve issues in some other projects, and eventually I
got more PRs out than there are days in the month. It did go at the cost of
some sleep, but in the end it seems worth it. In this article, I'll give a
small overview of all those PRs, in no particular order.
## Projects contributed to

### Funtoo

#### funtoo/boot-update

- https://github.com/funtoo/boot-update/pull/14

When reinstalling my server to try out [Docker](https://docker.com), I noticed
an error in the output of the `boot-update` utility, a tool from
[Funtoo](https://www.funtoo.org/Welcome) to make installing and configuring the
bootloader easier. The error itself was a small typo: a `-` which had to be a
`_`.

#### scriptkitties/overlay

- https://github.com/scriptkitties/overlay/pull/14
- https://github.com/scriptkitties/overlay/pull/15
- https://github.com/scriptkitties/overlay/pull/16

This is the overlay of the [Scriptkitties](https://scriptkitties.church)
community. It's got some additional software released under a free license that
is not available in the main portage repository. Most of the packages in here
are of software made by the Scriptkitties community.

This month I updated the readme to be in asciidoc, my new favourite format for
documentation. The Travis builds should also no longer throw errors, so those
can be used again to ensure the overlay is meeting quality standards. One
package has also been updated to be at its latest version again.

### Perl 6

#### moznion/p6-HTML-Escape

- https://github.com/moznion/p6-HTML-Escape/pull/1

On this repository, I added a subroutine to also handle unescaping HTML special
characters. Sadly, the owner of this repository has shown no sign of life, and
the PR remains open.

#### rakudo/rakudo

- https://github.com/rakudo/rakudo/pull/1180

This is a rather small issue, but I noticed it when compiling Perl 6 with
[Rakudobrew](https://github.com/tadzik/rakudobrew) and it annoyed me.
[Zoffix](http://zoffix.com/) was a great help in getting me started on this
one, and in general with many other Perl related contributions as well.
#### scriptkitties/perl6-IRC-Client-Plugin-Github

- https://github.com/scriptkitties/perl6-IRC-Client-Plugin-Github/pull/2

A neat feature of the GitHub notification system, HMAC adds a header that can
be used to verify the body of the request, and to verify that the other end of
the connection knows the right "secret". Inspired by a Perl 6 bot that
already did this, I made a PR to make this a proper
[`IRC::Client`](https://github.com/zoffixznet/perl6-IRC-Client) plugin. It is
still being tested in [musashi](https://github.com/scriptkitties/musashi).

#### perl6/roast

- https://github.com/perl6/roast/pull/342

Roast is the test suite for Perl 6. There was an open issue for the IO::File
tests, which needed expansion. As my first contribution during a Perl 6
squashaton, I expanded these tests to fix the issue that was open for it.

#### vim-perl/vim-perl6

- https://github.com/vim-perl/vim-perl6/pull/9
- https://github.com/vim-perl/vim-perl6/pull/10

This first PR has become a bit of a drag, with the maintainers not responding
for two weeks, but suddenly very eager to respond when I mentioned I was going
to fork off and update the reference on the Perl documentation to my fork.
Nonetheless, it's sorted out, and the abbreviations for unicode operators
have been merged in!

#### timo/json_fast

- https://github.com/timo/json_fast/pull/32

`JSON::Fast` is the de-facto standard for dealing with JSON data in Perl 6, it
seems. For my work with `App::Cpan6` I wanted the JSON data to be ordered, so I
added that as an option when calling `to-json`. Having the JSON data ordered
makes it easier to compare diffs of two different versions of the data, making
git diffs a lot cleaner.

Sadly, timo has not merged the PR yet, so I can't properly depend on it in
`App::Cpan6`.

#### scriptkitties/perl6-SemVer

- https://github.com/scriptkitties/perl6-SemVer/pull/1

This is one of the new projects I started.
It is intended to be used in
`App::Cpan6`, since that uses [Semantic Versioning](https://semver.org) for all
modules it works with. This module defines a class that can interpret a SemVer
notation, and exposes methods to bump any part of the version.

#### perl6/doc

- https://github.com/perl6/doc/pull/1614

This has been one of the more annoying PRs to work on, as the current `zef`
maintainer insists everything but his module is wrong, and seemed very
uninterested in improving the situation for users. After some discussion on
IRC, some more discussion on IRC, and then some discussion on the PR itself, I
decided to just word the paragraph differently.

I am still interested in improving the documentation here and the ecosystem
itself, mainly the `META6.json` specification, and getting `zef` to play nice
with this spec. If anyone else is interested in helping me out on this, do
message me on IRC!

#### perl6/perl6.org

- https://github.com/perl6/perl6.org/pull/86
- https://github.com/perl6/perl6.org/pull/87

There were some open issues for the [perl6.org](https://perl6.org) website, and
I decided to take a look at some and try to fix them. This resulted in NeoVim
being added to the list of recommended editors for Perl 6, and the list of IRC
bots being updated to include all bots in use right now.

#### scriptkitties/p6-MPD-Client

- https://github.com/scriptkitties/p6-MPD-Client/pull/1
- https://github.com/scriptkitties/p6-MPD-Client/pull/2

As I was making `App::MPD::AutoQueue` and `App::MPD::Notify`, I found some
issues in `MPD::Client`. I fixed those to get my two new projects working
nicely.

#### melezhik/sparrowdo

- https://github.com/melezhik/sparrowdo/pull/15
- https://github.com/melezhik/sparrowdo/pull/18

Sparrowdo is a configuration management system, written in Perl 6. I learned
about it after a reference from the Perl 6 Weekly, and set out to try it.
I ran into some issues, which I reported and eventually fixed.

In addition, I rewrote the testing script for Travis, which enables
parallel builds of the tests. This has nearly halved the time required for
running the full test suite.

#### perl6/ecosystem

- https://github.com/perl6/ecosystem/pull/371
- https://github.com/perl6/ecosystem/pull/372
- https://github.com/perl6/ecosystem/pull/374

These PRs added a module, and later removed that one and more, since I got a
PAUSE ID and uploaded my modules to CPAN.

#### scriptkitties/perl6-App-Cpan6

- https://github.com/scriptkitties/perl6-App-Cpan6/pull/1
- https://github.com/scriptkitties/perl6-App-Cpan6/pull/2
- https://github.com/scriptkitties/perl6-App-Cpan6/pull/3
- https://github.com/scriptkitties/perl6-App-Cpan6/pull/4
- https://github.com/scriptkitties/perl6-App-Cpan6/pull/12
- https://github.com/scriptkitties/perl6-App-Cpan6/pull/13
- https://github.com/scriptkitties/perl6-App-Cpan6/pull/14
- https://github.com/scriptkitties/perl6-App-Cpan6/pull/15

`App::Cpan6` is a tool I've started working on to assist me in creating new
Perl 6 modules. There's been a couple of tasks that I do often in the process
of creating a module, and those tasks should become easier and faster using
this module.

If everything works out and I learn enough of the module installation process,
I might consider letting this deal with the installation and updating of
modules as well.

## In retrospect

The Hacktoberfest has been an interesting month for me. I've gotten to
contribute to a project I have come to love a lot, Perl 6. I've also made some
new friends with similar goals. Sadly I can't put in this much time every month
of the year, but I would if I could!

I learned many interesting things about Perl 6: new operators, new functions,
all kinds of cool stuff to improve my Perl scripts with.
I also got to learn about parallelizing Travis builds with the Sparrowdo
project, about which I will write another tutorial post later.

I've greatly enjoyed contributing to all the various projects, and would
recommend other people check it out too. The people on the respective
projects' IRC channels have been a great help to me to get started, and I can
help out getting you started as well now.

diff --git a/content/posts/2017/2017-11-16-perl6-setting-up-a-raspberry-perl.md b/content/posts/2017/2017-11-16-perl6-setting-up-a-raspberry-perl.md new file mode 100644 index 0000000..eb42853 --- /dev/null +++ b/content/posts/2017/2017-11-16-perl6-setting-up-a-raspberry-perl.md @@ -0,0 +1,206 @@ +--- +date: 2017-11-16 +title: "Setting up a Raspberry Perl" +tags: +- Tutorial +- Perl6 +- RaspberryPi +- Raku +--- +

In this tutorial I'll get you through setting up a Raspberry Pi with
[Perl 6](https://perl6.org/). I am using a Raspberry Pi 3 myself, but other
versions should work fine too. However, older versions are slower, so it might
take a bit longer to install completely.

{{< admonition title="Note" >}}
For those who have never had a Raspberry Pi before, you will need
the following:

- Raspberry Pi board
- Power supply (5v 2A, micro USB)
- SD card of at least 4 GB, but I would advise at least 8 GB
- Monitor with HDMI cable
- Keyboard
{{< / admonition >}}

Perl 6 will be installed using
[Rakudobrew](https://github.com/tadzik/rakudobrew), which I'll also be using to
get [zef](https://github.com/ugexe/zef) installed. Zef is the recommended
module manager for Perl 6.

## Setting up Raspbian

The first step is getting the OS set up. To keep this tutorial simple, I will
stick to [Raspbian](https://www.raspbian.org/), but if you feel confident in
your skills you can use any other distribution or OS. Perl 6 installs the same
on all UNIX(-like) operating systems.
### Get the image

First, [download the Raspbian image from the Raspberry Pi download
page](https://www.raspberrypi.org/downloads/raspbian/). I chose the `LITE`
version, but if you prefer having a graphical desktop you can go for the
`DESKTOP` version instead.

At the time of writing, this means I got the
`2017-09-07-raspbian-stretch-lite.zip`. If you want to verify you got the
correct download and nothing went wrong saving it to your disk, you can verify
the checksum. The checksum for your download is noted below the download links.
To get the checksum of the file you downloaded, use `sha256sum` as follows:

NOTE: Lines prepended with a `$` are to be run as your normal user, whereas
lines with a `#` are meant to be run as "super user". This can be done by using
a privilege escalation program, such as
[`sudo`](https://www.linux.com/blog/how-use-sudo-and-su-commands-linux-introduction).

    $ sha256sum 2017-09-07-raspbian-stretch-lite.zip

If the checksum matches the one noted below the download button you used, it
should be fine, and you can continue with extracting the image from the zip
using `unzip`:

    $ unzip 2017-09-07-raspbian-stretch-lite.zip

This will result in a similarly named file, but with a `.img` extension instead
of `.zip`. This is the image that you can write to the SD card.

### Write the image to the SD card

This step is pretty easy, but typos here can be disastrous for the system
you're using to write to the SD card.

Open a terminal and run `dmesg -w` as super user (usually doable using `sudo
dmesg -w`). This will give immediate feedback when you insert your SD card, and
shows which device it is being assigned to. In my case, this was `sdb`, which
means the device file resides at `/dev/sdb`.

Now, to actually write the image, I'll use `dd` since this is everyone's
favourite tool, it seems.
If you feel adventurous enough to try out
+something different, feel free to read up on
+[Useless Use of `dd`](https://www.vidarholen.net/contents/blog/?p=479).
+
+Make sure the `if` argument points to the correct path of your extracted
+Raspbian image, and the `of` argument to the correct device as identified
+earlier. In order to be allowed to run this command, you must be root, which
+can be achieved by using `sudo` or `doas` again.
+
+    # dd bs=4M status=progress if=/path/to/2017-09-07-raspbian-stretch-lite.img of=/dev/sdb
+    $ sync
+
+Afterwards, plug it into your Raspberry Pi and attach all cables you might
+need. Think of stuff like a keyboard, mouse, monitor, internet, and power.
+Attach power last, as the Raspberry Pi will start immediately once it receives
+power.
+
+### First boot
+
+The Raspberry Pi should start booting the moment you supply it with power. If
+you attach the HDMI cable after the power, it's possible the display won't
+work, so make sure HDMI is attached before powering up.
+
+You'll see some text scrolling by, up to a point where it asks you for a
+`login`, and accepts keyboard input. The default username is `pi`, and the
+default password is `raspberry`. You are strongly advised to change the
+password upon login, which can be done in the next step.
+
+### Configuration
+
+The Raspberry Pi comes with its own configuration tool, `raspi-config`. Run it
+with `sudo` prepended so you gain the right privileges. I would advise you to
+at least change the user password from here. After this you should go to
+`Advanced Options` and expand the filesystem. This will grow the filesystem to
+the entire SD card's size.
+
+TIP: To get to the buttons on the bottom (`Select`, `Finish` and `Back`), use
+the arrow keys to go left or right.
+
+You can look around the tool for other interesting things to modify. Once you
+are satisfied, go back to the main menu and choose `Finish`. It will ask to
+reboot, which you should accept.
This will apply all the new configurations
+you just made.
+
+### Updating and installing additional packages
+
+It's rare for the system to be completely up to date after installing the image
+on the SD card. Additionally, you need some extra packages in order to get
+rakudobrew, and to install Perl 6 itself. For this, we use the package manager
+bundled with Raspbian, `apt`:
+
+    # apt update
+    # apt upgrade
+
+This will update the package lists, and then upgrade all outdated packages to
+their newest versions. You should do this at least once a week to make sure
+your system stays up to date.
+
+Once the upgrades are finished, you can install some new packages which are
+needed later on in this tutorial:
+
+    # apt install git build-essential
+
+`git` is required to get the rakudobrew repository and is also used by
+rakudobrew itself to get the sources needed to build Perl 6 and to install zef.
+The `build-essential` package comes with all sorts of tools to build software,
+which is required to build Perl 6.
+
+## Installing Perl 6
+
+Now we've got a working Raspberry Pi installation. We can start doing things
+with it, such as playing around with Perl 6.
+
+### Setting up Rakudobrew
+
+Rakudobrew is a nice tool to manage Perl 6 installations on your system. It can
+also install `zef` for you, so you don't have to deal with that manually. This
+is all documented in the repository's `README.md` file as well, but I'll
+explain it here too. I do make a few small tweaks here and there to match my
+preferred setup more closely.
+
+Clone the repository to your system, and add it to your `$PATH` to be able to
+use the scripts bundled with it:
+
+    $ mkdir -p ~/.local/var
+    $ git clone https://github.com/tadzik/rakudobrew.git ~/.local/var/rakudobrew
+    $ export PATH=${HOME}/.local/var/rakudobrew/bin:$PATH
+    $ hash -r
+
+The `hash -r` call will rehash your PATH, so you can tab-complete `rakudobrew`.
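Exporting `PATH` like this only affects the current shell session. To keep
rakudobrew available after logging out and back in, you can append the same
export to your shell's startup file. A minimal sketch, assuming `bash` and the
clone location used above:

```shell
# Persist the rakudobrew bin directory for future bash sessions.
echo 'export PATH=${HOME}/.local/var/rakudobrew/bin:$PATH' >> ~/.bashrc

# Apply it to the current session as well, and confirm the directory
# now shows up on PATH.
export PATH=${HOME}/.local/var/rakudobrew/bin:$PATH
echo "$PATH" | tr ':' '\n' | grep rakudobrew
```

If you use another shell, the file to edit differs (`~/.zshrc` for `zsh`, for
instance), but the export line stays the same.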
+
+Next, initialize rakudobrew:
+
+    $ rakudobrew init
+
+This will print a notification about loading rakudobrew automatically next
+time. It is advised that you follow its instructions, so you won't have to do
+it manually each time you log in to the system.
+
+### Installing Perl 6 with MoarVM backend
+
+Now that rakudobrew is installed and available to use, it's time to make use of
+it to install Perl 6.
+
+    $ rakudobrew build moar
+
+### Installing zef, the module manager
+
+Getting zef to work isn't much harder than installing Perl 6, but it's a lot
+faster. You can have rakudobrew take care of this too:
+
+    $ rakudobrew build zef
+
+## Final words
+
+And that should be it: you now have a working Perl 6 installation with the zef
+module manager to take care of installing and upgrading modules. Now you just
+need to come up with a nice project to work on to start using and learning the
+wonders of Perl 6.
+
+If you need any help getting started, try the `#perl6` IRC channel on
+Freenode, or check out some of the Perl 6 documentation and introduction sites:
+
+- https://docs.perl6.org/
+- http://perl6intro.com/
+
+For projects that are easy to start with and can bring quick results, consider
+making an IRC bot using
+[`IRC::Client`](https://github.com/zoffixznet/perl6-IRC-Client), or a small web
+application using [`Bailador`](https://github.com/Bailador/Bailador).
diff --git a/content/posts/2017/2017-12-17-on-cloudflare.md b/content/posts/2017/2017-12-17-on-cloudflare.md
new file mode 100644
index 0000000..f802937
--- /dev/null
+++ b/content/posts/2017/2017-12-17-on-cloudflare.md
@@ -0,0 +1,134 @@
+---
+title: On Cloudflare
+date: 2017-12-17
+tags:
+- Cloudflare
+- Security
+- Privacy
+---
+
+## Foreword
+
+Cloudflare is a threat to online security and privacy. I am not the first one
+to address this issue, and I probably will not be the last either. Sadly,
+people still seem to be very uninformed as to what issues Cloudflare actually
+poses.
+
+There also seems to be a big misconception about the benefits provided by using
+Cloudflare. I would suggest reading the [article on Cloudflare by
+joepie91](http://cryto.net/~joepie91/blog/2016/07/14/cloudflare-we-have-a-problem/)
+for a more thorough look at Cloudflare.
+
+If anyone you know is using Cloudflare, please tell them to stop doing it. Link
+them to this page or any of the articles referenced here. Cloudflare is harmful
+to your visitors, and if you do not care about them, they will stop caring
+about you too.
+
+## A literal MITM attack
+
+Cloudflare poses a huge risk by completely breaking the TLS/SSL chain used by
+browsers, setting itself up as a
+[man in the middle](https://en.wikipedia.org/wiki/Man-in-the-middle_attack).
+Cloudflare doesn't do actual DDoS protection; it simply makes the request to
+the origin server for you. Once it has received the data, it decrypts it and
+re-encrypts it with its own certificate. This means that Cloudflare has access
+to all requests in plain text and can optionally modify the data you see.
+TLS/SSL is meant to prevent this very issue, but Cloudflare seems to care very
+little.
+
+Even if we consider Cloudflare to be a benevolent entity that would surely
+never modify any data, this is still an issue. Much data can be mined from the
+plain text communications between you and the origin server. This data can be
+used for all kinds of purposes. It is not uncommon for the USA government to
+request a massive amount of surveillance information from companies without the
+companies being able to speak up about it due to a gag order. This has become
+clear once more through the [subpoena on
+Signal](https://whispersystems.org/bigbrother/eastern-virginia-grand-jury/). It
+should be clear to anyone that end-to-end encryption has to be a standard and
+implemented properly. Cloudflare goes out of its way to break this
+implementation.
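One way to see this termination in practice is to inspect the certificate a
site actually presents: a Cloudflare-fronted site serves a certificate issued
by a Cloudflare CA, not one obtained by the origin. A minimal sketch of the
`openssl` issuer check, demonstrated here on a locally generated self-signed
certificate so it runs without network access (the hostname `origin.example`
and the `/tmp` paths are placeholders, and `openssl` is assumed to be
installed):

```shell
# Create a throwaway self-signed certificate to demonstrate the check
# offline (a stand-in for a certificate fetched from a live server).
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/demo.key -out /tmp/demo.crt \
    -days 1 -subj '/CN=origin.example' 2>/dev/null

# Print the certificate's issuer. The equivalent check against a live,
# Cloudflare-fronted site names a Cloudflare CA as the issuer:
#   echo | openssl s_client -connect example.com:443 2>/dev/null \
#       | openssl x509 -noout -issuer
openssl x509 -noout -issuer -in /tmp/demo.crt
```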
+
+### Cloudbleed
+
+The danger of their MITM style of operation was shown by the
+[Cloudbleed](https://en.wikipedia.org/wiki/Cloudbleed) vulnerability. It also
+shows that they make use of their MITM position to scan the data your site and
+a visitor are exchanging. This includes private data, such as passwords.
+
+Even if you have an SSL connection to Cloudflare, they still decrypt it on
+their end. They then serve the content under their own certificate. This makes
+it look to the visitor like everything is secure; the browser says so, after
+all. But in reality, they don't have a secure connection to your server. They
+only have one up to Cloudflare, and when it reaches Cloudflare, they decrypt it
+and re-encrypt it using your certificate again. If you use one, of course;
+otherwise, they'll pass it on in plaintext back to your server, which is even
+more dangerous. Either way, the content exists in plaintext on Cloudflare's
+servers, which is not what you want if you truly care about security.
+
+## Eliminating your privacy
+
+If Cloudflare were to fix their MITM behavior, the privacy problem would not
+suddenly be solved. There are more questionable practices in use by Cloudflare.
+
+People who are using a VPN or an anonymization service such as Tor are usually
+greeted by a warning from Cloudflare. Let's set aside the fact that this
+warning is incorrect about why the user is receiving it, and look instead at
+the methodology used to "pass" it. Cloudflare presents you with a page that
+requires you to solve a reCaptcha puzzle, which is hosted by a well-known third
+party that tries to harm your privacy as much as possible: Google. If you do
+not wish to have Google tracking you all the time, you will not be able to
+solve these puzzles, and are, in effect, unable to access the site you were
+visiting.
It is also interesting to note that this reCaptcha
+system sometimes breaks if your browser does not identify itself as one of the
+regular mainstream browsers, such as Firefox or Chrome.
+
+Some site administrators disable this specific check. However, this still means
+all your requests are logged by another third party, namely Cloudflare itself.
+As noted in _A literal MITM attack_, this data is still very interesting to
+some parties. And do not fool yourself: metadata is still very worthwhile and
+can reveal a huge amount of information about a person.
+
+### Forcing JavaScript
+
+This issue generally does not concern many people, as most people online
+nowadays use a big mainstream browser with JavaScript enabled. However, there
+are still people, services and applications that do not use JavaScript. This
+makes sites unavailable when Cloudflare's "under attack" mode is enabled. This
+mode runs a check that sends Cloudflare your browser information before
+deciding whether you are allowed to access the website. This is yet another
+privacy issue, but at the same time a usability issue. It makes your site
+unavailable to people who simply do not wish to use JavaScript, or people who
+are currently limited to a browser with no JavaScript support.
+
+It is also common for Cloudflare to
+[break RSS readers](http://www.tedunangst.com/flak/post/cloudflare-and-rss) by
+presenting them with this check. This check is often presented to common user
+agents used by services and programs. Since these do not include a big
+JavaScript engine, there is no way for them to pass the test.
+
+## False advertising
+
+### DDoS protection
+
+Cloudflare is hailed by many as a gratis DDoS protection service, and they
+advertise themselves as such. However, Cloudflare does not offer DDoS
+protection; it simply acts as a pin cushion to soak up the hit. Real DDoS
+protection works by analyzing traffic, spotting unusual patterns, and blocking
+those requests.
If they were to offer real DDoS protection like
+this, they would be able to tunnel TLS/SSL traffic straight to the origin
+server, thereby not breaking the TLS/SSL chain as they do right now.
+
+It should also be noted that this gratis "protection" is not truly gratis
+either. If your site gets attacked for long enough, or often enough within a
+short time frame, you will be kicked off the gratis plan and be moved onto the
+"business" plan. This requires you to pay $200 per month for a service that
+does not do what it is advertised to do. If you do not move to the business
+plan, you will have about the same protection as you would have without it, but
+with the addition of ruining the privacy and security of your visitors.
+
+### Faster page loads
+
+This is very well explained in [joepie91's
+article](http://cryto.net/~joepie91/blog/2016/07/14/cloudflare-we-have-a-problem/)
+under the heading _But The Speed! The Speed!_. As such, I will refer to his
+article instead of repeating him here.
diff --git a/content/posts/2017/2017-12-21-funding-yourself-as-free-software-developer.md b/content/posts/2017/2017-12-21-funding-yourself-as-free-software-developer.md
new file mode 100644
index 0000000..a4e73cb
--- /dev/null
+++ b/content/posts/2017/2017-12-21-funding-yourself-as-free-software-developer.md
@@ -0,0 +1,233 @@
+---
+date: 2017-12-21
+title: Funding Yourself As A Free Software Developer
+tags:
+- FreeSoftware
+- Programming
+- Funding
+---
+
+I've been meaning to spend more time on developing free software, helping out
+new users on IRC, and writing more tutorials to get others started. All of
+these cost time, and time is money - so I've set out to create donation
+accounts. In the hope of helping other developers who struggle to fund their
+work, I've written up this article to talk about my experience. This is a
+living document! As you explore this yourself, please send me your thoughts on
+each platform and point me to interesting platforms I missed.
+
+I'll be focusing on platforms that allow for recurring donations, as these are
+more useful for procuring a stable income.
+
+## Platforms
+
+### BountySource
+
+{{< admonition title="Warning" >}}
+- Requires 3rd-party [Cloudflare](/post/2017/12/17/on-cloudflare/)-hosted
+  JavaScript sources to function.
+{{< / admonition >}}
+
+BountySource lets people donate money towards an issue on GitHub for your
+projects. Once an issue gets fixed, you can claim the "bounty" that was on this
+issue. This can also help in making clear which issue you should aim for next,
+and can increase contributor interest in your project.
+
+There's also BountySource Salt, which is a recurring donation platform.
+Projects or teams can use this to gain monthly income to sustain the
+development of their project(s).
+
+Support for this platform is offered through the IRC channel [`#bountysource` on
+Freenode](https://kiwiirc.com/client/chat.freenode.net:+6697/#bountysource).
+
+The BountySource platform itself is also free software, and the source code
+for it can be found [on GitHub](https://github.com/bountysource/core).
+
+You can find BountySource at https://www.bountysource.com/.
+
+### LiberaPay
+
+This service seems to be completely free as in freedom. They even
+[publish their source on GitHub](https://github.com/liberapay/liberapay.com).
+Their own funding comes through donations on their own platform, instead of
+taking a cut of each donation like most other services.
+
+It's possible to connect other accounts to your LiberaPay account. While this
+feature in general is pretty common, they allow you to link to sites which are
+interesting to show as a developer, such as GitHub, GitLab, and BitBucket. They
+also let you link to a Mastodon account, if you have one.
+
+To let people know you're accepting donations through LiberaPay, you can use
+one of the widgets they make available for you. This will show a donate button
+which links to your profile.
Do note that this is not a regular HTML button
+or cleverly implemented anchor tag, but a JavaScript-based button.
+
+Another thing LiberaPay lacks is a rewards system. Most other platforms allow
+you to set reward tiers, which allow you to give certain benefits to donors.
+
+You can find LiberaPay at https://liberapay.com/.
+
+### MakerSupport
+
+{{< admonition title="Warning" >}}
+- The site requires a 3rd-party hosted jQuery.
+- You have to solve a Google reCaptcha in order to register a new account.
+{{< / admonition >}}
+
+MakerSupport seems to be another option, aimed at content creators who might
+need freedom of speech more than others. It seems to be less focused on
+software development, as you cannot link to any of the major git hosting
+platforms.
+
+There are options here to set up "tiers" for your donors, which is a convenient
+way to provide them with perks for their support. For a free software
+developer, this might be something like access to more direct support from the
+developer.
+
+Sadly, registration wasn't as smooth as on most other platforms. My preferred
+username, "tyil", is too short. There's no indication of the requirements of
+any of the fields; you just get a popup on submission of the form saying a
+field is wrong.
+
+Additionally, the registration form requires some 3rd-party JavaScript to work,
+and a Google reCaptcha to be solved in order to get the submit button to show
+up. As I have set up uMatrix in my browser, this cost me some extra time to
+finish registration.
+
+Setting a profile image proved to be a little harder. First off, I'm still
+using uMatrix, so I had to allow 3rd-party (Amazon, in this case) XHR
+requests. Secondly, their error when uploading a "wrong" format is also not
+very user friendly, as it won't give you any details on why it's disallowed,
+nor what images are allowed instead.
+
+{{< admonition title="Note" >}}
+It seems they check the extension of the uploaded image's filename.
As far as I can
+tell, you're allowed to upload files that end with `.jpg` and `.png`.
+{{< / admonition >}}
+
+You can find MakerSupport at https://www.makersupport.com/.
+
+### Patreon
+
+{{< admonition title="Warning" >}}
+- Requires 3rd-party [Cloudflare](/post/2017/12/17/on-cloudflare/)-hosted
+  JavaScript sources to function.
+- You have to solve a Google reCaptcha in order to register a new account.
+{{< / admonition >}}
+
+Patreon is possibly the most famous donation-based funding platform available
+right now. Its popularity is a good thing, since this means there are probably
+many donors already using this platform.
+
+At Patreon, you can set up so-called goals. Goals are something I haven't found
+on other funding platforms. They allow you to set a target amount of money, and
+attach a reward to it. This way, you can inform your donors that you will be
+creating a certain kind of content once a one-time goal has been reached.
+Basically, you can show your donors what you're going to do with the money
+they're donating to you.
+
+Another interesting thing that I haven't seen on other platforms is the option
+to charge donors per creation, instead of per month. While this may seem less
+fitting for software developers (unless you want to get paid per commit, I
+guess), it's an interesting feature that's pretty unique. If you publish many
+tutorials, guides or other posts, this might fit you very well.
+
+You can link your account to other services, similarly to other platforms, but
+it seems to only allow you to link with proprietary social media platforms.
+
+You can find Patreon at https://www.patreon.com/home.
+
+### (Dis)honorable mentions
+
+#### Hatreon
+
+I've included this because I found people talking about it on IRC. However, it
+seems to be nothing more than a joke that's gone too far.
Its main reason
+for existing seems to be to get away from the political correctness found on
+earlier crowdfunding platforms, yet their site is invite-only, so those who are
+actually interested can't even use it. It seems that pledging is currently
+disabled as well, and has been for at least 10 days.
+
+## But that's not all
+
+Just setting up an account on a funding platform isn't enough. There's more to
+keeping a healthy and happy supporter base.
+
+### Spread awareness of your work
+
+Whether you're writing articles or publishing new releases of projects, tell
+the world you're doing whatever it is you're doing. If nobody knows about your
+project, they won't be able to show any kind of appreciation for it. Use social
+media outlets, public forums, mailing lists, anything! Tell them what you made,
+why it's useful and how they could use it to improve their digital life.
+
+{{< admonition title="Warning" >}}
+Of course, don't spam it to unrelated communication channels. This will only
+backfire.
+{{< / admonition >}}
+
+### Using the rewards system
+
+On the platforms that support a rewards system, make use of it. There are some
+little things you can do that go a long way with your supporters. For instance,
+you can offer things like stickers to donors who donate a certain amount of
+money to you. These are reasonably cheap to produce and ship, and many people
+like them.
+
+Another idea that seems to resonate well with donors is having a way to talk
+directly with the person they're supporting. This can be done by giving them
+access to an IRC channel for you and your donors. You can use another platform
+for this, but most free software enthusiasts are already on IRC, and there are
+few real-time communication alternatives that they're already using.
+
+### Don't stick to a single platform
+
+There are multiple platforms out there, so use them! Not all of them have the
+same userbase, and you can reach more people by giving them more options to
+work with.
+
+### Let people know you're accepting donations
+
+If people don't know you're even accepting donations, chances are pretty high
+you won't get any. Or if it's too hard to figure out how to donate to you,
+people will simply not make the effort. Make sure people can easily find out
+that you're accepting donations, and how to donate to you.
+
+### Show what you're doing with donation money
+
+Have a page with information about what you're doing with the money. This can
+be as simple as just saying you pay the rent and buy food with it. Most donors
+don't mind too much what you're doing with the money they donate to you, but a
+few do appreciate having this information available to them.
+
+It can be as simple as adding a `/donate` link to your site where you explain
+how to donate to you, and what you do with the donation money.
+
+{{< admonition title="Warning" >}}
+Don't let it turn into an annoying advertisement though; this will surely have
+the opposite effect.
+{{< / admonition >}}
+
+## Further reading
+
+There are more places to go for tips and tricks on getting funds to sustain
+your free software development work. I've listed a couple of these here for
+those interested.
+
+- [snowdrift.coop wiki on crowdfunding/fundraising services](https://wiki.snowdrift.coop/market-research/other-crowdfunding)
+- [A handy guide to financial support for open source](https://github.com/nayafia/lemonade-stand)
+
+## RFC
+
+I'd love to receive feedback on this, as I think being able to get donations
+easily for the work free software developers put into their projects is
+important.
+
+Getting to know more platforms and giving them a small write-up should help out
+other developers like me looking for the best platform for their use case. I'd
+also like to hear from developers already using a platform, to extend this
+article with more useful information on how to successfully get donors for
+their work.
+
+If you want to contact me, do take a look at the [Contact](/#contact) section,
+and let me know about your experiences with funding.
diff --git a/content/posts/2017/_index.md b/content/posts/2017/_index.md
new file mode 100644
index 0000000..141c28d
--- /dev/null
+++ b/content/posts/2017/_index.md
@@ -0,0 +1,3 @@
+---
+title: 2017
+---
diff --git a/content/posts/2018/2018-02-05-why-perl6.md b/content/posts/2018/2018-02-05-why-perl6.md
new file mode 100644
index 0000000..f2a8ea3
--- /dev/null
+++ b/content/posts/2018/2018-02-05-why-perl6.md
@@ -0,0 +1,271 @@
+---
+title: Why Perl 6?
+date: 2018-02-05
+tags:
+- Perl6
+- Raku
+---
+
+For about a year now, I've been working in Perl 6. Telling this to other people
+often brings about some confused faces. I've grown quite fond of Perl 6 the
+more I learn about it, yet the general developer community still seems to think
+Perl is a dirty word. In this article, I will detail some of the features that
+make me like Perl 6, and why I try to use it wherever possible.
+
+## Hassle-free command line arguments
+
+When creating an application, you usually want to be able to specify some
+arguments at runtime. Most times this happens using command line arguments or
+options. Perl 6 allows you to specify these in the
+[`MAIN`](https://docs.perl6.org/language/functions#index-entry-MAIN) subroutine
+signature.
+
+For instance, if I want the application to accept two string arguments, I can
+do it as easily as this:
+
+```raku
+sub MAIN (
+    Str $arg-one,
+    Str $arg-two,
+) {
+    ...
+}
+```
+
+Now, if you want to add an option like `--output=/path/to/file`, you can do
+it just like this:
+
+```raku
+sub MAIN (
+    Str $arg-one,
+    Str $arg-two,
+    Str :$output,
+) {
+    ...
+}
+```
+
+By default, if there's a `MAIN` available in your Perl 6 program, but the
+arguments or options supplied by the user are incorrect, it will display the
+right way to invoke the command, called the
+[`USAGE`](https://docs.perl6.org/language/functions#index-entry-USAGE).
+Of course, this message can be changed if you wish, but the default is quite
+good for most use-cases.
+
+However, sometimes you want to add a little explanation about what the argument
+or option is intended for, just for a little bit of additional user
+friendliness.
+
+Fear not, for this is also already covered by the defaults. In Perl, there was
+POD to document your code. In Perl 6, we have
+[POD](https://docs.perl6.org/language/glossary#index-entry-POD) as well. And
+these comments can be inspected at runtime to provide the user some
+information. And that's exactly what the default `USAGE` also does. So if you
+want to add some helpful comments to the arguments or the program itself,
+simply add the comments where you want them:
+
+```raku
+#| This is a sample program, just to showcase the awesome stuff available in
+#| Perl 6.
+sub MAIN (
+    Str $arg-one, #= Just a random argument
+    Str $arg-two, #= Yet another argument used for showcasing
+    Str :$output, #= Last but not least, an option which allows for a value
+) {
+    ...
+}
+```
+
+## Unicode
+
+What if you could support all languages with a single implementation? That's
+where Unicode comes in. And Perl 6 currently has the best support for Unicode
+out of all programming languages available. Its only real competitor seems to
+be Swift (at the time of writing this).
+
+But Perl 6 doesn't just use Unicode for handling strings; it treats Unicode as
+a core language feature. This means you can use Unicode characters in your
+source code as well. And that opens up some nice possibilities. Using the right
+Unicode characters allows you to write cleaner and more concise code, reducing
+the cognitive load while trying to understand the program.
+
+For instance, if you're trying to do any kind of math, you can just use the
+π character as a regular character. Or use ² to get the square of a certain
+number. This little piece is completely valid in Perl 6:
+
+```raku
+my $a = $r² ÷ π;
+```
+
+Now, if you're thinking "that looks neat, but how am I ever going to write
+these?", do not worry. Most operating systems and many editors have tools to
+let you input these. For instance, using `vim` with
+[`vim-perl6`](https://github.com/vim-perl/vim-perl6), you can just write "pi"
+and hit space (or type any non-alphabetical character).
+
+But not everyone is using an OS or an editor that makes it easy. And for those
+people, Perl 6 simply supports using [ASCII-based
+operators](https://docs.perl6.org/language/unicode_ascii). The previous block
+could also be written as follows:
+
+```raku
+my $a = $r ** 2 / pi;
+```
+
+As Unicode becomes more accepted, input methods will hopefully improve to make
+input easier for everyone in the long run. Those who can already input it
+easily don't have to wait for this future; Perl 6 already supports it.
+
+## Multithreading
+
+Multi-core processors are virtually everywhere these days. Yet many programming
+languages still don't support multithreaded application development natively,
+if at all. In Perl 6, running something in a different thread is as easy as
+wrapping it in a [`start`](https://docs.perl6.org/routine/start) block:
+
+```raku
+start {
+    do-something();
+}
+```
+
+`start` returns a [`Promise`](https://docs.perl6.org/type/Promise), which you
+can store in a scalar variable just like any other object. You can check
+whether the `Promise` has already completed, or whether it died, for instance.
+
+Other aspects which can often be spread over multiple threads are loops or
+maps.
For instance, consider the following
+[`map`](https://docs.perl6.org/routine/map) function:
+
+```raku
+@cats.map: {
+    $^cat.pat;
+}
+```
+
+This will pat each cat in turn, in the order they appear in the list. But you
+can speed up the patting process by patting multiple cats at the same time. And
+to get there, all you need to do is add a
+[`race`](https://docs.perl6.org/routine/race):
+
+```raku
+@cats.race.map: {
+    $^cat.pat;
+}
+```
+
+This will attempt to pat the cats over multiple threads, speeding up the
+process of patting all the cats. If the result of the pattings needs to be in
+the same order as the patting order, you use
+[`hyper`](https://docs.perl6.org/routine/hyper) instead of `race`:
+
+```raku
+@cats.hyper.map: {
+    $^cat.pat;
+}
+```
+
+## Object orientation
+
+Object oriented programming seems to be going out of fashion with the new
+generation of developers. But it's still in wide use, being taught at most
+universities, and is often easy to explain to new developers as well.
+
+And Perl 6 has [OO](https://docs.perl6.org/language/classtut#index-entry-OOP)
+support built into its core:
+
+```raku
+class Foo
+{
+    has Str $some-field;
+
+    method bar (
+        Str $some-arg,
+    ) {
+        ...
+    }
+}
+```
+
+You can also have
+[multi-dispatch](https://docs.perl6.org/language/glossary#index-entry-Multi-Dispatch)
+methods on your classes, which are methods with the same names, but accepting
+different arguments or argument types. For instance:
+
+```raku
+class Foo
+{
+    multi method bar (
+        Str $some-arg,
+    ) {
+        ...
+    }
+
+    multi method bar (
+        Int $some-arg,
+    ) {
+        ...
+    }
+}
+```
+
+Which method is used will be decided by the type of the argument being passed
+in, in this case either a [`Str`](https://docs.perl6.org/type/Str) or an
+[`Int`](https://docs.perl6.org/type/Int).
+
+## Functional programming
+
+Whilst OO is increasingly considered old-fashioned, functional programming is
+gaining ground.
And this paradigm is fully supported in the core of
+Perl 6 as well. You've seen the `map` example already while patting cats
+earlier, for instance.
+
+But there's much more on the functional playing field, such as the
+[`==>`](https://docs.perl6.org/routine/==%3E) operator, known as the [`feed
+operator`](https://docs.perl6.org/language/operators#infix_==%3E). It simply
+passes the output of a statement as the last argument to the next statement:
+
+```raku
+@grumpy-cats
+    ==> feed()
+    ==> pat()
+    ==> snuggle()
+    ==> my @happy-cats;
+```
+
+This will take the `@grumpy-cats`, feed them, pat them, snuggle them and put
+the result into `@happy-cats`. You could've chained the calls using a `.`
+instead, and Perl 6 allows you to do this too. But the `==>` looks much more
+readable to me, which is why I prefer using this instead.
+
+I'm still exploring the functional programming field myself, but these few
+things have made me happy exploring it.
+
+## Community
+
+(Almost) last, but certainly not least, the Perl 6 community is amazing. It's
+been the friendliest bunch I've been with, on IRC, on their mailing lists, and
+in real life. Everyone is welcoming, and they try to help you whenever they
+can.
+
+Community is important to help you out whenever you get stuck for whatever
+reason. A friendly community is the best thing you can have to keep yourself a
+happy developer as well.
+
+## Other little aspects
+
+There are a few neat things I can do in Perl 6 that I can't do in (most) other
+languages, but they aren't important enough to warrant a large section to show
+them off.
+
+### Dashes in names
+
+You can use dashes in names: things like `my $foo-bar` are valid, just like
+`method foo-bar`. It's nothing big in itself, but I've found it makes reading
+code much more enjoyable than PascalCase, camelCase or snake_case.
+
+### Gradual typing
+
+You don't *need* to use types in Perl 6.
But when you do want them (for
+making use of multi-dispatch, for example), you can just start using them. If
+you add types, the compiler will make sure they are correct. If you don't, you
+can always check them yourself (but why would you, when the compiler can do a
+better job for free?).
diff --git a/content/posts/2018/2018-03-20-perl6-introduction-to-application-programming.md b/content/posts/2018/2018-03-20-perl6-introduction-to-application-programming.md
new file mode 100644
index 0000000..2b8ea48
--- /dev/null
+++ b/content/posts/2018/2018-03-20-perl6-introduction-to-application-programming.md
@@ -0,0 +1,772 @@
+---
+title: "Perl 6 - Introduction to application programming"
+date: 2018-03-20
+tags:
+- Tutorial
+- Perl6
+- Assixt
+- GTK
+- Programming
+- Raku
+---
+
+In this tutorial, I'll be guiding you through creating a simple application in
+Perl 6. If you don't have Perl 6 installed yet, get the [Rakudo
+Star](http://rakudo.org/how-to-get-rakudo/) distribution for your OS.
+Alternatively, you can use the [Docker
+image](https://hub.docker.com/_/rakudo-star/).
+
+The application itself will be a simple dice-roller. You give it a number of
+dice to roll, and the number of sides the dice have. We'll start off by creating
+it as a console application, then work to make it a GUI as well with the
+`GTK::Simple` module.
+
+## Preparation
+
+First, you'll want to install the libgtk headers. How to get these depends on
+your distro of choice. For Debian-based systems, which includes Ubuntu and
+derivatives, this is done with the following `apt` invocation:
+
+```txt
+$ apt install libgtk-3-dev
+```
+
+For other distros, please consult your documentation.
+
+To ease module/application building, I'll use `App::Assixt`. This module takes
+care of common tasks required for building other modules or applications.
+So we'll start by installing this module through `zef`. 
+
+```txt
+$ zef install App::Assixt
+```
+
+{{< admonition title="note" >}}
+You may need to rehash your `$PATH` as well, which can be done using `hash -r`
+on `bash`, or `rehash` for `zsh`. For other shells, consult your manual.
+{{< / admonition >}}
+
+Next up, we can use `assixt` to create the new skeleton of our application,
+with the `new` subcommand. This will ask for some user input, which will be
+recorded in the `META6.json`, a JSON-formatted file to keep track of meta
+information about the module. `assixt` should take care of this file for you,
+so you never need to actually deal with it.
+
+```txt
+$ assixt new
+```
+
+### assixt input
+
+Since the `assixt new` command requires some input, I'll walk through these
+options and explain how they would affect your eventual application.
+
+#### Name of the module
+
+This is the name given to the module. This will be used for the directory name,
+which by default in `assixt` will be `perl6-` prepended to a lower-case version
+of the module name. If you ever wish to make a module that is to be shared in
+the Perl 6 ecosystem, this should be unique across the entire ecosystem. If
+you're interested in some guidelines, the [PAUSE
+guidelines](https://pause.perl.org/pause/query?ACTION=pause_namingmodules) seem
+to apply pretty well to Perl 6 as well.
+
+For this application, we'll use `Local::App::Dicer`, but you can use whatever
+name you'd prefer here.
+
+#### Your name
+
+Your name. This will be used as the author's name in the `META6.json`. It lets
+people find out who made the module, in order to report issues (or words of
+praise, of course).
+
+#### Your email address
+
+Your email address. Like your name, it will be used in case someone has to
+contact you in regard to the module.
+
+#### Perl 6 version
+
+This defaults to `c` right now, and you can just hit enter to accept it. 
In the
+future, there will be a Perl 6.d available as well, in which case you can use
+this to indicate you want to use the newer features introduced in 6.d. This is
+not the case yet, so you just want to go with the default `c` value here.
+
+#### Module description
+
+A short description of your module, preferably a single sentence. This is
+useful to people wondering what the module is for, and module managers can show
+it to the user.
+
+#### License key
+
+This indicates the license under which your module is distributed. This
+defaults to `GPL-3.0`, which I strongly recommend using. The de-facto
+default seems to be `Artistic-2.0`, which is also used for Perl 6 itself.
+
+This identifier is based on the [SPDX license list](https://spdx.org/licenses/).
+Anything not mentioned in this list is not acceptable, as tools that consume
+the `META6.json` expect a valid SPDX identifier here.
+
+## Writing your first test
+
+With the creation of the directory structure and metadata being taken care of
+by `assixt`, we can now start on writing things. Tests are not mandatory, but
+are a great tool for quickly checking if everything works. If you make larger
+applications, it really helps not having to manually test anything. Another
+benefit is that you can quickly see if your changes, or those of someone else,
+break anything.
+
+For creating the base template of a test, `assixt` can help you out again:
+`assixt touch` can create templates in the right location, so you don't have to
+deal with it. In this case we want to create a test, which we'll call "basic".
+
+```txt
+$ assixt touch test basic
+```
+
+This will create the file `t/basic.t` in your module directory. Its contents
+will look as follows:
+
+```raku
+#! /usr/bin/env perl6
+
+use v6.c;
+
+use Test;
+
+ok True;
+
+done-testing;
+
+# vim: ft=perl6
+```
+
+The only test it has right now is `ok True`, which will always pass testing. 
We
+will change that line into something more usable for this application:
+
+```raku
+use Local::App::Dicer;
+
+plan 2;
+
+subtest "Legal rolls", {
+    plan 50;
+
+    for 1..50 {
+        ok 1 ≤ roll($_) ≤ $_, "Rolls between 1 and $_";
+    }
+}
+
+subtest "Illegal rolls", {
+    plan 3;
+
+    throws-like { roll(0) }, X::TypeCheck::Binding::Parameter, "Zero is not accepted";
+    throws-like { roll(-1) }, X::TypeCheck::Binding::Parameter, "Negative rolls are not accepted";
+    throws-like { roll(1.5) }, X::TypeCheck::Binding::Parameter, "Can't roll half sides";
+}
+```
+
+{{< admonition title="note" >}}
+Perl 6 allows mathematical characters to make your code more concise, as with
+the ≤ in the above block. If you use [vim](http://www.vim.org/), you can make use
+of the [vim-perl6](https://github.com/vim-perl/vim-perl6) plugin, which has an
+option to change the longer, ASCII-based ops (in this case `<=`) into the
+shorter Unicode-based ops (in this case `≤`). This specific feature requires
+`let g:perl6_unicode_abbrevs = 1` in your `vimrc` to be enabled with
+`vim-perl6`.
+
+If that's not an option, you can use a
+[compose key](https://en.wikipedia.org/wiki/Compose_key). If that is not viable
+either, you can also stick to using the ASCII-based ops. Perl 6 supports both
+of them.
+{{< / admonition >}}
+
+This will run 53 tests, split up into two
+[subtests](https://docs.perl6.org/language/testing#Grouping_tests). Subtests are
+used to logically group your tests. In this case, the calls that are correct are
+in one subtest, and the calls that should be rejected are in another.
+
+The `plan` keywords indicate how many tests should be run. This will help spot
+errors in case your expectations were not matched. For more information on
+testing, check out [the Perl 6 docs on
+testing](https://docs.perl6.org/language/testing).
+
+We're making use of two test routines, `ok` and `throws-like`. `ok` is a
+simple test: if the given statement is truthy, the test succeeds. 
The other
+one, `throws-like`, might require some more explanation. The first argument it
+expects is a code block, hence the `{ }`. Inside this block, you can run any
+code you want. In this case, we run code that we know shouldn't work. The
+second argument is the exception it should throw. The test succeeds if the
+right exception is thrown. Both `ok` and `throws-like` accept a descriptive
+string as an optional last argument.
+
+### Running the tests
+
+A test is useless if you can't easily run it. For this, the `prove` utility
+exists. You can use `assixt test` to run these tests properly as well, saving
+you from having to manually type out the full `prove` command with options.
+
+```txt
+$ assixt test
+```
+
+You might notice the tests are currently failing, which is correct. The
+`Local::App::Dicer` module doesn't exist yet to test against. We'll be working
+on that next.
+
+{{< admonition title="note" >}}
+For those interested, the command run by `assixt test` is `prove -e "perl6
+-Ilib" t`. This will add the `lib` directory to the module search path, to be
+able to access the libraries we'll be making. The `t` argument specifies the
+directory containing the tests.
+{{< / admonition >}}
+
+## Creating the library
+
+Again, let's start with an `assixt` command to create the base template. This
+time, instead of `touch test`, we'll use `touch unit`.
+
+```txt
+$ assixt touch unit Local::App::Dicer
+```
+
+This will generate a template file at `lib/Local/App/Dicer.pm6` with some
+defaults set. The file will look like this:
+
+```raku
+#! /usr/bin/env false
+
+use v6.c;
+
+unit module Local::App::Dicer;
+```
+
+The first line is a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)). It
+informs the shell what to do when you try to run the file as an executable
+program. In this case, it will run `false`, which immediately exits with a
+non-success code. This file needs to be used as a Perl 6 module, and
+running it as a standalone program is an error. 
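+
+To illustrate (assuming you've made the file executable), running it directly
+just reports failure through the exit code — `false` does nothing else; the
+exact non-zero code may differ per system:
+
+```txt
+$ chmod +x lib/Local/App/Dicer.pm6
+$ ./lib/Local/App/Dicer.pm6
+$ echo $?
+1
+```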
+
+The `use v6.c` line indicates what version of Perl 6 should be used, and is
+taken from the `META6.json`, which was generated with `assixt new`. The last
+line declares the name of this module, which is `Local::App::Dicer`. Beneath
+this, we can add subroutines, which can be exported. These can then be accessed
+from other Perl 6 files that `use` this module.
+
+### Creating the `roll` subroutine
+
+Since we want to be able to `roll` a die, we'll create a subroutine to do
+exactly that. Let's start with the signature, which tells the compiler the name
+of the subroutine, which arguments it accepts, their types and what type the
+subroutine will return.
+
+{{< admonition title="tip" >}}
+Perl 6 is gradually typed, so all type information is optional. The subroutine
+arguments are optional too, but you will rarely want a subroutine that doesn't
+have an argument list.
+{{< / admonition >}}
+
+```raku
+sub roll($sides) is export
+{
+    $sides
+}
+```
+
+Let's break this down.
+
+- `sub` informs the compiler we're going to create a subroutine.
+- `roll` is the name of the subroutine we're going to create.
+- `$sides` defines an argument used by the subroutine.
+- `is export` tells the compiler that this subroutine is to be exported. This
+  makes the subroutine available to other programs that import this module
+  through a `use` statement.
+- `{ $sides }` is the subroutine body. In Perl 6, the last statement in a code
+  block is also its return value, thus this returns the value of `$sides`. A
+  closing `;` is also not required for the last statement in a block.
+
+If you run `assixt test` now, you can see it only fails 1 of the 2 subtests:
+
+```raku
+# TODO: Add output of failing tests
+```
+
+Something is going right, but not all of it yet. The 3 tests that check for
+illegal rolls are still failing, because there are no constraints on the input
+of the subroutine.
+
+### Adding constraints
+
+The first constraint we'll add is to limit the value of `$sides` to an `Int:D`.
+The first part of this constraint is common in many languages, the `Int` part.
+The `:D` requires the argument to be **defined**. This forces an actual
+existing instance of `Int`, not a `Nil` or undefined value.
+
+```raku
+sub roll(Int:D $sides) is export
+```
+
+Fractional input is no longer allowed, since an `Int` is always a round number.
+But an `Int` is still allowed to be 0 or negative, which isn't possible in a
+dice roll. Nearly every language will make you solve these two cases in the
+subroutine body. But in Perl 6, you can add another constraint in the signature
+that checks for exactly that:
+
+```raku
+sub roll(Int:D $sides where $sides > 0) is export
+```
+
+The `where` part specifies additional constraints, in this case `$sides > 0`.
+So now, only round numbers larger than 0 are allowed. If you run `assixt test`
+again, you should see all tests passing, indicating that all illegal rolls are
+now correctly disallowed.
+
+### Returning a random number
+
+So now that we can be sure that the input is always correct, we can start on
+making the output more random. In Perl 6, you can take a number and call
+`.rand` on it, to get a random number between 0 and the value of the number you
+called it on. This in turn can be rounded up to get a number ranging from 1 to
+the value of the number you called `.rand` on. These two method calls can also
+be chained to yield concise code:
+
+```raku
+sub roll(Int:D $sides where $sides > 0) is export
+{
+    $sides.rand.ceiling
+}
+```
+
+That's all we need from the library itself. Now we can start on making a usable
+program out of it.
+
+## Adding a console interface
+
+First off, a console interface. `assixt` can `touch` a starting point for an
+executable script as well, using `assixt touch bin`:
+
+```txt
+$ assixt touch bin dicer
+```
+
+This will create the file `bin/dicer` in your repository, with the following
+template:
+
+```raku
+#! 
/usr/bin/env perl6
+
+use v6.c;
+
+sub MAIN
+{
+    …
+}
+```
+
+The program will run the `MAIN` sub by default. We want to slightly change this
+`MAIN` signature though, since we want to accept user input. And it just so
+happens that you can specify the command line parameters in the `MAIN`
+signature in Perl 6. This lets us add constraints to the parameters and give
+them better names with next to no effort. We want to accept two numbers, one
+for the number of dice, and one for the number of sides per die:
+
+```raku
+sub MAIN(Int:D $dice, Int:D $sides where { $dice > 0 && $sides > 0 })
+```
+
+Here we see the `where` applying constraints again. If you try running this
+program in its current state without any arguments, you'll see the following:
+
+```txt
+$ perl6 -Ilib bin/dicer
+Usage:
+  bin/dicer <dice> <sides>
+```
+
+This output lists all possible ways to invoke the program. There's one
+slight problem right now. The usage description does not inform the user that
+both arguments need to be larger than 0. We'll take care of that in a moment.
+First we'll make this part work the way we want.
+
+To do that, let's add a `use` statement for the module in our `lib` directory,
+and call the `roll` function we created earlier. The `bin/dicer` file will come
+to look as follows:
+
+```raku
+#! /usr/bin/env perl6
+
+use v6.c;
+
+use Local::App::Dicer;
+
+sub MAIN(Int:D $dice, Int:D $sides where { $dice > 0 && $sides > 0 })
+{
+    say $dice × roll($sides)
+}
+```
+
+{{< admonition title="note" >}}
+Just like the `≤` character, Perl 6 allows you to use the proper multiplication
+character `×` (this is not the letter `x`!). You can use the more widely known
+`*` for multiplication as well. 
+{{< / admonition >}}
+
+If you run the program with the arguments `2` and `20` now, you'll get a random
+number between 2 and 40, just like we expect:
+
+```txt
+$ perl6 -Ilib bin/dicer 2 20
+18
+```
+
+### The usage output
+
+Now, we still have the problem that illegal number input does not clearly tell
+the user what's wrong. We can do a neat trick with [the `USAGE`
+sub](https://docs.perl6.org/language/functions#index-entry-USAGE) to achieve
+this. Perl 6 allows a subroutine with the name `USAGE` to be defined, overriding
+the default behaviour.
+
+Using this, we can generate a friendlier message that informs the user more
+clearly of what they need to supply. The `USAGE` sub would look like this:
+
+```raku
+sub USAGE
+{
+    say "Dicer requires two positive, round numbers as arguments."
+}
+```
+
+If you run the program with incorrect parameters now, it will show the text
+from the `USAGE` subroutine. If the parameters are correct, it will run the
+`MAIN` subroutine.
+
+You now have a working console application in Perl 6!
+
+## A simple GUI
+
+But that's not all. Perl 6 has a module to create GUIs with the
+[GTK library](https://www.gtk.org/) as well. For this, we'll use the
+[`GTK::Simple`](https://github.com/perl6/gtk-simple) module.
+
+You can add this module as a dependency to the `Local::App::Dicer` repository
+with `assixt` as well, using the `depend` command. By default, this will also
+install the dependency locally so you can use it immediately.
+
+```txt
+$ assixt depend GTK::Simple
+```
+
+### Multi subs
+
+Next, we could create another executable file and call it `dicer-gtk`. However,
+I can also use this moment to introduce
+[multis](https://docs.perl6.org/language/glossary#index-entry-multi-method).
+These are subs with the same name, but differing signatures. If a call to such a
+sub could potentially match multiple signatures, the most specific one will be
+used. 
We will add another `MAIN` sub, which will be called when `bin/dicer` is
+called with the `--gtk` parameter.
+
+We should also update the `USAGE` sub accordingly, of course. And while we're
+at it, let's also include the `GTK::Simple` and `GTK::Simple::App` modules. The
+first pulls in all the different GTK elements we will use later on, while the
+latter pulls in the class for the base GTK application window. The updated
+`MAIN`, `USAGE` and `use` parts will now look like this:
+
+```raku
+use Local::App::Dicer;
+use GTK::Simple;
+use GTK::Simple::App;
+
+multi sub MAIN(Int:D $dice, Int:D $sides where { $dice > 0 && $sides > 0 })
+{
+    say $dice × roll($sides)
+}
+
+multi sub MAIN(Bool:D :$gtk where $gtk == True)
+{
+    # TODO: Create the GTK version
+}
+
+sub USAGE
+{
+    say "Launch Dicer as a GUI with --gtk, or supply two positive, round numbers as arguments.";
+}
+```
+
+There's a new thing in the signature here as well: `:$gtk`. The `:` in
+front of it makes it a named argument, instead of a positional one. When used
+in a `MAIN`, this will allow it to be used like a long-opt, thus as `--gtk`.
+Its use in general subroutine signatures is explained in the next section.
+
+Running the application with `--gtk` gives no output now, because the body only
+contains a comment. Let's fix that.
+
+### Creating the window
+
+First off, we require a `GTK::Simple::App` instance. This is the main window,
+in which we'll be able to put elements such as buttons, labels, and input
+fields. We can create the `GTK::Simple::App` as follows:
+
+```raku
+my GTK::Simple::App $app .= new(title => "Dicer");
+```
+
+This one line brings in some new Perl 6 syntax, namely the `.=` operator.
+There's also the use of a named argument in a regular subroutine.
+
+The `.=` operator calls a method on the variable on the left. In our case,
+it will call the `new` method, which creates a new instance of the
+`GTK::Simple::App` class. This is commonly referred to as the **constructor**. 
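+
+The `.=` operator isn't specific to GTK classes; here is a minimal sketch using
+only a core type, where both declarations end up with an equivalent fresh
+instance:
+
+```raku
+# `my Type $var .= new` calls `new` on the variable's type constraint and
+# assigns the result back to the variable, so these two are equivalent:
+my Str $a .= new;
+my Str $b = Str.new;
+
+say $a eq $b; # True
+```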
+
+The named argument list (`title => "Dicer"`) is another commonly used feature
+in Perl 6. Any method can be given a non-positional, named parameter. This is
+done by prepending a `:` to the variable name in the sub signature.
+This has already been used in our code, in `multi sub MAIN(Bool:D :$gtk where
+$gtk == True)`. This has a couple of benefits, which are explained in the
+[Perl 6 docs on signatures](https://docs.perl6.org/type/Signature#index-entry-positional_argument_%28Signature%29_named_argument_%28Signature%29).
+
+### Creating the elements
+
+Next up, we can create the elements we'd like to have visible in our
+application window. We needed two inputs for the console version, so we'll
+probably need two for the GUI version as well. Since we have two inputs, we
+want labels for them. The roll itself will be performed on a button press.
+Lastly, we will want another label to display the outcome. This brings us to 6
+elements in total:
+
+- 3 labels
+- 2 entries
+- 1 button
+
+```raku
+my GTK::Simple::Label $label-dice .= new(text => "Number of dice");
+my GTK::Simple::Label $label-sides .= new(text => "Number of sides per die");
+my GTK::Simple::Label $label-result .= new(text => "");
+my GTK::Simple::Entry $entry-dice .= new(text => 0);
+my GTK::Simple::Entry $entry-sides .= new(text => 0);
+my GTK::Simple::Button $button-roll .= new(label => "Roll!");
+```
+
+This creates all the elements we want to show to the user.
+
+### Show the elements in the application window
+
+Now that we have our elements, let's put them into the application window.
+We'll need to put them into a layout as well. For this, we'll use a grid. The
+`GTK::Simple::Grid` constructor takes pairs, with the key being a tuple
+containing 4 elements, and the value containing the element you want to show.
+The tuple's elements are `x`, `y`, `w` and `h`, which are the x
+coordinate, y coordinate, width and height respectively. 
+
+This in turn takes us to the following statement:
+
+```raku
+$app.set-content(
+    GTK::Simple::Grid.new(
+        [0, 0, 1, 1] => $label-dice,
+        [1, 0, 1, 1] => $entry-dice,
+        [0, 1, 1, 1] => $label-sides,
+        [1, 1, 1, 1] => $entry-sides,
+        [0, 2, 2, 1] => $button-roll,
+        [0, 3, 2, 1] => $label-result,
+    )
+);
+```
+
+Put a `$app.run` beneath that, and try running `perl6 -Ilib bin/dicer --gtk`.
+That should provide you with a GTK window with all the elements visible in the
+position we want. To make it a little more appealing, we can add a
+`border-width` to the `$app`, which adds a margin between the border of the
+application window and the grid inside the window.
+
+```raku
+$app.border-width = 20;
+$app.run;
+```
+
+You may notice that there's no `()` after the `run` method call. In Perl 6,
+these are optional if you're not supplying any arguments anyway.
+
+### Binding an action to the button
+
+Now that we have a visible window, it's time to make the button perform an
+action. The action we want to execute is to take the values from the two
+inputs, roll the correct number of dice with the correct number of sides, and
+present the result to the user.
+
+The base code for binding an action to a button is to call `.clicked.tap` on it,
+and provide it with a code block. This code will be executed whenever the
+button is clicked.
+
+```raku
+$button-roll.clicked.tap: {
+};
+```
+
+As you can see, we can also invoke a method using `:`, followed by its
+arguments. This saves you the trouble of having to add additional `( )` around
+the call, and in this case it would be annoying to have to deal with yet
+another set of parens. 
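+
+This colon form works for any method call that takes arguments; a minimal
+sketch with core types (the array and values here are made up for
+illustration):
+
+```raku
+my @cats = <Felix Garfield Tom>;
+
+# These two calls are equivalent:
+@cats.push: "Nyan";
+@cats.push("Nyan");
+
+say @cats.elems; # 5
+```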
+
+Next, we give the code block something to actually perform:
+
+```raku
+$button-roll.clicked.tap: {
+    CATCH {
+        $label-result.text = "Can't roll with those numbers";
+    }
+
+    X::TypeCheck::Binding::Parameter.new.throw if $entry-dice.text.Int < 1;
+
+    $label-result.text = ($entry-dice.text.Int × roll($entry-sides.text.Int)).Str;
+};
+```
+
+There are a few new things in this block of code, so let's go over them.
+
+- `CATCH` is the block in which we'll end up if an exception is thrown in this
+  scope. `roll` will throw an exception if the parameters are wrong, and this
+  allows us to cleanly deal with that.
+- `X::TypeCheck::Binding::Parameter.new.throw` throws a new exception of type
+  `X::TypeCheck::Binding::Parameter`. This is the same exception type as thrown
+  by `roll` if something is wrong. We need to check the number of dice manually
+  here, since `roll` doesn't take care of it, nor does any signature impose any
+  restrictions on the value of the entry box.
+- `if` behind another statement. This is something Perl 6 allows, and in some
+  circumstances it can result in cleaner code. It's used here because it
+  improves the readability of the code, and to show that it's possible.
+
+## The completed product
+
+And with that, you should have a dice roller in Perl 6, with both a console and
+a GTK interface. Below you can find the complete, finished source files which
+you should have by now.
+
+### t/basic.t
+
+```raku
+#! 
/usr/bin/env perl6 + +use v6.c; + +use Test; +use Local::App::Dicer; + +plan 2; + +subtest "Legal rolls", { + plan 50; + + for 1..50 { + ok 1 ≤ roll($_) ≤ $_, "Rolls between 1 and $_"; + } +} + +subtest "Illegal rolls", { + plan 3; + + throws-like { roll(0) }, X::TypeCheck::Binding::Parameter, "Zero is not accepted"; + throws-like { roll(-1) }, X::TypeCheck::Binding::Parameter, "Negative rolls are not accepted"; + throws-like { roll(1.5) }, X::TypeCheck::Binding::Parameter, "Can't roll half sides"; +} + +done-testing; + +# vim: ft=perl6 +``` + +### lib/Local/App/Dicer.pm6 + +```raku +#! /usr/bin/env false + +use v6.c; + +unit module Local::App::Dicer; + +sub roll(Int:D $sides where $sides > 0) is export +{ + $sides.rand.ceiling; +} +``` + +### bin/dicer + +```raku +#! /usr/bin/env perl6 + +use v6.c; + +use Local::App::Dicer; +use GTK::Simple; +use GTK::Simple::App; + +multi sub MAIN(Int:D $dice, Int:D $sides where { $dice > 0 && $sides > 0 }) +{ + say $dice × roll($sides) +} + +multi sub MAIN(Bool:D :$gtk where $gtk == True) +{ + my GTK::Simple::App $app .= new(title => "Dicer"); + my GTK::Simple::Label $label-dice .= new(text => "Number of dice"); + my GTK::Simple::Label $label-sides .= new(text => "Number of sides per die"); + my GTK::Simple::Label $label-result .= new(text => ""); + my GTK::Simple::Entry $entry-dice .= new(text => 0); + my GTK::Simple::Entry $entry-sides .= new(text => 0); + my GTK::Simple::Button $button-roll .= new(label => "Roll!"); + + $app.set-content( + GTK::Simple::Grid.new( + [0, 0, 1, 1] => $label-dice, + [1, 0, 1, 1] => $entry-dice, + [0, 1, 1, 1] => $label-sides, + [1, 1, 1, 1] => $entry-sides, + [0, 2, 2, 1] => $button-roll, + [0, 3, 2, 1] => $label-result, + ) + ); + + $button-roll.clicked.tap: { + CATCH { + $label-result.text = "Can't roll with those numbers"; + } + + X::TypeCheck::Binding::Parameter.new.throw if $entry-dice.text.Int < 1; + + $label-result.text = ($entry-dice.text.Int × roll($entry-sides.text.Int)).Str; + }; + + 
$app.border-width = 20;
+
+    $app.run;
+}
+
+sub USAGE
+{
+    say "Launch Dicer as a GUI with --gtk, or supply two positive, round numbers as arguments.";
+}
+```
+
+## Installing your module
+
+Now that you have a finished application, you probably want to install it as
+well, so you can run it by calling `dicer` in your shell. For this, we'll be
+using `zef`.
+
+To install a local module, tell `zef` to try and install the local directory
+you're in:
+
+```txt
+$ zef install .
+```
+
+This will resolve the dependencies of the local module, and then install it.
+You should now be able to run `dicer` from anywhere.
+
+{{< admonition title="warning" >}}
+With most shells, you have to "rehash" your `$PATH` as well. On `bash`, this is
+done with `hash -r`, on `zsh` it's `rehash`. If you're using any other shell,
+please consult the manual.
+{{< / admonition >}}
diff --git a/content/posts/2018/2018-05-07-sparrowdo-getting-started.md b/content/posts/2018/2018-05-07-sparrowdo-getting-started.md
new file mode 100644
index 0000000..419e98d
--- /dev/null
+++ b/content/posts/2018/2018-05-07-sparrowdo-getting-started.md
@@ -0,0 +1,233 @@
+---
+title: Sparrowdo - Getting Started
+date: 2018-05-07
+tags:
+- LoneStar
+- Perl6
+- Raku
+- Sparrowdo
+- Tutorial
+---
+
+[Sparrowdo](https://github.com/melezhik/sparrowdo) is a Perl 6 project to
+facilitate automatic configuration of systems. There's a
+[repository of useful modules](https://sparrowhub.org/) to make specific cases
+easier to work with, but the
+[Core DSL](https://github.com/melezhik/sparrowdo/blob/master/core-dsl.md) can
+already take care of many tasks. In this tutorial, I'll guide you through
+setting up Sparrowdo, bootstrapping it onto your local system, writing a task
+and running it.
+
+## Install Sparrowdo
+
+Sparrowdo is a [Perl 6](http://perl6.org/) project, so you'll need to have Perl
+6 installed. We'll also use the Perl 6 package manager
+[zef](https://github.com/ugexe/zef/) to install Sparrowdo itself. 
Luckily for +us, there's a stable distribution of Perl 6 with everything we need added to it, +called [Rakudo Star](https://rakudo.org/files). And to make it easier for +GNU+Linux users, I wrote a tool to fetch the latest Rakudo Star release, compile +it and install it, called [LoneStar](https://github.com/Tyil/lonestar). Since +this tutorial will aim at GNU+Linux users, I'll use that to install Perl 6. + +### Installing Perl 6 with LoneStar + +LoneStar is a Bash application to download, compile and set up Perl 6. It's a +standalone application, meaning you don't have to install it to your system. You +can just run it from the source directory. First, we'll have to get the source +directory, which we'll do using `git`. + +```txt +mkdir -p ~/.local/src +git clone https://github.com/tyil/lonestar.git ~/.local/src/lonestar +cd !$ +``` + +Now you have the LoneStar sources available in `~/.local/src/lonestar`. You can +run the application using `./bin/lonestar`. Running it, you'll get some help +output: + +```txt +$ ./bin/lonestar +lonestar - Installation manager for Rakudo Star + +Usage: lonestar <action> [arguments...] + +Actions: + help [action] + init [version=latest] + install [version=latest] + path [version=latest] + reinstall [version=latest] + upgrade +``` + +We'll be needing the `install` action to get Perl 6 installed, and the `init` +action to configure the `$PATH` environment variable. Depending on your +hardware, `install` may take a couple minutes as it will compile Rakudo Perl 6 +and install some base modules. You might want to grab a drink during this +period. + +```txt +$ ./bin/lonestar install +$ eval $(./bin/lonestar init) +$ perl6 -v +This is Rakudo Star version 2018.04.1 built on MoarVM version 2018.04.1 +implementing Perl 6.c. +``` + +{{< admonition title="note" >}} +If there's a newer version available of Rakudo Star, the version numbers given +by `perl6 -v` will differ for you. 
+{{< / admonition >}}
+
+### Installing Sparrowdo with zef
+
+Now that you have Perl 6 available and installed, you can continue by using
+`zef` to install Sparrowdo. `zef` is bundled with Rakudo Star, so you don't have
+to do anything to get it working.
+
+```txt
+zef install Sparrowdo
+```
+
+This will instruct `zef` to install Sparrowdo and all its dependencies. This can
+take a couple of minutes, again depending on the hardware of your machine.
+
+## Bootstrapping your system
+
+The first step to working with Sparrowdo is bootstrapping the system you wish to
+use it with. In this case, that'll be the local system. There's a `--bootstrap`
+option to do this automatically.
+
+```txt
+sparrowdo --bootstrap
+```
+
+{{< admonition title="tip" >}}
+If you wish to bootstrap a remote system, you can use the `--host` option to
+specify the system. For example: `sparrowdo --host=192.168.1.2 --bootstrap`.
+{{< / admonition >}}
+
+Now your system is ready to be configured automatically using Sparrowdo!
+
+## Sparrowfiles
+
+Sparrowfiles are the files that describe the tasks Sparrow should execute to
+get you the configuration you want. They are valid Perl 6 code, and call the
+subroutines (or _sparrowtasks_) that will handle the actual actions. By default,
+when running `sparrowdo`, it will look for a file named `sparrowfile` in the
+current directory.
+
+To make our sample, we'll create a new directory to work in, so we have a clean
+directory that can be shared easily. You can also keep this directory under
+version control, so you can distribute the `sparrowfile` with all its templates.
+
+{{< admonition title="tip" >}}
+If you just want to create an empty directory to test things in, without
+"polluting" the rest of your system, just call `cd -- "$(mktemp -d)"`. This will
+create a temporary directory and change the working directory to it. 
+{{< / admonition >}}
+
+I'll be using `~/.local/sparrowdo/local-dns` to work in, as I'll be setting up a
+local DNS cache with [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html)
+for the sample code.
+
+### Writing a `sparrowfile`
+
+As noted in the previous paragraph, for the sake of a demo I'll guide you
+through creating a `sparrowfile` to install and configure `dnsmasq` as a local
+DNS cache. Using your favourite `$EDITOR`, write the following to `sparrowfile`:
+
+```raku
+package-install "dnsmasq";
+directory "/etc/dnsmasq.d";
+file-create "/etc/dnsmasq.conf", %(content => slurp "dnsmasq.conf");
+file-create "/etc/dnsmasq.d/resolv.conf", %(content => slurp "resolv.conf");
+service-start "dnsmasq";
+```
+
+This `sparrowfile` will set up the following configuration for `dnsmasq`:
+
+- Install the `dnsmasq` package
+- Create the `/etc/dnsmasq.d` directory in which we'll store configuration files
+  for `dnsmasq`
+- Create the configuration file `dnsmasq.conf` at `/etc/dnsmasq.conf`
+- Create `resolv.conf` in the `dnsmasq.d` directory
+- Start the `dnsmasq` service
+
+The configuration files will be created based on the configuration files in the
+current directory. So for this to work, you'll need to also create the
+appropriate configuration files. Let's start off with the main `dnsmasq`
+configuration in `dnsmasq.conf`:
+
+```conf
+listen-address=127.0.0.1
+
+no-dhcp-interface=
+resolv-file=/etc/dnsmasq.d/resolv.conf
+```
+
+This will make `dnsmasq` listen on the loopback interface, so it can only be
+used by the local machine. Furthermore, DHCP functionality will be disabled,
+and the upstream resolvers are read from `/etc/dnsmasq.d/resolv.conf`.
+The contents of that file are as follows:
+
+```conf
+nameserver 37.235.1.174
+nameserver 37.235.1.177
+```
+
+These nameservers are part of the [FreeDNS](https://freedns.zone/en/) project.
+You can of course use whatever other DNS provider you want to use as your
+upstream servers. 
Now, for `dnsmasq` to be used, you will also need to set your
+machine's DNS resolvers to point to the `dnsmasq` service. This is defined in
+`/etc/resolv.conf`, so let's append the following to our `sparrowfile` to set
+that up.
+
+```raku
+bash "chattr -i /etc/resolv.conf";
+file-delete "/etc/resolv.conf";
+file-create "/etc/resolv.conf", %(content => "nameserver 127.0.0.1");
+bash "chattr +i /etc/resolv.conf";
+```
+
+This will remove the "immutable" attribute from `/etc/resolv.conf` if it's set.
+Next it will remove the current `/etc/resolv.conf` and write out a new one which
+only refers to the local machine as DNS resolver. This is to ensure an existing
+`/etc/resolv.conf` gets recreated with the configuration we want. Finally, it
+adds back the immutable attribute to the file, so other processes won't
+overwrite it.
+
+### Running the `sparrowfile`
+
+To run the `sparrowfile` and get the setup you desire, run the `sparrowdo`
+command with `--local_mode` and wait.
+
+```txt
+sparrowdo --local_mode
+```
+
+{{< admonition title="note" >}}
+If you want to run this on a remote machine to configure that one instead, you
+can use `--host=<ip>` instead of `--local_mode`.
+{{< / admonition >}}
+
+You can check whether it actually worked by inspecting the files in
+`/etc/dnsmasq.d` and your `/etc/resolv.conf`. The easiest way to check their
+contents would be by using `cat`:
+
+```txt
+cat /etc/dnsmasq.conf
+cat /etc/dnsmasq.d/resolv.conf
+cat /etc/resolv.conf
+```
+
+## Closing words
+
+You should now have a working local DNS setup, configured programmatically
+through Sparrowdo. This allows you to easily get it working on other machines as
+well, and updates can be done in a much simpler fashion for all of them
+together.
+
+If you have more interest in automating configuration with Sparrowdo, go check
+their website, https://sparrowdo.wordpress.com/.
diff --git a/content/posts/2018/2018-08-15-the-perl-conference-in-glasgow.md b/content/posts/2018/2018-08-15-the-perl-conference-in-glasgow.md new file mode 100644 index 0000000..3c01edc --- /dev/null +++ b/content/posts/2018/2018-08-15-the-perl-conference-in-glasgow.md @@ -0,0 +1,304 @@ +--- +title: The Perl Conference in Glasgow +date: 2018-08-23 +tags: +- Conference +- Perl +--- + +This year the European Perl Conference was hosted in Glasgow, and of course +I've attended a number of presentations there. On some of these, I have some +feedback or comments. These talks, and the feedback I have for them, are +detailed in this blog post. + +{{< admonition title="note" >}} +The first talk I cover is not so much about Perl, but more about politics, as +the talk was mostly about the speaker's ideology. If this does not interest you, +I'd suggest you skip the [Discourse Without Drama](#discourse-without-drama) +section, and head straight to the [European Perl Mongers Organiser's Forum +2018](#european-perl-mongers-organiser-s-forum-2018). +{{< / admonition >}} + +## Discourse Without Drama + +This was the first talk, and the only talk available at this timeslot. I am +personally very much against the diversity ideology, and must admit I am +skeptical of such presentations from the get-go. Nonetheless, I did stay until +the end and tried to give it a fair shot. However, I cannot sit idle while she +tries to force her ideology on this community I care very deeply about. + +{{< admonition title="note" >}} +I am not against the concept of diversity, I wholly support the idea of equal +opportunities. What I do not accept is the idea of equal outcome, or forced +diversity based on physical traits. This is what I refer to with "the diversity +ideology". I also don't think anyone has a right not to be offended, as this is +impossible to achieve in the real world. 
+{{< / admonition >}}
+
+One of the things that stood out to me is that the speaker tells us not to use
+logical fallacies to condemn her ideology. This in itself I can easily agree
+with. However, this should go both ways: we should also not use logical
+fallacies to promote her ideology. Most notably, she pointed out the
+[_argumentum ad populum_](https://en.wikipedia.org/wiki/Argumentum_ad_populum).
+This basically means that just because a lot of people do or say something
+doesn't make it right. And this applies to the idea that we need to push the
+diversity ideology in the Perl community as well. Try to bring facts and
+statistics to show that this ideology will actually improve the community in
+the long term. I've personally not seen any community improve with increasingly
+harsh punishments for increasingly minor offenses.
+
+Another thing which slightly bothered me is the useless spin into radical
+feminist ideology, which to me seems very off-topic for a Perl conference.
+We're not at a political rally, and these kinds of remarks have been very
+divisive in all sorts of other environments already. I'd rather not bring this
+kind of behaviour to a community which I have loved for being so incredibly
+friendly without needing special rules and regulations for it.
+
+Next, a point is raised that people should *not* grow a thicker skin. Instead,
+people should get softer hearts. While I can get behind the latter part, I
+have to disagree with the former. Reality shows that communications don't
+always go perfectly. This is even more true in a community that exists mostly
+in the digital space. Context is often lost here, and that can lead to
+situations where someone may feel attacked even if this was not the intention
+at all. I can safely say I've been in many situations where my comments were
+perceived as an attack when they were not meant to be.
+
+People need to be able to handle some criticism, and sometimes you'll just have
+to assume good faith from the other party. Telling people they should never
+have to consider context and have a right not to be offended fosters an
+environment in which people will be afraid to give genuine, valid feedback.
+
+She seemed very much in favour of an overly broad code of conduct as well, of
+which I am also a strong opponent. There are various articles online, such as
+[this one](https://shiromarieke.github.io/coc.html), which show that just
+slapping a generic, vague code of conduct onto a community isn't going to solve
+the issue of trolls or harmful behaviour. There's [another great
+article](http://quillette.com/2017/07/18/neurodiversity-case-free-speech/) that
+I was pointed towards that highlights how this attempt to censor people for the
+sake of not offending anyone can effectively halt creativity and the exchange of
+ideas. There was also an interesting quote written on one of the walls of the
+venue:
+
+{{< quote attribution="Oscar Romero" >}}
+Aspire not to have more, but to be more...
+{{< / quote >}}
+
+Don't try to add meaningless documents such as a code of conduct, which more
+often than not hurts a community instead of improving it. Try to be a better
+person that tries to solve actual issues without harming the community at large.
+Be the adult in the conversation that can take an insult, and still be kind.
+[Remember to hug the
+trolls](https://rakudo.party/post/On-Troll-Hugging-Hole-Digging-and-Improving-Open-Source-Communities#hug2:feedthehandthatbitesyou),
+and eventually they will hug you back.
+
+## European Perl Mongers Organiser's Forum 2018
+
+The Perl community isn't big nowadays. However, the Perl 6 language offers
+a lot of concepts which are very well suited for modern programming. Sadly, if
+no new users try out the language, it will be all for nothing. As such, we need
+to bring new blood into the community.
+
+One of the ways of doing this is by extending our promotion efforts outside of
+the Perl community. Most people who like Perl are in a social bubble with other
+people that are also familiar with the Perl programming language, be it 5 or 6.
+But we need to reach new people as well, who will most likely be outside of
+this social bubble. These people don't have to be techies either, they might
+just as well be marketeers or designers.
+
+I myself am part of the "techies", so I'll stick to this particular group for
+now. And I know people like me can be found at meetups, so it would be
+worthwhile to promote Perl at meetups which are not dedicated to Perl. Think of
+more generic programming meetups, or GNU+Linux User Groups. We have to be
+mindful not to be too pushy, though. Listen to other people, and try to
+understand the problem they're facing. Most of them will not be open to using a
+different language immediately, especially not Perl (which sadly has a
+particularly bad standing amongst people unfamiliar with it). Try to assist
+them with their issues, and slowly introduce them to Perl (6) if it helps to
+showcase what you mean. It might also be interesting to show people examples of
+how to solve certain issues before telling them the language's name, so they
+don't have a negative preconception solely from the name.
+
+Another thing to note is that Perl is more than just a programming language.
+It's a community, and a large library of modules, known as CPAN. And CPAN
+offers some nifty tools, such as the CPAN testers, which help assure module
+developers that their code runs on a massive set of platforms and Perl
+versions.
+
+This has led me to consider the creation of a new Perl 6 module:
+`CPAN::Tester`, to make it easy for people to contribute to a large-scale
+testing environment for Perl 6. The idea is that one can run `CPAN::Tester` on
+their machine, which will keep track of new Perl 6 modules being uploaded to
+CPAN.
The results are to be sent to another server (or multiple servers), which
+can aggregate the data and show a matrix of test results. This aggregating
+server could also be built as a Perl 6 module, possibly named
+`CPAN::Tester::ResultsServer`. This would make setting up an environment
+similar to CPAN testers for Perl 5 quite easy for Perl 6.
+
+## Perl 6 in Real Life $Work
+
+The speaker shows the perfect use case for
+[Perl 6 grammars](https://docs.perl6.org/language/grammars): advanced yet
+readable parsing of text and performing actions with the results. It's an
+interesting talk, showcasing some nifty grammar constructs. The best part of
+this is that it actually runs in production, where it parses over 700 files,
+comprising over 100,000 lines of code, in about 22 seconds (on his laptop).
+This goes to show that Perl 6 is no longer "too slow to use in production".
+
+It might be interesting to run this application of grammars on every Perl 6
+release to gather more information on the speed improvements of Perl 6, much
+like Tux's `Text::CSV` runs.
+
+## Releasing a Perl 6 Module
+
+The speaker starts off by detailing the platform which most Perl 6 modules
+use to host their code repository, GitHub. He also touched upon automated
+testing using Travis and AppVeyor. It was good to show how to make use of
+these, as automated testing oftentimes stops unintended bugs from reaching end
+users. But, I personally prefer GitLab over GitHub, as they have much better
+testing functionality, and they actually release their own platform as an open
+source package. I'd like to see more GitLab love from the community and speakers
+as well if possible. This would also make the speaker's CI configuration
+simpler, for which he currently uses a `.travis.yml` file. This requires him to
+build Perl 6 from source every test run, wasting quite a lot of time.
+
+It was also noted that there's a module to help you set up this module
+skeleton, `mi6`.
The speaker also noted that it doesn't seem to add much once
+you know how a Perl 6 module is organized, and I tend to agree with this.
+Actually, I made a module precisely because I agree with him here,
+`App::Assixt`. This module intends to streamline the entire course of module
+development, not just the creation of a skeleton file. It will take care of
+keeping your `META6.json` up to date, and ease uploading your module to CPAN as
+well.
+
+Lastly, the speaker says the `META6.json` documentation can be found in S22.
+While this is technically correct, S22 is *not* the implementation's
+documentation; this lives in the official Perl 6 documentation instead. S22
+describes much additional information that can be stored in the `META6.json`,
+but using these fields will actually break installation of your module through
+`zef`, rendering it unusable by others. I would strongly recommend people not
+to use S22 when trying to figure out what they can or cannot do with their
+`META6.json`.
+
+## How to become a CPAN contributor?
+
+Submitting a pull request (or, more correctly named, a merge request) to a
+repository is possibly the most straightforward way to help out other projects.
+However, sometimes it will take a long time to get a response. The speaker
+notes this can actually be on the scale of years. I have authored a number of
+modules myself, and have been in the situation where I had not realized I got a
+merge request from another person (same goes for issue reports). I would
+recommend people who are not getting timely responses to their contributions to
+contact the maintainer via other channels which are more suited for
+communication. Think of email or IRC, for instance. You'll generally have a
+much better chance of getting a timely response from the author, and then you
+can work out your contribution and see if you can get it merged into the main
+project.
+
+The speaker also lists a couple of ways to get started with contributing to
+modules.
One thing I missed in particular was the Squashathons[^1] for Perl 6.
+These generally offer a good entry point to help out with the language's
+development and the ecosystem's maintenance.
+
+Near the end, it was pointed out that it is a good idea to have a thick skin.
+Even when it's not intended, people can come across as rude. This is in
+opposition to the talking point of the speaker yesterday (_Discourse Without
+Drama_), but he does raise a good point here. People oftentimes don't mean to
+insult you, but context is easily lost in written communications. Try to stay
+mature and professional; you can simply ask for clarification. If you feel the
+person remains hostile towards you, walk away. There's plenty of other projects
+that would love your contributions!
+
+## Conference Organizers & European Perl Mongers Organiser's Forum 2018 BoF
+
+Well, that's certainly a mouthful for a heading, and it even contains an
+abbreviation! This event was not a presentation, but a platform to exchange
+ideas together.
+
+One of the items that were up for discussion was _A Conference Toolkit_, or ACT
+for short. This is the platform used to organize Perl events, such as this
+conference and Perl workshops throughout the world. However, ACT is dated.
+They enabled HTTPS a short while ago, but it's still not the default because
+people don't want to risk breaking the platform. I think this is enough of
+an indication that it might be time to make something new to replace it.
+
+And I'm not alone in that sentiment, it seems. However, ACT is big and contains
+a lot of data we don't want to lose. It's a massive undertaking to make a new
+tool that works at least as well, and allows us to make use of the old data as
+well. There is a Trello board available that lists all the features that would
+need to be implemented, so that's a good start already. I think now it needs
+a dedicated product owner with people contributing code, so a start can be
+made.
This does seem like a touchy subject, since I'm far from the first person
+to want this. Many before me have tried and failed already.
+
+As such, I'd propose not making it a Perl-centric tool. Make it a modular,
+generic event organizing tool. Get a good database design that we can import
+our old data into, so nothing is lost, but things can be converted to be more
+useful for our current needs. This way, we can work in small steps, and maybe
+even reach contributors from outside the regular Perl circles. This might even
+bring in new partnerships (or sponsors) to the Perl community.
+
+Personally, I'd like to see something like this written in Perl 6. This
+way, it could also be used as a showcase project for the Perl 6 programming
+language.
+
+## Writing a Perl 6 Module
+
+Perl 6 has this very neat feature called
+[subsets](https://docs.perl6.org/language/typesystem#index-entry-subset-subset).
+These can be used to make your own types with very little effort, which can
+help tremendously to keep your code clean and concise. There are two arguments
+I have in favour of subsets that the speaker did not touch upon.
+
+First off, using a subset instead of a `where` clause in a sub or method
+signature will bring much better error messages. If you use a `where` in your
+signature, and the check fails, you'll get an error that there was no signature
+that matched `where { ... }`.
+
+Secondly, if you want to use abstract methods, you can't really use a `where`.
+[I've asked a question about this on Stack
+Overflow](https://stackoverflow.com/questions/51570655/how-to-use-abstract-multi-methods-containing-a-where),
+which has the details as to why this doesn't work the way you might expect.
+
+Next, there are some cool things about operators in Perl 6. There are many of
+these available by default, and it's _very_ easy to add new ones yourself as
+well. In fact, the `Math::Matrix` module used throughout the presentation makes
+some available as well.
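+
+As a sketch of how little ceremony a new operator takes, here is the factorial
+postfix operator from the official Perl 6 documentation (an illustration only,
+not part of `Math::Matrix`):
+
+```raku
+# An operator is just a subroutine with a specially-formed name.
+sub postfix:<!> (Int $n) { [*] 1..$n }
+
+say 5!; # 120
+```
+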
Thanks to the ease of adding operators in Perl 6, if
+you have a `Math::Matrix $m`, you can get the norm by writing `|| $m
+||`. This is the mathematically correct way to write this, making it easy to
+understand for everyone using matrices in their daily lives. If you're a
+mathematician, small things like these are great to have.
+
+I have some comments on the `Math::Matrix` module itself as well, based on
+slides shown in the presentation. The first thing I noticed is that there's a
+`norm` method using a `where` clause when it's not needed:
+
+```raku
+method norm (Str $which where * eq 'row-sum')
+```
+
+This can be written instead as:
+
+```raku
+method norm ('row-sum')
+```
+
+This is shorter and clearer, and you'll get better feedback from the compiler as
+well. I [submitted a pull request on the GitHub
+repository](https://github.com/pierre-vigier/Perl6-Math-Matrix/pull/49) in an
+attempt to improve this, which got merged! The speaker was not aware it could be
+done in this manner, so I'm proud I got to teach him something right after he
+did his presentation.
+
+## Winding down
+
+I've had a great time at the Perl conference, spoke to many people with whom
+I've had some great discussions. I got to meet and personally thank a number of
+people who've helped me out over the past year as well.
+
+A big thank you to all the people who made this conference possible, and I hope
+to see you all again in Riga!
+
+[^1]: A Squashathon is like a hackathon, except everyone in the world is
+invited, and you can help out over the Internet, staying in your own home. Of
+course, you can still meet up with other developers and make it a social
+gathering in the real world as well!
diff --git a/content/posts/2018/2018-09-04-setting-up-pgp-with-a-yubikey.md b/content/posts/2018/2018-09-04-setting-up-pgp-with-a-yubikey.md new file mode 100644 index 0000000..36e0ef6 --- /dev/null +++ b/content/posts/2018/2018-09-04-setting-up-pgp-with-a-yubikey.md @@ -0,0 +1,442 @@ +--- +title: Setting up PGP with a Yubikey +date: 2018-09-04 +tags: +- GPG +- PGP +- Security +- YubiKey +--- + +I've recently started a job where I am required to have above-average security +practices in place on my machine. I already had some standard security in +place, such as full disk encryption and PGP encrypted email, but I thought that +this would be a good time to up my game. To accomplish this, I purchased a +Yubikey to act as my physical security token. Additionally, I have a USB device +which is also encrypted to hold backups of the keys. + +In this blogpost, I will detail how I set up my security policies in the hopes +it will be able to help out other people looking to improve their security, and +to get feedback to improve my set up as well. + +{{< admonition title="note" >}} +I am using the Yubikey 4. If you're using another version, some steps may +differ. +{{< / admonition >}} + +## Installing required software + +You'll need some software to set all of this up. Depending on your +distribution, some of it might already be installed. Everything not installed +yet should be installed with your distribution's package manager. + +For encrypting the disk and the USB key, you will need `cryptsetup`. To +generate and use the PGP keys, you will need `gpg`, at least version 2.0.12. To +interface with the Yubikey itself, you'll need `pcsc-lite`, and start the +service as well. It may be necessary to restart the `gpg-agent` after +installing `pcsc-lite`, which you can do by simply killing the existing +`gpg-agent` process. It restarts itself when needed. 
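+
+If you prefer not to hunt down the process by hand, GnuPG 2.1 and later also
+ship the `gpgconf` utility, which can stop the agent cleanly (shown here as an
+alternative; a plain `killall gpg-agent` works just as well):
+
+```txt
+gpgconf --kill gpg-agent
+```
+
+The agent is started again automatically the next time `gpg` needs it.
+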
+
+To securely remove the temporary data we need, you should make sure you have
+`secure-delete` available on your system as well.
+
+## Personalizing the Yubikey
+
+The Yubikey can be personalized. Some of this personalization is completely
+optional, such as setting personal information. However, setting new PIN codes
+is strongly advised, as the default values are publicly known.
+
+### PIN codes
+
+The PIN codes are short combinations of numbers, letters and symbols to grant
+permission to write to or retrieve data from the Yubikey. The default value for
+the user PIN is `123456`. The admin PIN is `12345678` by default. These should
+be changed, as they're publicly known and allow the usage of your private keys.
+To change these, use the `gpg` program and enter admin mode:
+
+```txt
+gpg --card-edit
+
+gpg/card> admin
+Admin commands are allowed
+```
+
+You'll notice it immediately says that admin commands are now allowed to be
+used. The admin PIN (`12345678`) will be asked for whenever an admin command is
+executed. It will then be stored for this session, so you won't have to enter
+it again right away. To update the PIN values, run the following commands:
+
+```txt
+gpg/card> passwd
+gpg/card> 3
+```
+
+This will change the admin PIN first. This PIN is required for managing the
+keys and user PIN on the Yubikey. To set the user PIN, pick `1` instead of `3`:
+
+```txt
+gpg/card> 1
+```
+
+Once this is done, you can quit the `passwd` submenu using `q`:
+
+```txt
+gpg/card> q
+```
+
+You may have noticed we skipped the reset code. Resetting the device will wipe
+existing keys, so it's not a serious risk to keep this at the default. The
+private keys will be backed up to an encrypted USB drive, so we can always
+retrieve them and put them back on the Yubikey if ever needed.
+
+### Personal information
+
+The personal information is optional, but could be used by a friendly person to
+find out who a found Yubikey belongs to.
They can contact the owner, and send
+the key back. You can set as many of the personally identifying fields as you
+want. If you're interested in setting this information, plug in your Yubikey
+and edit the card information with `gpg`:
+
+```txt
+gpg --card-edit
+```
+
+Once you're back in the GPG shell, you can update your personal information.
+There are 5 attributes that you can set in this way:
+
+- `name`, which is your real name;
+- `lang`, which is your preferred contact language;
+- `sex`, which is your real sex;
+- `url`, which indicates a location to retrieve your public key from;
+- `login`, which indicates your email address.
+
+Each of these attributes can be updated by running the matching command in the
+GPG shell. For instance, to update your real name, run the following:
+
+```txt
+gpg/card> name
+```
+
+You do not need to explicitly save once you're done. You can run `quit` to quit
+the GPG shell and return to your regular shell.
+
+## Creating PGP keys
+
+To create the PGP keys, we'll create a temporary directory which will function
+as our working directory to store the keys in. This way you can't accidentally
+break existing keys if you have them, and you ensure that the private keys don't
+accidentally linger on in your filesystem.
+
+### Preparing a clean environment
+
+To create such a temporary directory, we'll use `mktemp`, and store the result
+in an environment variable so we can easily re-use it:
+
+```sh
+export GNUPGHOME="$(mktemp -d)"
+```
+
+Now you can switch to that directory using `cd "$GNUPGHOME"`. Additionally,
+`$GNUPGHOME` is also the directory `gpg` uses as its working directory, if it
+is set. This means you can use a temporary custom configuration for `gpg` as
+well, without it affecting your normal setup.
The following configuration is
+recommended to set in `$GNUPGHOME/gpg.conf` before starting:
+
+```conf
+use-agent
+charset utf-8
+no-comments
+keyid-format 0xlong
+list-options show-uid-validity
+verify-options show-uid-validity
+with-fingerprint
+```
+
+If you have a `gpg-agent` running, it is recommended to stop it before
+continuing with `killall gpg-agent`.
+
+### Creating the master key
+
+For our master key, we'll go for a 4096-bit RSA key. 2048 would be plenty as
+well, if you want the generation to be a tad quicker. `gpg` will ask you a
+couple of questions to establish your identity, which is required for a PGP key.
+You can add more identities later, in case you're using multiple email
+addresses, for instance.
+
+Start the key generation process with `gpg`:
+
+```txt
+gpg --full-generate-key
+```
+
+When asked what kind of key you want, choose `4` (RSA (sign only)). Next is the
+key size, which should be `4096`.
+
+The key's expiration is optional, though highly recommended. It will be more
+effort to maintain the keys, as you'll occasionally need the private master
+keys to extend the validity, but you can also guarantee that your keys won't
+stay valid in case you ever lose them. If you don't want to bother with
+refreshing your keys from time to time, just press enter here to continue.
+
+When prompted on whether the data is correct, double-check whether the data is
+really correct, and then enter `y` and press enter to accept the current
+values. `gpg` will continue with your identity information, which you should
+fill out with your real information. The comment field can be left empty; this
+is an optional field to add a comment to your identity, such as "School" or
+"Work keys". `gpg` will ask your confirmation one final time. Enter an `o`
+(it's not case sensitive) and press enter again. The final step before it will
+generate a key is to enter a passphrase. This is technically optional, but
+highly recommended.
If anyone ever gets their hands on your private master key,
+they will need the passphrase in order to use it. Adding one is yet another
+layer against malicious use of your key.
+
+Once you've chosen a passphrase, it will generate the key and output some
+information about the key. Verify whether this information is correct one more
+time, and if it is, you can continue to the next step. If it is not, redo the
+whole PGP section of this post.
+
+Take note of the line starting with `pub`. It shows that the key is an
+`rsa4096` key, followed by a `/`, and then the key ID. You'll need this key ID
+throughout the rest of this post. For convenience, you can store this ID in
+a variable, and just refer to the variable when you need its value again:
+
+```sh
+export KEYID=0x27F53A16486878C7
+```
+
+This post will use the `$KEYID` variable from now on, to make it easier to
+follow.
+
+### Creating a revocation certificate
+
+The revocation certificate can be used to invalidate your newly created key.
+You should store it separately from the private master key, preferably printed
+on a sheet of paper. If you want to be able to easily read it back in, consider
+printing it as a QR code.
+
+To create the certificate, run the following:
+
+```txt
+gpg --gen-revoke $KEYID > $GNUPGHOME/revoke.txt
+```
+
+This will prompt you to specify a reason, for which you'll want to use `1`.
+This way you can easily revoke the key's validity if you ever lose it. If you
+want to revoke your keys in the future for any other reason, you can always
+generate a new revocation certificate for that specific purpose. You don't have
+to supply an additional description, so just hit enter. A revocation
+certificate will be written to `$GNUPGHOME/revoke.txt`.
+
+### Creating the subkeys
+
+Now that you have your master key and the ability to revoke it in case anything
+goes wrong in the future, it's time to create a couple of subkeys which can be
+stored on the Yubikey, and used in your daily life.
We'll create separate keys
+for _encryption_, _signing_ and _authentication_, and store each of them in
+their own dedicated slot on the Yubikey.
+
+To add subkeys to your master key, enter a GPG shell to edit your existing
+key with `gpg --expert --edit-key $KEYID`. The `--expert` is required to show
+all the options we're going to need. Once the GPG shell has started, run
+`addkey` to add a new key.
+
+Just like with the master key, a number of questions will be asked. Expiration
+for subkeys is generally not advised, as the subkeys will be considered invalid
+whenever the master key has expired. The key sizes for the subkeys can be left
+at 2048 as well, which is also the maximum size for keys for the older Yubikey
+models. The key type is different for all 3 subkeys.
+
+You will want to select type `4` (RSA (sign only)) for your signing key, type
+`6` (RSA (encrypt only)) for the encryption key, and type `8` (RSA (set your
+own capabilities)) for the authentication key. With the final key, it will ask
+you what capabilities you want to enable. The only capability you want it to
+have is *Authentication*.
+
+Once you've created the subkeys, you can check `gpg --list-secret-keys` to look
+at your newly created keys. You should have 1 `sec` key, which is the master
+key, and 3 `ssb` keys, which are the subkeys. One line should end with `[S]`,
+one with `[E]` and one with `[A]`. These denote the capabilities of the
+subkeys: _Sign_, _Encrypt_ and _Authenticate_, respectively.
+
+### Export the keys
+
+Now that you have your keys generated, you should export them, allowing you to
+easily import them in another environment in case you ever need to generate
+more keys, invalidate some keys, or extend the validity of the keys in case you
+set an expiry date.
This can be done with the following commands:
+
+```txt
+gpg --armor --export-secret-keys $KEYID > masterkey.asc
+gpg --armor --export-secret-subkeys $KEYID > subkeys.asc
+```
+
+## Creating a backup USB
+
+For the backup of the private keys, I'm using an encrypted USB device. You can
+also opt to print the keys to paper, and retype them if you ever need them. Or
+print a QR code that you can scan. But for convenience's sake, I went with a
+USB device. I encrypted it, and stored it in a safe and sealed location, so
+it's easy to detect unwanted attempted access.
+
+### Encrypting the USB
+
+For the encryption, I went with full device encryption using LUKS. You will
+need the `cryptsetup` utility to apply the encryption, and to unlock the drive.
+You can find out the device name from `dmesg` or `lsblk`. Once you know it,
+encrypt the drive with the `luksFormat` subcommand.
+
+{{< admonition title="warning" >}}
+Using the wrong name for the device can irrecoverably destroy data on another
+drive!
+{{< / admonition >}}
+
+```txt
+cryptsetup luksFormat /dev/sdb
+```
+
+It will prompt you whether you want to continue, and ask twice for a passphrase
+to ensure it is correct. Make sure you don't forget the passphrase, or you'll
+lose access to your backup keys.
+
+Once it has been encrypted, unlock the device.
+
+```txt
+cryptsetup luksOpen /dev/sdb crypt
+```
+
+This will open the device as `/dev/mapper/crypt`. Format it with your favourite
+filesystem. I used `ext4`.
+
+```txt
+mkfs.ext4 /dev/mapper/crypt
+```
+
+Once it has been formatted, you can mount it as a regular device.
+
+```txt
+mount /dev/mapper/crypt /mnt/usb
+```
+
+### Copying the keys
+
+Copying the keys is as straightforward as copying other files. You can use
+`$GNUPGHOME` to target the source directory.
+
+```txt
+cp -arv "$GNUPGHOME"/* /mnt/usb/.
+```
+
+Once the files are copied, you can unmount the drive, lock it and unplug the
+USB.
+ +```txt +sync +umount /mnt/usb +cryptsetup luksClose crypt +``` + +Store the USB in a safe location, because these private keys can give someone +full control of your identity. + +## Storing the private keys on the Yubikey + +The Yubikey has key slots for encryption, signing and authentication. These +need to be set individually, which can be done using `gpg`. First, you need to +select a key using the `key` command, then store it on the card using +`keytocard` and select a slot to store it in, then finally deselect the key by +using the `key` command again. + +```txt +gpg --edit-key $KEYID + +gpg> key 1 +gpg> keytocard +Your selection? 1 +gpg> key 1 + +gpg> key 2 +gpg> keytocard +Your selection? 2 +gpg> key 2 + +gpg> key 3 +gpg> keytocard +Your selection? 3 + +gpg> save +``` + +You can verify whether the keys are available on the Yubikey now using `gpg +--card-status`. It will show the key fingerprints for the `Signature key`, +`Encryption key` and `Authentication key`. + +### Sharing your public key + +You can share your public keys in many ways. Mine is hosted [on my own +site](/pubkey.txt), for instance. There are also [public +keyservers](https://sks-keyservers.net/) on which you can upload your keys. +`gpg` has the `--send-keys` and `--recv-keys` switches to interact with these +public keyservers. For ease of use, I would recommend uploading them to a public +keyserver, so that other people can easily import it. For instance, my key can +be imported using `gpg`: + +```txt +gpg --recv-keys 0x7A6AC285E2D98827 +``` + +## Clean up + +The keys are on the Yubikey, and you probably do not want to leave traces on +your local system of these new keys, so you should clean up the `$GNUPGHOME` +directory. There's a utility for securely removing a directory with all its +contents, called `secure-delete`, which provides the `srm` program. You can use +it just like the regular `rm` on the temporary directory. 
+
+```txt
+srm -r "$GNUPGHOME"
+```
+
+You can also `unset` the `$GNUPGHOME` variable at this point, so `gpg` will use
+its default configuration again.
+
+```txt
+unset GNUPGHOME
+```
+
+## Configure GPG
+
+Finally, you have your keys on the Yubikey and the traces that might have been
+left on your device are wiped clean. Now you should configure `gpg` for regular
+use as well; this is completely optional, however. All this configuration does
+is ensure you have good defaults for the current day and age.
+
+```conf
+auto-key-locate keyserver
+keyserver hkps://hkps.pool.sks-keyservers.net
+keyserver-options no-honor-keyserver-url
+personal-cipher-preferences AES256 AES192 AES CAST5
+personal-digest-preferences SHA512 SHA384 SHA256 SHA224
+default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
+cert-digest-algo SHA512
+s2k-cipher-algo AES256
+s2k-digest-algo SHA512
+charset utf-8
+fixed-list-mode
+no-comments
+no-emit-version
+keyid-format 0xlong
+list-options show-uid-validity
+verify-options show-uid-validity
+with-fingerprint
+use-agent
+require-cross-certification
+```
+
+## Conclusion
+
+You now have PGP keys available on your Yubikey. These keys are only available
+to your system if the Yubikey is inserted, and the user PIN is given. You can
+use these keys for authentication, signing and encrypting/decrypting messages.
+In a future post, I'll detail how to set up a number of services to use these
+keys as well.
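
Should you ever need the backed-up keys again, the steps from the backup section can be replayed in reverse. A sketch, assuming the same device name (`/dev/sdb`) and mount point (`/mnt/usb`) used earlier; since the backup is a copy of the old `$GNUPGHOME`, you can simply point `gpg` at the mounted drive:

```txt
cryptsetup luksOpen /dev/sdb crypt
mount /dev/mapper/crypt /mnt/usb
export GNUPGHOME=/mnt/usb
gpg --list-secret-keys
```

Afterwards, `unset GNUPGHOME`, unmount and `luksClose` the drive again, just like when the backup was created.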
diff --git a/content/posts/2018/2018-09-13-hackerrank-solutions-python3-and-perl6-part-1.md b/content/posts/2018/2018-09-13-hackerrank-solutions-python3-and-perl6-part-1.md
new file mode 100644
index 0000000..7272d51
--- /dev/null
+++ b/content/posts/2018/2018-09-13-hackerrank-solutions-python3-and-perl6-part-1.md
@@ -0,0 +1,443 @@
+---
+title: "Hackerrank Solutions: Python 3 and Perl 6 (part 1)"
+date: 2018-09-13
+tags:
+- Hackerrank
+- Perl6
+- Python
+- Python3
+- Programming
+- Raku
+---
+
+I recently started at a new company, for which I will have to write Python 3
+code. To make sure I still know how to do basic stuff in Python, I started to
+work on some [Hackerrank challenges](https://www.hackerrank.com/). In this
+post, I will share solutions to some challenges to highlight the differences
+between the two languages. I hope that I can show that Perl doesn't have to be
+the "write only" language that many people make it out to be.
+
+{{< admonition title="note" >}}
+I am _much_ more proficient in the Perl 6 programming language than in Python
+(2 or 3), so I might not always use the most optimal solutions in the Python
+variants. Suggestions are welcome via email, though I most likely won't update
+this post with better solutions. I of course also welcome feedback on the Perl
+6 solutions!
+{{< / admonition >}}
+
+## Challenges
+
+The challenges covered in this post are the [warmup
+challenges](https://www.hackerrank.com/domains/algorithms?filters%5Bsubdomains%5D%5B%5D=warmup)
+you are recommended to solve when you make a new account. The code around the
+function I'm expected to solve won't be included, as this should be irrelevant
+(for now). Additionally, I may rename the sub to conform to
+[kebab-case](https://en.wikipedia.org/wiki/Letter_case#Special_case_styles), as
+this is more readable (in my opinion), and allowed in Perl 6.
+
+### Solve Me First
+
+This challenge is just a very simple example to introduce how the site works.
+It required me to make a simple `a + b` function.
+
+```python3
+def solveMeFirst(a,b):
+    return a+b
+```
+
+The Perl 6 variant isn't going to be very different here.
+
+```raku
+sub solve-me-first ($a, $b) {
+    $a + $b
+}
+```
+
+For those not familiar with Perl 6, the `$` in front of the variable names is
+called a [Sigil](https://docs.perl6.org/language/glossary#index-entry-Sigil),
+and it signals that the variable contains only a single value.
+
+You may have noticed that there's also no `return` in the Perl 6 variant of
+this example. In Perl 6, the last statement in a block is also the implicit
+return value (just like in Perl 5 or Ruby).
+
+### Simple Array Sum
+
+For this challenge I had to write a function that would return the sum of a
+list of values. Naturally, I wanted to use a `reduce` function, but Python 3
+does not have one built in (it was moved to the `functools` module). So I wrote
+it with a `for` loop instead.
+
+```python3
+def simpleArraySum(ar):
+    sum = 0
+
+    for i in ar:
+        sum += i
+
+    return sum
+```
+
+Perl 6 does have a `reduce` function, so I would use that to solve the problem
+here.
+
+```raku
+sub simple-array-sum (@ar) {
+    @ar.reduce(sub ($a, $b) { $a + $b })
+}
+```
+
+Here you can see a different sigil for `@ar`. The `@` sigil denotes a list of
+scalars in Perl 6. In most other languages this would simply be an array.
+
+This code can be written even shorter, however. Perl 6 has [reduction
+meta-operators](https://docs.perl6.org/language/operators#index-entry-%5B%2B%5D_%28reduction_metaoperators%29).
+This allows you to put an operator between brackets, like `[+]`, to apply a
+certain operator as a reduce function.
+
+```raku
+sub simple-array-sum (@ar) {
+    [+] @ar
+}
+```
+
+{{< admonition title="note" >}}
+After publishing this post I have learned that Python 3 has a built-in `sum`
+function, and Perl 6 a `.sum` method callable on the array, simplifying the
+code in both languages.
+{{< / admonition >}}
+
+### Compare the Triplets
+
+This challenge provides you with 2 lists of 3 elements each.
The lists should
+be compared to one another, and a "score" should be kept. For each index, if
+the first list contains a larger number, the first list's score must be
+incremented. Similarly, if the second list contains a larger number on that
+index, the second list's score must be incremented. If the values are equal, do
+nothing.
+
+```python3
+def compareTriplets(a, b):
+    scores = [0, 0]
+
+    for i in range(3):
+        if a[i] > b[i]:
+            scores[0] += 1
+
+        if a[i] < b[i]:
+            scores[1] += 1
+
+    return scores
+```
+
+I learned that Python 3 has no `++` operator to increment a value by 1, so I
+had to use `+= 1` instead.
+
+```raku
+sub compare-triplets (@a, @b) {
+    my @scores = [0, 0];
+
+    for ^3 {
+        @scores[0]++ if @a[$_] > @b[$_];
+        @scores[1]++ if @a[$_] < @b[$_];
+    }
+
+    @scores;
+}
+```
+
+In Perl 6, the `^3` notation simply means a range from 0 to 3, non-inclusive,
+so `0`, `1`, `2`, meaning it will loop 3 times. The `$_` is called the
+_topic_, and in a `for` loop it is the current element of the iteration. Note
+the `@scores` as the last statement, so the sub returns the scores rather than
+the result of the `for` loop.
+
+Both of these loops could use a `continue` (or `next` in Perl 6) to skip the
+second `if` in case the first `if` was true, but for readability I chose not
+to.
+
+{{< admonition title="note" >}}
+After publishing this post I learned that Python 3 also supports the inline if
+syntax, just like Perl 6, so I could've used this in Python 3 as well.
+{{< / admonition >}}
+
+### A Very Big Sum
+
+In this challenge, you need to write the function body for `aVeryBigSum`, which
+gets an array of integers, and has to return the sum of this array. Both Python
+3 and Perl 6 handle the large integers transparently for you, so I was able to
+use the same code as I used for the simple array sum challenge.
+
+```python3
+def aVeryBigSum(ar):
+    sum = 0
+
+    for i in ar:
+        sum += i
+
+    return sum
+```
+
+And for Perl 6 using the `[+]` reduce meta-operation.
+
+```raku
+sub a-very-big-sum (@ar) {
+    [+] @ar
+}
+```
+
+### Plus Minus
+
+The next challenge gives a list of numbers, and wants you to return the
+fractions of its elements which are positive, negative or zero. The fractions
+should be rounded down to 6 decimals. I made a counter just like in the
+*Compare the Triplets* challenge, and calculated the fractions and rounded them
+at the end.
+
+```python3
+def plusMinus(arr):
+    counters = [0, 0, 0]
+
+    for i in arr:
+        if (i > 0):
+            counters[0] += 1
+            continue
+
+        if (i < 0):
+            counters[1] += 1
+            continue
+
+        counters[2] += 1
+
+    for i in counters:
+        print("%.6f" % (i / len(arr)))
+```
+
+For the Perl 6 solution, I went for a `given/when`, `map` and the `fmt`
+function to format the fractions.
+
+```raku
+sub plus-minus (@arr) {
+    my @counters = [0, 0, 0];
+
+    for @arr -> $i {
+        given $i {
+            when * > 0 { @counters[0]++ }
+            when * < 0 { @counters[1]++ }
+            default    { @counters[2]++ }
+        }
+    }
+
+    @counters.map({ $_.fmt("%.6f").say });
+}
+```
+
+You may notice a number of statements do not have a terminating `;` at the end.
+In Perl 6, this is not needed if it's the last statement in a block (any code
+surrounded by a `{` and `}`).
+
+The `given/when` construct is similar to a `switch/case` found in other
+languages (but not Python, sadly), but uses the [Smartmatch
+operator](https://docs.perl6.org/language/operators#index-entry-smartmatch_operator)
+implicitly to check if the statements given to `when` are `True`. The `*` is the
+[Whatever operator](https://docs.perl6.org/type/Whatever), which in this case
+will get the value of `$i`.
+
+Lastly, the `$_` in the `map` function is similar to the one inside a `for`
+loop: it's the current element. Since the code given to `map` is inside a
+block, there's no need for a `;` after `say` either.
+
+### Staircase
+
+This challenge gives you an integer 𝓃, and you're tasked with "drawing" a
+staircase that is 𝓃 high, and 𝓃 wide at the base.
The staircase must be made
+using `#` characters, and for the spacing you must use regular spaces.
+
+It seems that in Python, you _must_ specify the `i in` part of the `for i in
+range`. Since I don't really care for the value, I assigned it to `_`.
+
+```python3
+def staircase(n):
+    for i in range(1, n + 1):
+        for _ in range(n - i):
+            print(" ", end="")
+
+        for _ in range(i):
+            print("#", end="")
+
+        print("")
+```
+
+In Perl 6, there's also a `print` function, which is like `say`, but does not
+append a `\n` at the end of the string. The `for` loop in Perl 6 allows for
+just a range to operate as expected. The `..` operator creates a range from the
+left-hand side up to the right hand side, inclusive.
+
+```raku
+sub staircase ($n) {
+    print(" ") for ^($n - $i);
+    print("#") for ^$i;
+    print("\n");
+}
+```
+
+### Mini-Max Sum
+
+Here you will be given 5 integers, and have to calculate the minimum and
+maximum values that can be calculated using only 4 of them.
+
+I sort the array, and iterate over the first 4 values to calculate the sum and
+print it. I then do the same but sort it in reverse for the sum of the 4
+highest values.
+
+```python3
+def miniMaxSum(arr):
+    arr.sort()
+    sum = 0
+
+    for i in range(4):
+        sum += arr[i]
+
+    print(str(sum) + " ", end="")
+
+    arr.sort(reverse=True)
+    sum = 0
+
+    for i in range(4):
+        sum += arr[i]
+
+    print(str(sum))
+```
+
+Perl 6 has immutable lists, so calling `sort` on them will return a new list
+which has been sorted. I can call `reverse` on that list to get the highest
+number at the top instead. `head` allows me to get the first 4 elements in a
+functional way. You've already seen the meta-reduce operator `[+]`, which will
+get me the sum of the 4 elements I got from `head`. I wrap the calculation in
+parentheses so I can call `print` on the result immediately.
+ +```raku +sub mini-maxi-sum (@arr) { + ([+] @arr.sort.head(4)).print; + print(" "); + ([+] @arr.sort.reverse.head(4)).print; +} +``` + +### Birthday Cake Candles + +In this challenge, you're given a list of numbers. You must find the highest +number in the list, and return how often that number occurs in the list. + +It's fairly straightforward, I keep track of the current largest value as +`size`, and a `count` that I reset whenever I find a larger value than I +currently have. + +```python3 +def birthdayCakeCandles(ar): + size = 0 + count = 0 + + for i in ar: + if i > size: + size = i + count = 0 + + if i == size: + count += 1 + + return count +``` + +The Perl 6 variant does not differ in how it solves the problem, apart from +having a very different syntax of course. + +```raku +sub birthday-cake-candles (@ar) { + my ($size, $count) = (0, 0); + + for @ar { + if ($_ > $size) { + $size = $_; + $count = 0; + } + + $count++ if $size == $_; + } + + $count; +} +``` + +{{< admonition title="note" >}} +On IRC, someone showed me a clean solution in Python 3: `return +ar.count(max(ar))`. This feels like a much cleaner solution than what I had +created. +{{< / admonition >}} + +### Time Conversion + +This is the final challenge of this section on Hackerrank, and also this post. +You're given a timestamp in 12-hour AM/PM format, and have to convert it to a +24-hour format. + +I split the AM/PM identifier from the actual time by treating the string as a +list of characters and taking two slices, one of the last two characters, and +one of everything _but_ the last two characters. Then I split the time into +parts, and convert the first part (hours) to integers for calculations. Next I +set the hours to 0 if it's set to 12, and add 12 hours if the timestamp was +post meridiem. Finally, I convert the hours back to a string with leading +zeroes, and join all the parts together to form a timestamp again. 
+ +```python3 +def timeConversion(s): + meridiem = s[-2:] + hours = int(s[:2]) + rest = s[2:-2] + + if (hours > 11): + hours = 0 + + if (meridiem.lower() == "pm"): + hours += 12 + + return ("%02d:%s" % (hours, rest)) +``` + +The Perl 6 solution again doesn't differ much from the Python solution in terms +of the logic it's using to get the result. The biggest difference is that in +Perl 6, strings can't be accessed as lists, so I use the `substr` method to +extract the parts that I want. The first one starts at `*-2`, which means 2 +places before the end. The others get a +[`Range`](https://docs.perl6.org/type/Range) as argument, and will get the +characters that exist in that range. + +```raku +sub time-conversion ($s) { + my $meridiem = $s.substr(*-2); + my $hours = $s.substr(0..2).Int; + my $rest = $s.substr(2..*-2); + + $hours = 0 if $hours > 11; + $hours += 12 if $meridiem.lc eq "pm"; + + sprintf("%02d:%s", $hours, $rest); +} +``` + +The `.Int` method converts the `Str` object into an `Int` object, so we can +perform calculations on it. The `eq` operator checks specifically for [_string +equality_](https://docs.perl6.org/routine/eq). Since Perl 6 is a [gradually +typed programming language](https://en.wikipedia.org/wiki/Gradual_typing), +there's a dedicated operator to ensure that you're checking string equality +correctly. + +## Wrap-up + +These challenges were just the warm-up challenges I was given after creating a +new account and choosing Python as a language to use. I intend to write up more +posts like this, for the near future I'll stick to Python 3 challenges since I +want to get better at that specific language for work. + +This is also the first post in which I have tried this format to show off two +languages side-by-side, and to highlight differences in how you can accomplish +certain (relatively simple) tasks with them. If you have suggestions to improve +this format, do not hesitate to contact me. 
I am always open for feedback,
+preferably via email. You can find my contact details on the [homepage](/).
diff --git a/content/posts/2018/_index.md b/content/posts/2018/_index.md
new file mode 100644
index 0000000..e1bb4e6
--- /dev/null
+++ b/content/posts/2018/_index.md
@@ -0,0 +1,3 @@
+---
+title: 2018
+---
diff --git a/content/posts/2019/2019-02-03-how-to-sign-pgp-keys.md b/content/posts/2019/2019-02-03-how-to-sign-pgp-keys.md
new file mode 100644
index 0000000..d5f401a
--- /dev/null
+++ b/content/posts/2019/2019-02-03-how-to-sign-pgp-keys.md
@@ -0,0 +1,141 @@
+---
+title: How to sign PGP keys
+date: 2019-02-03
+tags:
+- PGP
+- Tutorial
+---
+
+Having attended [FOSDEM](https://fosdem.org/2019/) last weekend, I have been
+asked to help some people out with signing PGP keys. As it is an international
+gathering of users and developers of all levels of expertise, it's a great event
+to get your key out into the wild. While helping people out, I figured it might
+be even easier next time around to just refer to a small tutorial on my blog
+instead.
+
+## Creating a PGP key
+
+The first step to signing keys is to have a PGP key. If you already have one,
+you're good to go to the next part of this tutorial. If you don't, you can check
+out the `gpg` manual on how to create a key, or read about key creation in my
+[article on using PGP with a Yubikey][yubikey-pgp-article]. While I would
+strongly suggest reading at least some material, `gpg` does quite a good job of
+guiding you through the process without prior knowledge, so you can just get
+started with `gpg --generate-key` as well.
+
+[yubikey-pgp-article]: {{ "/post/2018/09/04/setting-up-pgp-with-a-yubikey/#creating-pgp-keys" | prepend: site.baseurl | prepend: site.url }}
+
+## Create key slips
+
+A *key slip* is a small piece of paper containing some basic information about
+the PGP key. They're exchanged when people meet, so they don't have to
+immediately sign the key, but can do it safely at home.
When you're signing in a
+group, this may be faster to work with. Another benefit is that some people
+don't have their private keys with them. They can then just collect the key slips
+from the people whose key they want to sign, and sign it whenever they are in
+possession of their private key again.
+
+A key slip doesn't have to contain much. A key ID, fingerprint, email address and
+a name is plenty. For reference, my key slips look as follows:
+
+```txt
+Patrick Spek <p.spek@tyil.nl> rsa4096/0x7A6AC285E2D98827
+    1660 F6A2 DFA7 5347 322A 4DC0 7A6A C285 E2D9 8827
+```
+
+## Verifying the owner
+
+Before you sign anyone's public key, you should verify that the person is
+actually who they say they are. You can easily do this by asking for government
+issued identification, such as an ID card, driver's license or passport. What
+constitutes good proof is up to you, but in general people expect at least one
+form of government issued identification.
+
+If the person can't verify who they are, you should *not* sign their key!
+
+## Retrieving their key
+
+Once you have verified the person is who they say they are, and you have
+received their key slip containing their key ID, you can look up their key
+online. You can let `gpg` do all the work for you in searching and downloading
+the key, using the `--search` switch. For instance, to retrieve my key, do the
+following:
+
+```txt
+gpg --search-keys 0x7A6AC285E2D98827
+```
+
+If a result has been found, you are prompted to enter the numbers of the keys
+you want to download. Make sure you download the right key, in case multiple
+have been found!
+
+After retrieving the key, you can see it in the list of all the keys `gpg` knows
+about using `gpg --list-keys`.
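
It's also worth comparing the full fingerprint of the downloaded key (shown by `gpg --fingerprint 0x7A6AC285E2D98827`) character for character against the one on the key slip. A small POSIX shell sketch, using my fingerprint from the slip above; spacing and case are stripped, so differently formatted copies still compare equal:

```txt
slip='1660 F6A2 DFA7 5347 322A 4DC0 7A6A C285 E2D9 8827'

# Normalize: drop spaces, uppercase any hex digits
normalized=$(printf '%s' "$slip" | tr -d ' ' | tr 'abcdef' 'ABCDEF')

if [ "$normalized" = '1660F6A2DFA75347322A4DC07A6AC285E2D98827' ]; then
    echo 'fingerprints match'
else
    echo 'MISMATCH: do not sign this key'
fi
```

A fingerprint mismatch means you fetched the wrong key, and should not sign it.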
+
+## Signing their key
+
+To actually sign their key, and show that you trust that the key belongs to the
+person's name attached to it, you can use `gpg --sign-key`:
+
+```txt
+gpg --sign-key 0x7A6AC285E2D98827
+```
+
+You will be prompted whether you are sure you want to sign. You should answer
+this with a single `y` to continue.
+
+After signing it, you'll have signed a PGP key! You can verify this by looking
+at the signatures on a given key with `--list-sigs 0x7A6AC285E2D98827`. This should
+contain your name and key ID.
+
+## Exchanging the signed key
+
+While you could publish the updated public key with your signature on it, you
+should **not** do this! You should encrypt the updated public key and send it to
+the person that owns the private key, and they should upload it themselves. One
+reason for this is that it allows you to safely verify that they do in fact
+actually own the private key as well, without ever asking them explicitly to
+show you their private key.
+
+To export the public key, use `--export`:
+
+```txt
+gpg --armor --export 0x7A6AC285E2D98827 > pubkey-tyil.asc
+```
+
+The `--armor` option is used to export the key as base64, instead of binary
+data.
+
+You can attach this file to an email, and let your email client encrypt the
+entire email and all attachments for the key ID. How you can do this depends on
+your email client, so you should research how to do this properly in the
+documentation for it.
+
+However, it's also possible to encrypt the public key file before adding it as
+an attachment, in case you don't know how to let your email client do it (or if
+you don't trust your email client to do it right).
+
+You can use the `--encrypt` option for this, and add a `--recipient` to encrypt
+it for a specific key.
+
+```txt
+gpg --encrypt --recipient 0x7A6AC285E2D98827 < pubkey-tyil.asc > pubkey-tyil.pgp
+```
+
+Now you can use this encrypted key file and share it with the owner of the key.
+
+If the person you send it to really is the owner of the key, they can use the
+private key to decrypt the file, import it with `gpg --import` and then publish
+it with `gpg --send-keys`.
+
+## Winding down
+
+Once all this is done, other people should have sent you your signed pubkey as
+well, and you should have published your updated key with the new signatures.
+Now you can start using PGP signatures and encryption for your communication
+with the world. People who have not signed your key can see that there are
+other people that do trust your key, and they can use that information to
+deduce that whatever's signed with your key really came from you, and that
+anything they encrypt with your public key can only be read by you.
+
+With this [trust](https://en.wikipedia.org/wiki/Web_of_trust), you can make
+communication and data exchange in general more secure.
diff --git a/content/posts/2019/2019-04-11-perl6-nightly-docker-images.md b/content/posts/2019/2019-04-11-perl6-nightly-docker-images.md
new file mode 100644
index 0000000..61b54f5
--- /dev/null
+++ b/content/posts/2019/2019-04-11-perl6-nightly-docker-images.md
@@ -0,0 +1,124 @@
+---
+title: Perl 6 nightly Docker images
+date: 2019-04-11
+tags:
+- Perl6
+- Docker
+- Raku
+---
+
+Due to the slow release of Rakudo Star (which actually did release a new
+version last month), I had set out to make Docker images for personal use based
+on the regular Perl 6 releases. But, as I discovered some [memory related
+issues](https://github.com/rakudo/rakudo/issues/1501), and [another branch with
+some possible fixes](https://github.com/MoarVM/MoarVM/pull/1072), I changed my
+mind to make them nightlies based on the `master` branches of all related
+projects instead. This way I could get fixes faster, and help testing when
+needed.
+
+These nightlies are now up and running, available on [Docker
+Hub](https://hub.docker.com/r/tyil/perl6) for anyone to use!
You can also find +[the Dockerfiles I'm using on git.tyil.nl](https://git.tyil.nl/docker/perl6), +in case you're interested or have suggestions to further improve the process. + +The timing of the (public) release of these images could have been better, +though. About two weeks ago, other nightlies were released as well, by Tony +O'Dell, as has been noted in the [Perl 6 Weekly +post](https://p6weekly.wordpress.com/2019/03/25/2019-12-cool-truck/). While I +greatly appreciate his efforts, I was not going to just abandon all the work +I've put into my images. Instead I've tried to make smaller images, and provide +different bases than him. Maybe we can eventually learn from each other's images +and improve Docker support for the entire community together. + +The easiest thing to work on was providing different bases. For now, this means +I have images with the following four base images: + +- Alpine +- Debian +- Ubuntu +- Voidlinux + +This way, people can have more options with regards to using the distribution +tooling that they're more comfortable with. One could also opt to use a more +familiar or better supported base image for development and testing out their +module, and use a smaller image for production releases. + +As to the size of the images, Tony's `tonyodell/rakudo-nightly:latest` is about +1.42GB at the time of writing this post. My images range from 43.6MB +(`alpine-latest`) to 165MB (`voidlinux-latest`). Though this is not a +completely fair comparison, as my images have stripped out a lot of the tooling +used (and often required) to build some Perl 6 modules, making them unusable in +their default shape for many projects. + +To remedy this particular issue, I've also created *-dev* images. These images +come with a number of additional packages installed to allow `zef` to do its +work to get dependencies installed without requiring end-users to search for +those packages. This should reduce complexity when using the images for +end-users. 
If we take the dev images into account when comparing sizes, my +images range from 256MB (`alpine-dev-latest`) to 1.27GB +(`voidlinux-dev-latest`). That's much closer to the `rakudo-nightly` image. + +If you're interested in trying these images out, you may be interested in the +way I'm using these images myself as reference. Currently, my [CPAN upload +notifier bot](https://git.tyil.nl/perl6/app-cpan-uploadannouncer-irc) is using +these nightly images in its +[`Dockerfile`](https://git.tyil.nl/perl6/app-cpan-uploadannouncer-irc/src/branch/master/Dockerfile). + +```Dockerfile +FROM tyil/perl6:debian-dev-latest as install + +RUN apt update && apt install -y libssl-dev uuid-dev + +COPY META6.json META6.json + +RUN zef install --deps-only --/test . +``` + +As you can see from the `Dockerfile`, I start out by using a `-dev` image, and +name that stage `install`. I'm still contemplating to include `libssl-dev` into +the `-dev` images, as it seems to pop up a lot, but for now, it's not part of +the `-dev` images, so I install it manually. Same goes for `uuid-dev`. Then I +copy in the `META6.json`, and instruct `zef` to install all the dependencies +required. + +```Dockerfile +FROM tyil/perl6:debian-latest + +ENV PERL6LIB=lib + +WORKDIR /app + +RUN mkdir -p /usr/share/man/man1 +RUN mkdir -p /usr/share/man/man7 +RUN apt update && apt install -y libssl-dev postgresql-client + +COPY bin bin +COPY lib lib +COPY --from=install /usr/local /usr/local + +RUN mkdir -p /var/docker/meta +RUN date "+%FT%TZ" > /var/docker/meta/build-date + +CMD [ "perl6", "bin/bot" ] +``` + +Then I start a new stage. I set the `$PERL6LIB` environment variable so I don't +have to use `-Ilib` at the end, and set a `WORKDIR` to have a clean directory +to work in. Next, I set up the *runtime dependencies* of the application. + +I then continue to copy in the `bin` and `lib` directories, containing the +application itself, and copy over `/usr/local` from the `install` stage. 
+`/usr/local` is where Perl 6 is installed, and `zef` installs all its +dependencies into. This way, the `-dev` image can be used for building all the +dependencies as needed, and only the finished dependencies end up in the final +image that's going to run in production. + +Lastly, I set the build date and time of the image in a file, so the +application can refer to it later on. It is displayed when the IRC bot replies +to a `.bots` command, so I can verify that the running bot is the one I just +built. And finally, the `CMD` instruction runs the application. + +I hope this displays how the images can be used for your applications, and the +reasoning as to why I made them the way they are. If you have any suggestions +or issues, feel free to contact me in whatever way suits you best. You can find +some contact details on the homepage of my blog. diff --git a/content/posts/2019/2019-07-22-the-powerful-tooling-of-gentoo.md b/content/posts/2019/2019-07-22-the-powerful-tooling-of-gentoo.md new file mode 100644 index 0000000..9d8cff2 --- /dev/null +++ b/content/posts/2019/2019-07-22-the-powerful-tooling-of-gentoo.md @@ -0,0 +1,177 @@ +--- +title: "The Power(ful Tooling) of Gentoo" +date: 2019-07-22 +tags: +- Gentoo +--- + +People often ask me for my reasons to use [Gentoo](https://gentoo.org/). Many +perceive it as a "hard" distro that takes a lot of time. While it does come +with a learning curve, I don't perceive it as particularly "hard", as the +documentation is very thorough and the community is very helpful. And the +tooling you get to maintain your system is far beyond what I've come across +with any other GNU+Linux distribution. + +This blog post will highlight some of the key features I love about Gentoo. +There are certainly many more perks that I don't (yet) use, so please feel free +to inform me of other cool things that I missed. 
+
+## Configurability
+
+One of the main reasons for preferring Gentoo is the ease of configuring
+it to work just the way you want.
+
+A great example for this would be with `init` choices. Many distributions only
+support [systemd](https://en.wikipedia.org/wiki/Systemd) these days. As I'm not
+a big fan of this particular system, I want to change this. But even asking a
+question about this will get you a lot of hatred in most distribution
+communities. In Gentoo, however, changing init is supported and well
+documented, allowing you to pick from a range of possible inits.
+
+### `USE` flags
+
+One of the core concepts of Gentoo is the [`USE`
+flags](https://wiki.gentoo.org/wiki/USE_flag). These allow you to easily alter
+the software you're compiling to use the features you want. They can also be
+used to indicate which library you would like to use to make use of a certain
+feature, if there are multiple implementations available.
+
+### `make.conf`
+
+Like most distros that work with self-compiled packages, Gentoo has a
+`make.conf` file available to specify some default arguments to use while
+compiling. Unlike most other distros, Gentoo's `make.conf` also allows for some
+configuration of the `emerge` utility.
+
+For instance, I use my `make.conf` to ensure `emerge` always asks for
+confirmation before performing actions. I also ensure that the build system,
+`portage`, is heavily sandboxed when building packages.
+
+Additionally, like all configuration files in `/etc/portage`, it can be made
+into a directory. In this case, all files in the directory will be loaded in
+alphabetical order. This allows for easier management using tools like
+[Ansible](https://www.ansible.com/).
+
+### Ease of patching
+
+Another feature of Gentoo I find very useful is the ease of applying my own
+patches to software. If you have a custom patch for a package that you want to
+be applied, all you have to do is drop it in a directory in
+`/etc/portage/patches`.
The directory it should be in matches the category and name of the
+package the patch is intended for. For instance, I have the following patch
+in `/etc/portage/patches/www-client/firefox`:
+
+```diff
+diff --git a/browser/extensions/moz.build b/browser/extensions/moz.build
+index 6357998..c5272a2 100644
+--- a/browser/extensions/moz.build
++++ b/browser/extensions/moz.build
+@@ -5,15 +5,10 @@
+ # file, You can obtain one at http://mozilla.org/MPL/2.0/.
+ 
+ DIRS += [
+-    'activity-stream',
+     'aushelper',
+     'followonsearch',
+     'formautofill',
+     'jaws-esr',
+-    'onboarding',
+-    'pdfjs',
+-    'pocket',
+-    'screenshots',
+     'webcompat',
+ ]
+```
+
+Whenever a new Firefox is released and built, this patch will be applied to
+it to remove some of the features I dislike.
+
+## Ebuilds and overlays
+
+In Gentoo vocabulary, `ebuild` files are the files that describe how a package
+is to be built, which `USE` flags it supports, and everything else relating
+to a package. An overlay is a repository of ebuild files. Everyone can make
+their own, and easily add 5 lines in their `repos.conf` to use it. In most
+cases, they're just git repositories.
+
+The documentation on everything around ebuilds is superb, in my experience,
+especially compared to other distros. It is incredibly easy to get started
+with, since it's made to be usable with very little effort. While being
+simple, it's also very flexible: all default behaviours can be overwritten if
+needed to get a package to build.
+
+## Binary packages
+
+Yes, you read that right. [Binary
+packages](https://wiki.gentoo.org/wiki/Binary_package_guide)! Contrary to
+popular belief, Gentoo *does* support this. You can instruct `emerge` to build
+binary packages of all the packages it compiles, which can then be re-used on
+other systems. It does need to be compiled in such a way that the other
+machine can use it, of course. You can't simply exchange the packages of an
+x64 machine with an ARM machine, for instance.
You can set up a [cross build
+environment](https://wiki.gentoo.org/wiki/Cross_build_environment) to get
+that particular use case going, though.
+
+If you want to easily share the binary packages you build with one machine,
+you can set up a
+[binhost](https://wiki.gentoo.org/wiki/Binary_package_guide#Setting_up_a_binary_package_host),
+and have `emerge` pull the binary packages on the other systems as needed
+using `--usepkg`. There actually is a [binhost provided by Gentoo
+itself](http://packages.gentooexperimental.org/), but it seems to only contain
+important packages used to restore systems into a working state.
+
+## Tooling
+
+Some of the core tooling available to any Gentoo user has already been
+covered. But there's some additional tooling you can install to make your
+life even better.
+
+### `genkernel`
+
+One of the hardest tasks for newcomers to Gentoo is often compiling a kernel.
+Of course, Gentoo has an answer for this, `genkernel`. The defaults
+`genkernel` will give you are reasonably sane if you just want to have a
+kernel that works. Of course, you can still edit the kernelconfig before
+compilation starts. It will also build an `initramfs` when requested, to go
+along with the kernel. Once everything has been built, the kernel and
+initramfs will be moved to `/boot`, and a copy of the working kernelconfig is
+saved to `/etc/kernels`. All you need to remember is to update your preferred
+bootloader's configuration to include your new kernel.
+
+### `eix`
+
+[`eix`](https://wiki.gentoo.org/wiki/Eix) is a utility most Gentoo users use
+to update the Portage repositories and search for available packages. The
+interface is considered more convenient, and it's a bit faster at getting
+your results.
+
+To get a quick overview of which packages are in need of updates, you can run
+`eix -uc` (*u*pdates, *c*ompact). To sync the Portage tree and all overlays,
+`eix-sync` is the way to go. This will ensure the cache used by `eix` also
+gets updated.
+
+In addition to having a cleaner interface and being faster, it also comes
+with additional tools for keeping your system sane. The most notable to me is
+`eix-test-obsolete`.
+
+This utility will report any installed packages that are no longer provided
+by any repository (orphaned packages). It will also report all configuration
+lines that affect such packages. This is really valuable in keeping your
+configuration maintainable.
+
+### `glsa-check`
+
+The `glsa-check` utility is part of the `app-portage/gentoolkit` package. When
+run, it will produce a list of all packages which have known vulnerabilities.
+It will use the [GLSA database](https://security.gentoo.org/glsa) for the
+list of known vulnerabilities. This can be much easier than subscribing to a
+mailing list and having to check every mail to see if a vulnerability affects
+you.
+
+### `qlop`
+
+`qlop` is another utility that comes with `app-portage/gentoolkit`. This
+program parses the logs from `emerge` to provide you with some information. I
+use this mostly to see compile times of certain packages using
+`qlop -Htvg <package-name>`. Using this, I can more easily deduce if I want
+my desktop (with a stronger CPU) to compile a certain package, or if it'll be
+faster to just compile it on my laptop.
diff --git a/content/posts/2019/2019-08-10-the-soc-controversy.md b/content/posts/2019/2019-08-10-the-soc-controversy.md
new file mode 100644
index 0000000..f6cf47c
--- /dev/null
+++ b/content/posts/2019/2019-08-10-the-soc-controversy.md
@@ -0,0 +1,98 @@
+---
+title: The SoC Controversy
+date: 2019-08-10
+tags:
+- CodeOfConduct
+- Conference
+- Perl6
+- Raku
+---
+
+{{< admonition title="Disclaimer" >}}
+Please keep in mind that the opinion shared in this blog post is mine and mine
+alone. I do not speak for any other members of the PerlCon organization team.
+Please do not address anyone but me for the positions held in this post.
+{{< / admonition >}}
+
+Those that know me are probably aware that I generally dislike making
+political posts on my personal blog. I'd rather stick to technological
+arguments, as there are fewer problems to be found with regards to personal
+feelings and all that. However, as I'm growing older (and hopefully more
+mature), I find it harder to keep politics out of my life as I interact with
+online communities. This becomes especially true as I plan to assist with
+organizing [PerlCon
+2020](https://wiki.perlcon.eu/doku.php/proposals/2020/amsterdam).
+
+PerlCon 2019 ended yesterday, and I had a lot of fun. I'd like to thank the
+organizer, Andrew Shitov, once more for doing an amazing job. Especially so,
+as he has been harassed for weeks for trying to organize the conference. The
+reason behind the harassment was partly due to his decision to not have an
+SoC, or "Standards of Conduct", for PerlCon 2019.
+
+During his final announcements at the end of the conference, he noted that
+this is still happening, even in person at the conference itself. This toxic
+behavior towards him has made him decide to no longer involve himself in
+organizing a conference for the Perl community. I personally think this is a
+loss for everyone involved in the community, and one that was completely
+avoidable by having humane discussion instead of going for Twitter
+harassment.
+
+For what it's worth, I think Twitter is also the worst possible place on the
+Internet for any reasonable discussion, as it puts a very low limit on the
+number of characters you are allowed to spend on a single post. This makes
+real discussion downright impossible, and seems to always lead to petty
+name-calling. This is one of the reasons why [I'm instead using a Pleroma
+instance](https://soc.fglt.nl/main/public) for my social media presence on
+the Internet.
If anyone is on the Internet with the intent of having interesting
+discussions, I'd highly recommend using some entrance into the Fediverse. The
+instance I'm using is open for sign-ups!
+
+But I digress. The SoC controversy is what made me want to write this blog
+post. I wonder why this even is a controversy. Why do people think it is
+impossible to co-exist without some document describing explicitly what is
+and is not allowed? I would hope that we're all adults, and can respect one
+another as such.
+
+I wonder, was there any particular event at PerlCon 2019 that would've been
+avoided if there *was* an SoC provided? I certainly did not, at any point,
+feel that people were being harmful to one another, but maybe I'm just blind
+to it. If anyone has concrete examples of events that happened during PerlCon
+2019 that an SoC could've prevented, I would be genuinely interested in
+hearing about them. If I am to assist in organizing PerlCon 2020, and I want
+to be able to present a good argument in the SoC discussion, I'll need
+concrete examples of real problems that have occurred.
+
+Of course, I also consider the opposite of this discussion. Can the SoC be
+used to *cause* harm, instead of deterring it? For this, I actually have
+clear evidence, and the answer is a resounding **yes**. The harassment
+brought upon Andrew was originally caused by an event that transpired at The
+Perl Conference in Pittsburgh (2019). A video was removed, and a speaker
+harassed, for dead-naming someone. Until that event, I wasn't even aware of
+the term, but apparently it's grounds for removal of your presentation from
+the conference archives.
+
+A similar event happened with The Perl Conference in Glasgow (2018), where a
+talk was also removed from the archives for a supposedly offensive joke that
+was made. This also sparked a heavy discussion on IRC back then, with people
+from all sides pitching in with their opinions.
+
+From my perspective, the people shouting the loudest in these discussions
+aren't interested in making the world a better place where we can live in
+harmony, but in punishing the offender for their behavior. I don't think we
+should strive towards punishment, but towards understanding, if anything.
+Just being angry and shouting at people (either in real life, or over the
+Internet) isn't going to solve any underlying problem. It is more likely to
+cause more issues in the long run, where people will just be more divided,
+and will want to get continuous revenge upon the other side.
+
+Additionally, I think that the existence of an SoC or similar document is a
+sign towards outsiders that your community can't behave itself maturely. They
+need special rules laid out for them, after all. Like most rules, they are
+codified because issues have arisen in the past, and keep on arising. I don't
+think the Perl community is too immature to behave itself. I trust in the
+good faith of people, and to me it feels like an SoC does the exact opposite.
+
+I hope this blog post does its job of kindly inviting you to share your
+opinions with me, either on [IRC, email or the Fediverse](/#contact). I'd
+gladly start a discussion on the positive and negative effects an SoC has,
+and the problems it solves and creates. I think a civil discussion is in
+order here, to best prepare us for PerlCon 2020.
diff --git a/content/posts/2019/2019-10-17-getting-things-done-with-app-gtd.md b/content/posts/2019/2019-10-17-getting-things-done-with-app-gtd.md
new file mode 100644
index 0000000..0a24e57
--- /dev/null
+++ b/content/posts/2019/2019-10-17-getting-things-done-with-app-gtd.md
@@ -0,0 +1,166 @@
+---
+title: "Getting Things Done with App::GTD"
+date: 2019-10-07
+tags:
+- GettingThingsDone
+- Perl6
+- Raku
+---
+
+A couple of months ago, I attended a workshop at work about "getting things
+done".
There I was told that there exists a concept called "[Getting Things
+Done](https://en.wikipedia.org/wiki/Getting_Things_Done)", or "GTD" for
+short, to help you, well, get things done. A number of web-based tools were
+introduced to assist us with following the rules laid out in the concept.
+
+## The problem
+
+The tools that were introduced did their job, and looked reasonably shiny.
+However, most required a constant Internet connection. I like my tools to be
+available offline, and optionally synced together. I did find one local
+application and a couple of cloud-synced applications, so this problem
+could've been resolved. However, my other problem with all these programs
+was that they're all proprietary. Those who've read more of my blog may have
+realized by now that I strongly prefer free software whenever possible.
+
+Being unable to find any free software programs to fulfill my needs, I took a
+look at the features I would need, and tried to adapt other programs to fit
+those particular needs. I quickly learned that it's inconvenient at best to
+try and mold generic task-keeping programs into the specifics of GTD. But it
+did give me a reasonable idea of what features I needed for basic usage. It
+occurred to me that it shouldn't be terribly hard to just write something of
+my own. So I did.
+
+## The solution, `App::GTD`
+
+Introducing [`App::GTD`](https://gitlab.com/tyil/raku-app-gtd), a brand new
+project written in the [Raku programming language](https://raku.org/). While
+still in its early phases, it seems to be usable on a day-to-day basis for me
+and another colleague. At its core, it's just another to-do list, but the
+commands it gives you incorporate the concepts of GTD. There's an inbox that
+you fill up through the day, a list of next items to work on, and projects to
+structure larger tasks in.
+
+{{< admonition title="note" >}}
+The Raku programming language used to be called the Perl 6 programming
+language.
They function the same, but the name was changed for various reasons
+I will not get into here.
+{{< / admonition >}}
+
+This program can be installed using `zef`, though I'm planning an `ebuild`
+for Gentoo (and derivatives) too. Once installed, you can use `gtd` from
+your shell. Doing so without arguments will show the usage information. The
+most important sub-commands will be `gtd add`, `gtd next` and `gtd done`.
+Most of these commands require an `id` argument. The IDs required are
+displayed in front of the items when listing them with commands like `inbox`
+or `next`.
+
+## Daily life with `gtd`
+
+Once you have `gtd` installed, you don't *need* to do any configuration, as
+the defaults should work fine for most people. This means you can start
+using it immediately if you want to try it out!
+
+The most common invocation will be with the `add` sub-command. Whenever
+something pops up that needs doing, you add it to your inbox using it.
+
+```txt
+gtd add Buy eggs
+gtd add "update cpan-raku's help command"
+```
+
+These items go to your inbox, and don't need to be long, so long as *you*
+understand what you meant by them. You can see that you also don't need to
+use quotes around the item you want to add. All arguments after `add` will
+be joined together as a string again, but some shells may perform their
+magic on certain things. This is why I quoted the second call, but not the
+first.
+
+Everything you write down like this needs to be sorted out at some point. I
+do this every day in the morning, before I get to my regular tasks at work.
+To get started, I want to see an overview of my inbox, for which the `inbox`
+sub-command is intended. Running it will give you a list of all the items in
+your inbox, including their ID and the date they were added.
+
+```txt
+$ gtd inbox
+[1] Buy eggs (2019-10-17)
+[2] update cpan-raku's help command (2019-10-17)
+```
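
The data behind a listing like this is just a set of dated entries kept in JSON files on disk. As a rough illustration only (Python, with an invented schema — this is not App::GTD's actual Raku code or its real file format), such a store needs very little:

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical single-file store; App::GTD's real JSON layout may differ.
STORE = Path("inbox.json")

def load() -> list:
    """Read all inbox items, or start with an empty list."""
    return json.loads(STORE.read_text()) if STORE.exists() else []

def add(description: str) -> None:
    """Append a new dated item, numbering it like the listing above."""
    items = load()
    items.append({
        "id": len(items) + 1,
        "description": description,
        "added": date.today().isoformat(),
    })
    STORE.write_text(json.dumps(items, indent=2))

def inbox() -> list:
    """Render all items in the '[id] description (date)' style."""
    return [f"[{i['id']}] {i['description']} ({i['added']})" for i in load()]
```

Keeping the storage this simple is what makes swapping in other back-ends (a topic that comes up again under the future plans) relatively painless.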
These are called "next items", and the sub-command is named +`next`. Called without arguments it will give you an overview of your next +items, but when given an ID it will move an inbox item to your list of next +items. You can optionally also specify a new name for the item, to be more +clear about what needs doing. + +```txt +$ gtd next +You're all out of Next actions! + +$ gtd next 1 +"Buy eggs" has been added as a Next item. + +$ gtd next 2 "Add usage and repo info to cpan-raku, whenever it's messaged with 'help'" +"Add usage and repo info to cpan-raku, whenever it's messaged with 'help'" has +been added as a Next item. +``` + +You can now see that your inbox is empty when using `inbox`, and see a list of +the next items you created with `next`. + +```txt +$ gtd inbox +Your inbox is empty! + +$ gtd next +[1] Buy eggs (2019-10-17) +[2] Add usage and repo info to cpan-raku, whenever it's messaged with 'help' (2019-10-17) +``` + +Now all that's left is to do all the things you've created next items for. When +done, you can remove the entry from your next items using `done`. This command +also works on items in your inbox, so small tasks that require no next item(s) +can be marked as done immediately. + +```txt +$ gtd done 1 +"Buy eggs" has been removed from your list. + +$ gtd done 2 +"Add usage and repo info to cpan-raku, whenever it's messaged with 'help'" has +been removed from your list. + +$ gtd next +You're all out of Next actions! +``` + +## Future plans + +The basics are here, but there are some things I'd very much like to add. First +and foremost, I want to be have a context to add to items, and a single context +the program operates in. This way, I can more clearly separate work and +personal tasks, which now just share one global context. + +Additionally, I've read about a new YouTube tutorial about using `ncurses` in +Raku, which I hope can guide me through making an `ncurses` application for +this as well. 
Perhaps I can find time to make a `GTK` application out of it as
+well.
+
+I've already mentioned wanting to create a Gentoo `ebuild` for the
+application, but this will require packaging all the module dependencies as
+well. This comes with a number of hurdles that I'm trying to iron out before
+starting on this endeavor. If you are on Gentoo (or a derivative) and want
+to assist in any way, please contact me.
+
+Another thing I've taken into account when structuring the application is
+the possibility for other data back-ends. `gtd` is currently storing its
+information in `JSON` files in a filesystem directory, which comes with
+various drawbacks. It may be beneficial to also support databases such as
+SQLite or PostgreSQL. However, this is not a high priority for me right now,
+as it would slow down the speed at which I can make improvements to the
+general program.
+
+I hope that `App::GTD` can help others to get things done as well. The
+program is far from finished, but it should be usable for people besides me
+and my colleague by now. If you have any suggestions or questions about the
+program, do not hesitate to seek me out!
diff --git a/content/posts/2019/_index.md b/content/posts/2019/_index.md
new file mode 100644
index 0000000..b69640f
--- /dev/null
+++ b/content/posts/2019/_index.md
@@ -0,0 +1,3 @@
+---
+title: 2019
+---
diff --git a/content/posts/2020/2020-01-08-running-cgit-on-gentoo.md b/content/posts/2020/2020-01-08-running-cgit-on-gentoo.md
new file mode 100644
index 0000000..085da26
--- /dev/null
+++ b/content/posts/2020/2020-01-08-running-cgit-on-gentoo.md
@@ -0,0 +1,301 @@
+---
+title: Running cgit on Gentoo
+date: 2020-01-08
+tags:
+- git
+- cgit
+- Gentoo
+---
+
+[cgit](https://git.zx2c4.com/cgit/about/), a web interface for git
+repositories, allows you to easily share your projects' source code over a
+web interface. It's running on my desktop right now, so you can [see for
+yourself](https://home.tyil.nl/git) what it looks like.
On
+[Gentoo](https://www.gentoo.org/), the ebuild for this software can be found
+as `www-apps/cgit`. However, after installation, a number of configuration
+steps must be performed to make it accessible on `$HOSTNAME/git`, and to
+index your repositories. This post will guide you through the steps I took.
+
+## Filesystem layout
+
+In my setup, my (bare) git repositories reside in `$HOME/.local/git`.
+However, some of the repositories, such as the
+[`pass`](https://www.passwordstore.org/) store, should not be public. So,
+cgit is pointed at a different directory, `$HOME/.local/srv/cgit`. This
+directory contains symlinks to the actual repositories I want publicly
+available.
+
+## Installing the required software
+
+For this to work, there is more than just cgit to install. There are a number
+of ways to set this up, but I chose Nginx as the web server, and `uwsgi` as
+the handler for the fastcgi requests.
+
+```txt
+emerge dev-python/pygments www-apps/cgit www-servers/nginx www-servers/uwsgi
+```
+
+## Configuring all elements
+
+After installation, each of these packages needs to be configured.
+
+### cgit
+
+The configuration file for cgit resides in `/etc/cgitrc`. After removing all
+the comments, the contents of my `/etc/cgitrc` can be found below.
+
+```txt
+# Fixes for running cgit in a subdirectory
+css=/git/cgit.css
+logo=/git/cgit.png
+virtual-root=/git
+remove-suffix=1
+
+# Customization
+root-desc=All public repos from tyil
+enable-index-owner=0
+cache-size=1000
+snapshots=tar.gz tar.bz2
+clone-prefix=https://home.tyil.nl/git
+robots=index, follow
+
+readme=master:README.md
+readme=master:README.pod6
+
+# Add filters before repos (or filters won't work)
+about-filter=/usr/lib64/cgit/filters/about-formatting.sh
+source-filter=/usr/lib64/cgit/filters/syntax-highlighting.py
+
+# Scan paths for repos
+scan-path=/home/tyil/.local/srv/cgit
+```
+
+You should probably update the values of `root-desc`, `clone-prefix` and
+`scan-path`.
The first describes the small line of text at the top of the web +interface. `clone-prefix` is the prefix URL used for `git clone` URLs. The +`scan-path` is the directory `cgit` will look for repositories in. + +Additionally, the `readme=master:README.pod6` only positively affects +your setup if you also use my [Raku](https://raku.org/) customizations, +outlined in the next section. + +For more information on the available settings and their impact, consult `man +cgitrc`. + +#### Raku customizations + +Since I love working with Raku, I made some changes and a couple modules to get +`README.pod6` files rendered on the *about* tab on projects. You should ensure +the `cgit` user can run `raku` and has the +[`Pod::To::Anything`](https://home.tyil.nl/git/raku/Pod::To::Anything/) and +[`Pod::To::HTML::Section`](https://home.tyil.nl/git/raku/Pod::To::HTML::Section/) +modules installed (including any dependencies). How to achieve this depends on +how you installed Raku. Feel free to send me an email if you need help on this +part! + +Once this works, however, the remaining step is quite simple. The +`about-filter` configuration item in `/etc/cgitrc` points to a small shell +script that invokes the required program to convert a document to HTML. In my +case, this file is at `/usr/lib64/cgit/filters/about-formatting.sh`. Open up +this file in your favorite `$EDITOR` and add another entry to the `case` for +[Pod6](https://docs.raku.org/language/pod) to call Raku. + +```sh +case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in + *.markdown|*.mdown|*.md|*.mkd) exec ./md2html; ;; + *.pod6) exec raku --doc=HTML::Section; ;; + *.rst) exec ./rst2html; ;; + *.[1-9]) exec ./man2html; ;; + *.htm|*.html) exec cat; ;; + *.txt|*) exec ./txt2html; ;; +esac +``` + +#### Highlighting style + +The `syntax-highlighting.py` filter carries the responsibility to get your code +highlighted. This uses the Python library [pygments](https://pygments.org/), +which comes with a number of styles. 
cgit uses *Pastie* by default. To change +this, open the Python script, and look for the `HtmlFormatter`, which contains +a `style='Pastie'` bit. You can change `Pastie` for any other style name. These +styles are available in my version (2.4.2): + +- default +- emacs +- friendly +- colorful +- autumn +- murphy +- manni +- monokai +- perldoc +- pastie +- borland +- trac +- native +- fruity +- bw +- vim +- vs +- tango +- rrt +- xcode +- igor +- paraiso-light +- paraiso-dark +- lovelace +- algol +- algol_nu +- arduino +- rainbow_dash +- abap +- solarized-dark +- solarized-light +- sas +- stata +- stata-light +- stata-dark + +For those interested, I use the `emacs` theme. + +### uwsgi + +Next up, `uwsgi`. This needs configuration, which on Gentoo exists in +`/etc/conf.d/uwsgi`. However, this file itself shouldn't be altered. Instead, +make a copy of it, and call it `/etc/conf.d/uwsgi.cgit`. The standard file +exists solely as a base template. For brevity, I left out all the comments in +the contents below. + +```sh +UWSGI_SOCKET= +UWSGI_THREADS=0 +UWSGI_PROGRAM= +UWSGI_XML_CONFIG= +UWSGI_PROCESSES=1 +UWSGI_LOG_FILE= +UWSGI_CHROOT= +UWSGI_DIR=/home/tyil +UWSGI_PIDPATH_MODE=0750 +UWSGI_USER= +UWSGI_GROUP= +UWSGI_EMPEROR_PATH= +UWSGI_EMPEROR_PIDPATH_MODE=0770 +UWSGI_EMPEROR_GROUP= +UWSGI_EXTRA_OPTIONS="--ini /etc/uwsgi.d/cgit.ini" +``` + +That covers the service configuration file. When things don't work the way you +expect, specify a path in `UWSGI_LOG_FILE` to see its logs. Additionally, you +may want to alter the value of `UWSGI_DIR`. This specifies the working +directory from which the process starts. + +Now comes the application configuration, which will be read from +`/etc/uwsgi.d/cgit.ini`, according to `UWSGI_EXTRA_OPTIONS`. Create that file +with the following contents. 
+
+```ini
+[uwsgi]
+master = true
+plugins = cgi
+socket = 127.0.0.1:1234
+uid = cgit
+gid = cgit
+procname-master = uwsgi cgit
+processes = 1
+threads = 2
+cgi = /usr/share/webapps/cgit/1.2.1/hostroot/cgi-bin/cgit.cgi
+```
+
+Note that the `cgi` value contains the version number of `www-apps/cgit`. You
+may need to come back after an upgrade and update it accordingly.
+
+As a last step of the `uwsgi` configuration, we need a service script, to
+manage it with `rc-service`. These scripts all exist in `/etc/init.d`, and
+the package installed a script called `uwsgi` in there. Just like with the
+`conf.d` variant, it's just a template. This time, however, don't make a
+copy of it, but a symlink. It does not need to be edited, but the name must
+be the same as the `conf.d` entry name. That would be `uwsgi.cgit`.
+
+```txt
+cd /etc/init.d
+ln -s uwsgi uwsgi.cgit
+```
+
+Now you can start the service with `rc-service uwsgi.cgit start`. If a
+subsequent `status` notes the state as *Started*, you're all good. If the
+state says *Crashed*, you should go back and double-check all configuration
+files. When those are correct and you can't figure out why, feel free to
+reach out to me via email.
+
+```txt
+rc-service uwsgi.cgit start
+rc-service uwsgi.cgit status
+
+# Start this after boot
+rc-update add uwsgi.cgit
+```
+
+### nginx
+
+The final element to make it all accessible: the web server, `nginx`. How
+you organize the configuration files here is largely up to you. Explaining
+how to set up nginx from scratch is beyond the scope of this post. Assuming
+you know how to configure this, add the following `location` blocks to the
+`server` definition for the vhost you want to make `cgit` available on.
+
+```nginx
+location "/git" {
+    alias /usr/share/webapps/cgit/1.2.1/htdocs;
+    try_files $uri @cgit;
+}
+
+location @cgit {
+    include uwsgi_params;
+
+    gzip off;
+
+    uwsgi_modifier1 9;
+    uwsgi_pass 127.0.0.1:1234;
+
+    fastcgi_split_path_info ^(/git/?)(.+)$;
+    uwsgi_param PATH_INFO $fastcgi_path_info;
+}
+```
+
+Once saved, you can reload `nginx`, and the `$HOSTNAME/git` endpoint can be
+reached. It should display a cgit page, stating that there are no
+repositories. That can be easily solved by making some available in
+`$HOME/.local/srv/cgit`, through the power of symlinks.
+
+## Symlinking the repositories
+
+Go nuts with making symlinks to the various repositories you have gathered
+over the years. You don't need to use bare repositories, `cgit` will also
+handle regular repositories that you actively work in. As with the `nginx`
+configuration, explaining how to make symlinks is out of scope. In dire
+situations, consult `man ln`.
+
+### `git-mkbare`
+
+While making the symlinks is easy, I found it to be terribly boring to do. I
+go to `$HOME/.local/git`, make a directory, `cd` to it, and create a bare
+repository. Then off to `$HOME/.local/srv/cgit` to make a symlink back to
+the newly created bare repository. I think you can see this will get tedious
+very quickly.
+
+So, to combat this, I made a small shell script to do all of that for me. I
+called it `git-mkbare`, and put it somewhere in my `$PATH`. This allows me to
+call it as `git mkbare repo-name`. It will ask for a small description as
+well, so that can also be skipped as a manual task. This script may be of
+use to you if you want to more quickly start a new project.
+
+You can find this script [in my dotfiles
+repository](https://git.tyil.nl/dotfiles/tree/.local/bin/git-mkbare).
+
+## Wrapping up
+
+Now you should have cgit available from your site, allowing you to share the
+sources of all your projects easily with the world. No need to make use of a
+(proprietary) third-party service!
+ +If you have questions or comments on my setup, or the post in general, please +contact me through email or irc. diff --git a/content/posts/2020/2020-05-30-setting-up-pgp-wkd.md b/content/posts/2020/2020-05-30-setting-up-pgp-wkd.md new file mode 100644 index 0000000..26b6e44 --- /dev/null +++ b/content/posts/2020/2020-05-30-setting-up-pgp-wkd.md @@ -0,0 +1,106 @@ +--- +date: 2020-05-30 +title: Setting Up a PGP Webkey Directory +tags: +- PGP +- GPG +- WKD +- Security +aliases: +- /post/2020/05/30/setting-up-pgp-wkd/ +--- + +A little while ago, a friend on IRC asked me how I set up a PGP webkey +directory on my website. For those that don't know, a webkey directory is a +method to find keys through `gpg`'s `--locate-key` command. This allows people +to find my key using this command: + +```txt +gpg --locate-key p.spek@tyil.nl +``` + +This is a very user-friendly way for people to get your key, as compared to +using long IDs. + +This post will walk you through setting it up on your site, so you can make +your key more easily accessible to other people. + +## Set up the infrastructure + +For a webkey directory to work, you simply need to have your key available at a +certain path on your website. The base path for this is +`.well-known/openpgpkey/`. + +```sh +mkdir -p .well-known/openpgpkey +``` + +The webkey protocol will check for a `policy` file to exist, so you must create +this too. The file can be completely empty, and that's exactly how I have it. + +```sh +touch .well-known/openpgpkey/policy +``` + +The key(s) will be placed in the `hu` directory, so create this one too. + +```sh +mkdir .well-known/openpgpkey/hu +``` + +## Adding your PGP key + +The key itself is just a standard export of your key, without ASCII armouring. +However, the key does need to have its file **name** in a specific format. +Luckily, you can just show this format with `gpg`'s `--with-wkd-hash` option. 
+
+```sh
+gpg --with-wkd-hash -k p.spek@tyil.nl
+```
+
+This will yield output that may look something like this:
+
+```txt
+pub   rsa4096/0x7A6AC285E2D98827 2018-09-04 [SC]
+      Key fingerprint = 1660 F6A2 DFA7 5347 322A 4DC0 7A6A C285 E2D9 8827
+uid   [ultimate] Patrick Spek <p.spek@tyil.nl>
+      i4fxxwcfae1o4d7wnb5bop89yfx399yf@tyil.nl
+sub   rsa2048/0x031D65902E840821 2018-09-04 [S]
+sub   rsa2048/0x556812D46DABE60E 2018-09-04 [E]
+sub   rsa2048/0x66CFE18D6D588BBF 2018-09-04 [A]
+```
+
+What we're interested in is the `uid` line with the hash in the local-part of
+the email address, which would be `i4fxxwcfae1o4d7wnb5bop89yfx399yf@tyil.nl`.
+For the filename, we only care about the local-part itself, meaning the
+export of the key must be saved in a file called
+`i4fxxwcfae1o4d7wnb5bop89yfx399yf`.
+
+```sh
+gpg --export 0x7A6AC285E2D98827 > .well-known/openpgpkey/hu/i4fxxwcfae1o4d7wnb5bop89yfx399yf
+```
+
+## Configuring your webserver
+
+Lastly, your webserver may require some configuration to serve the files
+correctly. For my blog, I'm using [`lighttpd`](https://www.lighttpd.net/),
+for which the configuration block I'm using is as follows.
+
+```lighttpd
+$HTTP["url"] =~ "^/.well-known/openpgpkey" {
+    setenv.add-response-header = (
+        "Access-Control-Allow-Origin" => "*",
+    )
+}
+```
+
+It may be worthwhile to note that if you do any redirection on your domain,
+such as adding `www.` in front of it, the key lookup may fail. The error
+message given by `gpg` on WKD lookup failures is... poor, to say the least,
+so if anything goes wrong, try some verbose `curl` commands and ensure that
+the key is accessible at the right path in a single HTTP request.
+
+## Wrapping up
+
+That's all there is to it! Adding this to your site should be relatively
+straightforward, and it may be a huge convenience to anyone looking for your
+key. If you have any questions or feedback, feel free to reach out to me!
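
As an aside, the hashed local-part can also be computed without `gpg`: WKD derives it by SHA-1 hashing the lowercased local-part of the address and encoding the digest with z-base-32. A rough Python sketch of that scheme (my own illustration, not part of any `gpg` tooling):

```python
import hashlib

# z-base-32 alphabet, as used by the Web Key Directory specification
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h768"

def zbase32(data: bytes) -> str:
    """Encode bytes as z-base-32: 5 bits per character, MSB first."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(ZB32[int(bits[i:i + 5].ljust(5, "0"), 2)]
                   for i in range(0, len(bits), 5))

def wkd_hash(local_part: str) -> str:
    """SHA-1 the lowercased local-part, then z-base-32 encode the digest."""
    return zbase32(hashlib.sha1(local_part.lower().encode()).digest())
```

A 20-byte SHA-1 digest always encodes to a 32-character name, which matches the length of the filename used above.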
diff --git a/content/posts/2020/2020-06-21-lately-in-raku.md b/content/posts/2020/2020-06-21-lately-in-raku.md
new file mode 100644
index 0000000..3d54bdc
--- /dev/null
+++ b/content/posts/2020/2020-06-21-lately-in-raku.md
@@ -0,0 +1,155 @@
+---
+title: Lately in Raku
+date: 2020-06-21
+tags:
+- Raku
+---
+
+I've been working on some Raku projects, but each of them is *just* too small
+to make an individual blog post about. So, I decided to just pack them together
+in a slightly larger blog post instead.
+
+## Binary Rakudo Star builds for GNU+Linux and FreeBSD
+
+A friend on IRC asked if it was possible to get Rakudo Star precompiled for
+ARM, since compiling it on his machine took forever. I took a look around for
+potential build services, and settled on [Sourcehut](https://builds.sr.ht/).
+
+I added build instructions for amd64 FreeBSD, GNU+Linux, musl+Linux, and ARM
+GNU+Linux. Tarballs with precompiled binaries get built whenever I push to the
+Rakudo Star mirror on Sourcehut, and are uploaded to
+[dist.tyil.nl/tmp](https://dist.tyil.nl/tmp/). Currently, these are not
+considered to be an official part of Rakudo Star, but if interest increases and
+more people can test these packages, I can include them in official releases.
+
+## `IRC::Client` plugins
+
+IRC bots are great fun, and the
+[`IRC::Client`](https://github.com/raku-community-modules/perl6-IRC-Client)
+module allows for easy extension through *plugins*. For my own IRC bot,
+[musashi](https://git.sr.ht/~tyil/raku-local-musashi), I've created two new
+plugins, which are now available in the Raku ecosystem for anyone to use.
+
+### `IRC::Client::Plugin::Dicerolls`
+
+The first plugin I've created can do dice rolls, D&D style. You can roll any
+number of dice, with any number of sides, and add (or subtract) bonuses from
+these.
+
+```txt
+<@tyil> .roll 1d20
+<+musashi> 1d20 = 1
+<@tyil> .roll 5d20
+<+musashi> 5d20 = 3 + 19 + 8 + 6 + 11 = 47
+<@tyil> .roll 1d8+2d6+10
+<+musashi> 1d8+2d6+10 = 4 + 6 + 4 + 10 = 24
+```
+
+Since this is ripe for abuse, the plugin allows you to set limits, and sets
+some defaults for the limits as well. This should help prevent your bot from
+getting killed for spam.
+
+### `IRC::Client::Plugin::Reminders`
+
+Everyone forgets things, and there are various tools to help people remember
+them. For IRC-based situations, I created a reminder plugin for
+`IRC::Client`.
+
+```txt
+10:19 <@tyil> musashi: remind me to write a blog post in 10 minutes
+10:19 <+musashi> Reminding you to write a blog post on 2020-06-21T08:29:00Z (UTC)
+10:29 <+musashi> tyil: Reminder to write a blog post
+```
+
+It's not very sophisticated yet, working only with numbers and certain
+identifiers (minutes, hours, days, weeks), but I may add more useful
+identifiers later on such as "tomorrow", or "next Sunday". Contributions for
+such extended functionality are obviously also very welcome!
+
+There's [a small
+issue](https://git.sr.ht/~tyil/raku-irc-client-plugin-reminders/tree/master/lib/IRC/Client/Plugin/Reminders.rakumod#L69)
+with logging in a `start` block. It seems the dynamic variable `$*LOG` is no
+longer defined within it. If anyone has an idea why, and how I could fix this,
+please let me know!
+
+## Template program for D&D
+
+Another little utility I made for D&D purposes. My DM asked me how hard it'd be
+to create a program to fill out a number of templates he made, so he could use
+them in the game with another party. He was able to hand me a list of variables
+in the form of a CSV, so I set out to use that. With some help from `Text::CSV`
+and `Template::Mustache`, I had a working solution in a couple of minutes, with
+all the required things nicely fit into a single file.
+ +I had not used `$=pod` before in Raku, and I'm quite happy with how easy it is +to use, though I would like a cleaner way to refer to a Pod block by name. + +```raku +#!/usr/bin/env raku + +use v6.d; + +use Template::Mustache; +use Text::CSV; + +#| Read a CSV input file to render contracts with. +sub MAIN () { + # Set the directory to write the contracts to. + my $output-dir = $*PROGRAM.parent(2).add('out'); + + # Make sure the output directory exists + $output-dir.mkdir; + + # Load the template + my $template = $=pod + .grep({ $_.^can('name') && $_.name eq 'template' }) + .first + .contents + .map(*.contents) + .join("\n\n") + ; + + # Parse STDIN as CSV + my @records = Text::CSV + .new + .getline_all($*IN) + .skip + ; + + # Create a contract out of each record + for @records -> @record { + $output-dir.add("contract-{@record[0]}.txt").spurt( + Template::Mustache.render($template, { + contractor => @record[2], + date => @record[1], + description => @record[6], + item => @record[3], + location => @record[5], + price => @record[4] + }) ~ "\n" + ); + } +} + +=begin template +As per our verbal agreement this contract will detail the rewards, rights, and +obligations of both parties involved. + +The contractor, to be known henceforth as {{ contractor }}. +The contractee, to be known henceforth as the Association. + +{{ contractor }} requests the delivery of an object identified as the "{{ item }}" +to be delivered by the Association at the location specified for the agreed +upon compensation. The Association shall deliver the object within two weeks of +the signing of this contract and receive compensation upon delivery. + +The location is to be known as "{{ location }}", described as "{{ description }}". +The compensation agreed upon is {{ price }} pieces of Hondia standard +gold-coin currency, or an equivalent in precious gemstones. + +Written and signed on the {{ date }}. 
+ +For the association, Lan Torrez +For the {{ contractor }} +=end template +``` diff --git a/content/posts/2020/2020-07-15-config-3.0.md b/content/posts/2020/2020-07-15-config-3.0.md new file mode 100644 index 0000000..2b77dae --- /dev/null +++ b/content/posts/2020/2020-07-15-config-3.0.md @@ -0,0 +1,176 @@ +--- +title: Config 3.0 +date: 2020-07-15 +tags: +- Raku +- Programming +--- + +For those who don't know, the +[`Config`](https://modules.raku.org/dist/Config:cpan:TYIL) module for the Raku +programming language is a generic class to hold... well... configuration data. +It supports +[`Config::Parser`](https://modules.raku.org/search/?q=Config%3A%3AParser) +modules to handle different configuration file formats, such as `JSON`, `YAML` +and `TOML`. + +Up until now, the module didn't do much for you other than provide an interface +that's generally the same, so you won't need to learn differing methods to +handle differing configuration file formats. It was my first Raku module, and +as such, the code wasn't the cleanest. I've written many new modules since +then, and learned about a good number of (hopefully better) practices. + +For version 3.0, I specifically wanted to remove effort from using the `Config` +module on the developer's end. It should check default locations for +configuration files, so I don't have to rewrite that code in every other module +all the time. Additionally, configuration using environment variables is quite +popular in the current day and age, especially for Dockerized applications. So, +I set out to make an automated way to read those too. + +## The Old Way + +First, let's take a look at how it used to work. Generally, I'd create the +default configuration structure and values first. + +```raku +use Config; + +my $config = Config.new.read({ + foo => "bar", + alpha => { + beta => "gamma", + }, + version => 3, +}); +``` + +And after that, check for potential configuration file locations, and read any +that exist. 
+ +```raku +$config.read($*HOME.add('config/project.toml').absolute); +``` + +The `.absolute` call was necessary because I wrote the initial `Config` version +with the `.read` method not supporting `IO::Path` objects. A fix for this has +existed for a while, but wasn't released, so couldn't be relied on outside of +my general development machines. + +If you wanted to add additional environment variable lookups, you'd have to +check for those as well, and perhaps also cast them as well, since environment +variables are all strings by default. + +## Version 3.0 + +So, how does the new version improve this? For starters, the `.new` method of +`Config` now takes a `Hash` as positional argument, in order to create the +structure, and optionally types *or* default values of your configuration +object. + +```raku +use Config; + +my $config = Config.new({ + foo => Str, + alpha => { + beta => "gamma", + }, + version => 3, +}, :name<project>); +``` + +{{< admonition title="note" >}} +`foo` has been made into the `Str` *type object*, rather than a `Str` *value*. +This was technically allowed in previous `Config` versions, but it comes with +actual significance in 3.0. +{{< / admonition >}} + +Using `.new` instead of `.read` is a minor syntactic change, which saves 1 word +per program. This isn't quite that big of a deal. However, the optional `name` +argument will enable the new automagic features. The name you give to `.new` is +arbitrary, but will be used to deduce which directories to check, and which +environment variables to read. + +### Automatic Configuration File Handling + +By setting `name` to the value `project`, `Config` will consult the +configuration directories from the [XDG Base Directory +Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html). 
+It uses one of my other modules,
+[`IO::Path::XDG`](https://modules.raku.org/dist/IO::Path::XDG:cpan:TYIL), for
+this, together with
+[`IO::Glob`](https://modules.raku.org/dist/IO::Glob:cpan:HANENKAMP).
+Specifically, it will check the `$XDG_CONFIG_DIRS` and `$XDG_CONFIG_HOME` (in
+that order) for any files that match the globs `project.*` or
+`project/config.*`.
+
+If any files are found to match, they will be read as well, and the
+configuration values contained therein merged into `$config`. It will load the
+appropriate `Config::Parser` implementation based on the file's extension. I
+intend to add a number of these to future Rakudo Star releases, to ensure most
+default configuration file formats are supported out of the box.
+
+### Automatic Environment Variable Handling
+
+After this step, it will try out some environment variables for configuration
+values. Which variables are checked depends on the structure (and `name`) of
+the `Config` object. The entire structure is squashed into a 1-dimensional list
+of fields. Each level is replaced by an `_`. Additionally, each variable name
+is prefixed with the `name`. Lastly, all the variable names are uppercased.
+
+For the example `Config` given above, this would result in the following
+environment variables being checked.
+
+```sh
+$PROJECT_FOO
+$PROJECT_ALPHA_BETA
+$PROJECT_VERSION
+```
+
+If any are found, they're also cast to the appropriate type. Thus,
+`$PROJECT_FOO` would be cast to a `Str`, and so would `$PROJECT_ALPHA_BETA`. In
+this case that doesn't do much, since they're already strings. But
+`$PROJECT_VERSION` would be cast to an `Int`, since its default value is also
+of the `Int` type. This should ensure that your variables are always in the
+type you expected them to be originally, no matter the user's configuration
+choices.
+
+## Debugging
+
+In addition to these new features, `Config` now also makes use of my
+[`Log`](https://modules.raku.org/dist/Log:cpan:TYIL) module.
This module is
+built around the idea that logging should be simple for module developers to
+use, while the way logs are represented is up to the end-user. When running an
+application in your local terminal, you may want more human-friendly logs,
+whereas in production you may want `JSON` formatted logs to make it fit better
+into other tools.
+
+You can tune the amount of logging performed using the `$RAKU_LOG_LEVEL`
+environment variable, as per the `Log` module's interface. When set to `7` (for
+"debug"), it will print the configuration files that are being merged into your
+`Config` and which environment variables are being used as well.
+
+{{< admonition title="note" >}}
+A downside is that the application using `Config` for its configuration must
+also support `Log` to actually make the new logging work. Luckily, this is
+quite easy to set up, and there's example code for this in `Log`'s README.
+{{< / admonition >}}
+
+## Too Fancy For Me
+
+It could very well be that you don't want these features, and you want to stick
+to the old ways as much as possible. No tricks, just plain and simple
+configuration handling. This can be done by simply omitting the `name`
+argument to `.new`. The new features depend on this name being set, and won't
+do anything without it.
+
+Alternatively, both the automatic configuration file handling and the
+environment variable handling can be turned off individually using `:!from-xdg`
+and `:!from-env` arguments respectively.
+
+## In Conclusion
+
+The new `Config` module should result in cleaner code in modules using it, and
+more convenience for the developer. If you find any bugs or have other ideas
+for improving the module, feel free to send an email to
+`https://lists.sr.ht/~tyil/raku-devel`.
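To make the environment variable mapping concrete, here's what configuring the example `Config` from this post could look like from a shell (the values are made up for illustration):

```sh
# The Config name "project" plus each nested key, joined with "_" and
# uppercased:
#   foo        -> PROJECT_FOO
#   alpha.beta -> PROJECT_ALPHA_BETA
#   version    -> PROJECT_VERSION
export PROJECT_FOO='bar'
export PROJECT_ALPHA_BETA='delta'
export PROJECT_VERSION='42'  # cast back to Int, as the default value is an Int

env | grep '^PROJECT_' | sort
```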
diff --git a/content/posts/2020/2020-07-19-freebsd-mailserver-part-6-system-updates.md b/content/posts/2020/2020-07-19-freebsd-mailserver-part-6-system-updates.md
new file mode 100644
index 0000000..f3d1e89
--- /dev/null
+++ b/content/posts/2020/2020-07-19-freebsd-mailserver-part-6-system-updates.md
@@ -0,0 +1,341 @@
+---
+title: "FreeBSD Email Server - Part 6: System Updates"
+date: 2020-07-19
+tags:
+- Email
+- FreeBSD
+- Tutorial
+social:
+  email: mailto:~tyil/public-inbox@lists.sr.ht&subject=FreeBSD Email Server
+---
+
+Four years have passed, and my FreeBSD email server has kept on running without
+any problems. However, some people on IRC have recently been nagging me to
+support TLSv1.3 on my mailserver. Since the installation was done 4 years ago,
+it didn't do 1.3 yet, just 1.2. I set out to do a relatively simple system
+update, which didn't go as smoothly as I had hoped. This tutorial post should
+help you avoid the mistakes I made, so your updates *will* go smoothly.
+
+{{< admonition title="info" >}}
+The rest of this tutorial assumes you're running as the `root` user.
+{{< / admonition >}}
+
+## Preparations
+
+Before we do anything wild, let's do the obvious first step: backups. Since
+this is a FreeBSD server, it uses glorious
+[ZFS](https://en.wikipedia.org/wiki/ZFS) as the filesystem, which allows us to
+make use of
+[snapshots](https://docs.oracle.com/cd/E23824_01/html/821-1448/gbciq.html).
+Which subvolumes to make snapshots of depends on your particular setup. In my
+case, my actual email data is stored on `zroot/srv`, and all the services and
+their configurations are in `zroot/usr/local`. My database's data is stored on
+`zroot/postgres/data96`. Additionally, I want to make a snapshot of
+`zroot/usr/ports`.
+ +```txt +zfs snapshot -r zroot/srv@`date +%Y%m%d%H%M%S`-11.0-final +zfs snapshot -r zroot/usr/local@`date +%Y%m%d%H%M%S`-11.0-final +zfs snapshot -r zroot/postgres@`date +%Y%m%d%H%M%S`-11.0-final +zfs snapshot -r zroot/usr/ports@`date +%Y%m%d%H%M%S`-11.0-final +``` + +This will make a snapshot of each of these locations, for easy restoration in +case any problems arise. You can list all your snapshots with `zfs list -t +snapshot`. + +Your server is most likely hosted at a provider, not in your home. This means +you won't be able to just physically access it and retrieve the harddrive if +things go really bad. You might not be able to boot single-user mode either. +Because of this, you might not be able to restore the snapshots if things go +*really* bad. In this case, you should also make a local copy of the important +data. + +The services and their configuration can be recreated, just follow the earlier +parts of this series again. The email data, however, cannot. This is the data +in `/srv/mail`. You can make a local copy of all this data using `rsync`. + +```txt +rsync -av example.org:/srv/mail/ ~/mail-backup +``` + +There's one more thing to do, which I learned the hard way. Set your login +shell to a simple one, provided by the base system. The obvious choice is +`/bin/sh`, but some people may wrongly prefer `/bin/tcsh` as well. During a +major version update, the ABI changes, which will temporarily break most of +the user-installed packages, including your shell. + +```txt +chsh +``` + +{{< admonition title="warning" >}} +Be sure to change the shell for whatever user you're using to SSH into this +machine too, if any! +{{< / admonition >}} + +## Updating the Base System + +With the preparations in place in case things get royally screwed up, the +actual updates can begin. FreeBSD has a dedicated program to handle updating +the base system, `freebsd-update`. First off, fetch any updates, and make sure +all the updates for your current version are applied. 
+ +```txt +freebsd-update fetch install +``` + +Afterwards, set the new system version you want to update to. In my case, this +is `12.1-RELEASE`, but if you're reading this in the future, you most certainly +want a newer version. + +```txt +freebsd-update -r 12.1-RELEASE upgrade +``` + +This command will ask you to review the changes and confirm them as well. It +should generally be fine, but this is your last chance to make any backups or +perform other actions to secure your data! If you're ready to continue, install +the updates to the machine. + +```txt +freebsd-update install +``` + +At this point, your kernel has been updated. Next you must reboot to start +using the new kernel. + +```txt +reboot +``` + +Once the system is back online, you can continue installing the rest of the +updates. + +```txt +freebsd-update install +``` + +When this command finishes, the base system has been updated and should be +ready for use. Next up is updating all the software you installed manually. + +## Updating User-Installed Packages + +Unlike GNU+Linux distributions, FreeBSD has a clear distinction between the +*base system* and *user installed software*. The base system has now been +updated, but everything installed through `pkg` or ports is still at the old +version. If you performed a major version upgrade (say, FreeBSD 11.x to 12.x), +the ABI has changed and few, if any, of the user-installed packages still work. + +### Binary Packages using `pkg` + +Binary packages are the most common packages used. These are the packages +installed through `pkg`. Currently, `pkg` itself doesn't even work. Luckily, +FreeBSD has `pkg-static`, which is a statically compiled version of `pkg` +intended to fix this very problem. Let's fix up `pkg` itself first. + +```txt +pkg-static install -f pkg +``` + +That will make `pkg` itself work again. Now you can use `pkg` to update package +information, and upgrade all packages to a version that works under this +FreeBSD version. 
+
+```txt
+pkg update
+pkg upgrade
+```
+
+#### PostgreSQL
+
+A particular package that was installed through `pkg`, PostgreSQL, just got
+updated to the latest version. On FreeBSD, the data directory used by
+PostgreSQL is dependent on the version you're running. If you try to list
+databases now, you'll notice that the `mail` database used throughout the
+tutorial is gone. The data directory is still there, so you *could* downgrade
+PostgreSQL again, restart the database, run a `pg_dump`, upgrade, restart and
+import. However, I find it much cleaner to use FreeBSD jails to solve this
+issue.
+
+{{< admonition title="info" >}}
+My original installation used PostgreSQL 9.6, so you may need to update some
+version numbers accordingly!
+{{< / admonition >}}
+
+I generally put my jails in a ZFS subvolume, so let's create one of those
+first.
+
+```txt
+zfs create -o mountpoint=/usr/jails zroot/jails
+zfs create zroot/jails/postgres96
+```
+
+This will create a new subvolume at `/usr/jails/postgres96`. Using
+`bsdinstall`, a clean FreeBSD installation usable by the jail can be set up
+here. This command will give you some popups you may remember from installing
+FreeBSD initially. This time, you can uncheck *all* boxes, to get the most
+minimal system.
+
+```txt
+bsdinstall jail /usr/jails/postgres96
+```
+
+When `bsdinstall` finishes, you can configure the jail. This is done in
+`/etc/jail.conf`. If this file doesn't exist, you can create it. Make sure the
+following configuration block is written to the file.
+ +```cfg +postgres96 { + # Init information + exec.start = "/bin/sh /etc/rc"; + exec.stop = "/bin/sh /etc/rc.shutdown"; + exec.clean; + + # Set the root path of the jail + path = "/usr/jails/$name"; + + # Mount /dev + mount.devfs; + + # Set network information + host.hostname = $name; + ip4.addr = "lo0|127.1.1.1/32"; + ip6.addr = "lo0|fd00:1:1:1::1/64"; + + # Required for PostgreSQL to function + allow.raw_sockets; + allow.sysvipc; +} +``` + +Now you can start up the jail, so it can be used. + +```txt +service jail onestart postgres96 +``` + +Using the host system's `pkg`, you can install PostgreSQL into the jail. + +```txt +pkg -c /usr/jails/postgres96 install postgresql96-server +``` + +Now you just need to make the data directory available to the jail, which you +can most easily do using +[`nullfs`](https://www.freebsd.org/cgi/man.cgi?query=nullfs&sektion=&n=1). + +```txt +mount -t nullfs /var/db/postgres/data96 /usr/jails/postgres96/var/db/postgres/data96 +``` + +Now everything should be ready for use inside the jail. Let's head on in using +`jexec`. + +```txt +jexec postgres96 +``` + +Once inside the jail, you can start the PostgreSQL service, and dump the `mail` +database. + +```txt +service postgresql onestart +su - postgres +pg_dump mail > ~/mail.sql +``` + +This will write the dump to `/usr/jails/postgres96/var/db/postgres/mail.sql` on +the host system. You can leave the jail and close it down again. + +```txt +exit +exit +service jail onestop postgres96 +``` + +This dump can be imported in your updated PostgreSQL on the host system. +Connect to the database first. + +```txt +su - postgres +psql +``` + +Then, recreate the user, database and import the data from the dump. + +```sql +CREATE USER postfix WITH PASSWORD 'incredibly-secret!'; +CREATE DATABASE mail WITH OWNER postfix; +\c mail +\i /usr/jails/postgres96/var/db/postgres/mail.sql +\q +``` + +The `mail` database is now back, and ready for use! 
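Before moving on, two small follow-ups may save you a headache: confirm the import actually worked, and release the `nullfs` mount created earlier, so the jail can be cleaned up later without touching the old data directory.

```txt
su - postgres
psql -d mail -c '\dt'
exit
umount /usr/jails/postgres96/var/db/postgres/data96
```

The `\dt` listing should show the same tables you had before the upgrade.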
+ +### Packages from Ports + +With all the binary packages out of the way, it's time to update packages from +ports. While it is very possible to just go to each port's directory and +manually update each one individually, I opted to use `portupgrade`. This will +need manual installation, but afterwards, we can rely on `portupgrade` to do +the rest. Before doing anything with the ports collection, it should be +updated, which is done using `portsnap`. + +```txt +portsnap fetch extract +``` + +Once this is done, you can go to the `portupgrade` directory and install it. + +```txt +cd /usr/ports/ports-mgmt/portupgrade +make install clean +``` + +Now, to upgrade all other ports. + +```txt +portupgrade -a +``` + +Be sure to double-check the compilation options that you are prompted about! If +you're missing a certain option, you may miss an important feature that is +required for your mailserver to work appropriately. This can be easily fixed by +recompiling, but a few seconds checking now can save you an hour figuring it +out later! + +## Tidying Up + +Now that all user-installed software has been updated too, it's time to +finalize the update by running `freebsd-update` for a final time. + +```txt +freebsd-update install +``` + +You can return to your favourite shell again. + +```txt +chsh +``` + +And you can clean up the ports directories to get some wasted space back. + +```txt +portsclean -C +``` + +I would suggest making a new snapshot as well, now that you're on a relatively +clean and stable state. + +```txt +zfs snapshot -r zroot/srv@`date +%Y%m%d%H%M%S`-12.1-clean +zfs snapshot -r zroot/usr/local@`date +%Y%m%d%H%M%S`-12.1-clean +zfs snapshot -r zroot/postgres@`date +%Y%m%d%H%M%S`-12.1-clean +zfs snapshot -r zroot/usr/ports@`date +%Y%m%d%H%M%S`-12.1-clean +``` + +And that concludes your system update. Your mailserver is ready to be neglected +for years again! 
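And should a *future* update go sideways despite all this, the snapshots taken during the preparations are your way back. Restoring is a single command per subvolume. Note that a rollback discards everything written after the snapshot was taken, and the snapshot name below is only an example (find yours with `zfs list -t snapshot`):

```txt
zfs rollback zroot/usr/local@20200719120000-11.0-final
```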
diff --git a/content/posts/2020/2020-12-14-raku-modules-in-gentoo-portage.md b/content/posts/2020/2020-12-14-raku-modules-in-gentoo-portage.md new file mode 100644 index 0000000..7bb2e55 --- /dev/null +++ b/content/posts/2020/2020-12-14-raku-modules-in-gentoo-portage.md @@ -0,0 +1,114 @@ +--- +title: Raku Modules in Gentoo Portage +date: 2020-12-14 +tags: +- Raku +- Gentoo +social: + email: mailto:~tyil/public-inbox@lists.sr.ht&subject=Raku modules in Gentoo's Portage +--- + +The last couple of days I've been taking another look at getting modules for +the Raku programming language into Gentoo's Portage tree. Making new packages +available in Gentoo is incredibly easy with their overlay system. + +The more complex part was Raku's side of things. While most languages just have +a certain directory to drop files in, Raku *should* use a +`CompUnit::Repository` object, which exposes the `.install` method. This is +obviously slower than just copying the files around, but there are merits to +this method. For one, it allows installation of the same module with different +versions, or from different authors. It also handles all the Unicode complexity +for you. + +{{< admonition title="note" >}} +There *is* a +[CompUnit::Repository::FileSystem](https://docs.raku.org/type/CompUnit::Repository::FileSystem) +which would allow me to just copy over files to the right directory, however, I +quite like the ability to have multiple versions of the same module installed. +This is actually something Portage is designed with, too! +{{< / admonition >}} + +After an email to the Raku users mailing list, I got some pointers over IRC. I +let these sink in for a couple days, considering how to approach the problem +properly. Then, one night, a solution came to mind, and I set out to test it. + +*It actually worked*. And a similar method should be usable for other +distributions too, such as Debian, OpenSUSE or Archlinux, to create packages +out of Raku modules. 
This should greatly improve the ability to ship Raku
+programs to end-users, without requiring them to learn how Raku's ecosystem is
+modeled, or which module manager it uses.
+
+The most important part of this approach is the
+[`module-installer.raku`](https://git.sr.ht/~tyil/raku-overlay/tree/7494c81524ec1845c77dabfbb3303a34eb4b89f4/item/dev-lang/raku/files/module-installer.raku)
+program, which is part of `dev-lang/raku::raku-overlay`. It accepts a path to
+the module to install. It does not depend on any one module manager, so it can
+be used to bootstrap a user-friendly module manager (such as
+[`zef`](https://github.com/ugexe/zef/)) for the user.
+
+```raku
+#| Install a Raku module.
+sub MAIN (
+    #| The path to the Raku module sources.
+    IO() $path,
+
+    #| The repository to install it in. Options are "site" (meant for
+    #| user-installed modules), "vendor" (meant for distributions that want
+    #| to include more modules) and "core" (meant for modules distributed
+    #| along with Raku itself).
+    Str:D :$repo = 'site',
+
+    #| Force installation of the module.
+    Bool:D :$force = True,
+) {
+    CATCH {
+        default { $_.say; exit 1; }
+    }
+
+    die "This script should be used by Portage only!" unless %*ENV<D>;
+
+    my $prefix = %*ENV<D>.IO.add('usr/share/perl6').add($repo);
+    my $repository = CompUnit::Repository::Installation.new(:$prefix);
+    my $meta-file = $path.add('META6.json');
+    my $dist = Distribution::Path.new($path, :$meta-file);
+
+    $repository.install($dist, :$force);
+}
+```
+
+It's a fairly straightforward program. It checks for `$D` to be set in the
+environment, which is a variable Portage sets as the destination directory to
+install new files in. This directory gets merged into the root filesystem to
+finalize installation of any package.
+
+If `$D` is set, I append the path used by Raku in Gentoo to it, followed by a
+repo name. Next I create a `CompUnit::Repository` using this path.
This is a +trick to get the files to appear in the right directory for Portage, to +eventually merge them in the system-wide `site` module repo used by Raku. +Additionally, I can use the `CompUnit::Repository`'s `install` method to handle +all the Raku specific parts that I don't want to handle myself. + +This leaves one last issue. By creating this new repo, I also get a couple +files that already exist in the system wide `site` repo. Portage will complain +about possible file collisions and refuse to install the package if these +remain. However, this can be solved rather easily by calling `rm` on these files. + +```txt +rm -- "${D}/usr/share/perl6/site/version" +rm -- "${D}/usr/share/perl6/site/repo.lock" +rm -- "${D}/usr/share/perl6/site/precomp/.lock" +``` + +And with this, my test module, `IO::Path::XDG`, installs cleanly through the +power of Portage, and is usable by all users using the system-wide Raku +installation. + +To make this work for other distributions, the `module-installer.raku` program +should be modified slightly. Most notably, the `$prefix` must be altered to +point towards the right directory, so the files will be installed into whatever +directory will end up being packaged. Other than that, the standard means of +packaging can be followed. + +For the Gentoo users, this overlay is available at +[SourceHut](https://git.sr.ht/~tyil/raku-overlay). It currently holds only +`IO::Path::XDG` (`dev-raku/io-path-xdg`), but you are invited to try it out and +report any issues you may encounter. 
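For completeness, here is roughly what trying it out looks like on the Gentoo side, assuming `app-eselect/eselect-repository` is installed (these are the stock Gentoo tools, not part of the overlay itself):

```txt
eselect repository add raku-overlay git https://git.sr.ht/~tyil/raku-overlay
emaint sync -r raku-overlay
emerge --ask dev-raku/io-path-xdg
```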
diff --git a/content/posts/2020/2020-12-15-merging-json-in-postgresql.md b/content/posts/2020/2020-12-15-merging-json-in-postgresql.md
new file mode 100644
index 0000000..8d97e50
--- /dev/null
+++ b/content/posts/2020/2020-12-15-merging-json-in-postgresql.md
@@ -0,0 +1,50 @@
+---
+title: Merging JSON in PostgreSQL
+date: 2020-12-15
+tags:
+- JSON
+- PostgreSQL
+- Programming
+social:
+  email: mailto:~tyil/public-inbox@lists.sr.ht&subject=Merging JSON objects in PostgreSQL
+---
+
+At my `$day-job` we have a lot of `jsonb` in our database. From time to time, I
+have to manually run a query to fix something in there. This week was one of
+those times.
+
+While you can pretty much do everything you need with regards to JSON editing
+with `jsonb_set`, I thought it might be nice if I were able to *merge* a given
+JSON object into an existing object. This might be cleaner in some situations,
+but mostly it is fun to figure it out. And who doesn’t like spending time with
+`plpgsql`?
+
+The way I wanted to have it work is like this:
+
+```sql
+UPDATE "user" SET properties = jsonb_merge(properties, '{"notifications": {"new_case": false, "new_document": true}}');
+```
+
+And this is the eventual function I produced to do it:
+
+```sql
+CREATE OR REPLACE FUNCTION jsonb_merge(original jsonb, delta jsonb) RETURNS jsonb AS $$
+DECLARE
+    result jsonb;
+BEGIN
+    SELECT
+        json_object_agg(
+            COALESCE(original_key, delta_key),
+            CASE
+                WHEN original_value IS NULL THEN delta_value
+                WHEN delta_value IS NULL THEN original_value
+                WHEN (jsonb_typeof(original_value) <> 'object' OR jsonb_typeof(delta_value) <> 'object') THEN delta_value
+                ELSE jsonb_merge(original_value, delta_value)
+            END
+        )
+    INTO result
+    FROM jsonb_each(original) e1(original_key, original_value)
+    FULL JOIN jsonb_each(delta) e2(delta_key, delta_value) ON original_key = delta_key;
+    RETURN result;
+END;
+$$ LANGUAGE plpgsql;
+```
diff --git a/content/posts/2020/_index.md b/content/posts/2020/_index.md
new file mode 100644
index 0000000..8dad6eb
--- /dev/null
+++ b/content/posts/2020/_index.md
@@ -0,0 +1,3 @@
+---
+title: 2020
+---
diff --git a/content/posts/2021/2021-05-13-a-new-irc-client.md b/content/posts/2021/2021-05-13-a-new-irc-client.md
new file mode 100644
index 0000000..7003cb5
--- /dev/null
+++ b/content/posts/2021/2021-05-13-a-new-irc-client.md
@@ -0,0 +1,85 @@
+---
+date: 2021-05-13
+title: A New IRC::Client
+tags:
+- Raku
+- IRC
+- Programming
+social:
+  email: mailto:~tyil/public-inbox@lists.sr.ht&subject=A New IRC::Client
+---
+
+The Raku programming language has a popular module for creating IRC bots,
+[`IRC::Client`](https://github.com/raku-community-modules/IRC-Client). However,
+it's been stale for quite a while, and one of the bots I host,
+[Geth](https://github.com/Raku/geth), has been having trouble on a regular
+basis.
+
+I've looked at the source code, and found a lot of neat tricks, but when
+maintaining a library, I generally want clean and straightforward code instead.
+To that end, I decided to just write my own from scratch. Given that [the IRC
+protocol is rather simple](https://tools.ietf.org/html/rfc2812), this was the
+easiest way to go about it.
+
+Sure enough, after a couple of hours of playing around, I had something that
+worked reasonably well. A few more hours a day afterwards brought me to an
+`IRC::Client` that is usable in mostly the same way as the current
+`IRC::Client`, to save me effort in getting my current bots to make use of it
+for a few test runs.
+
+Geth was my main target, as I wanted it to stop getting timed out so often. For
+the past week, Geth has been running stably, without any timeouts, so I think
+I've succeeded in the main goal.
+
+However, how to continue next? Since my version is mostly, but not
+*completely*, compatible, if I were to adopt `IRC::Client` from the Raku
+ecosystem and push my version, many people's IRC bots would break when they
+update their dependencies.
There is a solution for this built into the entire +ecosystem, which is using the correct `:ver` and `:auth` (and in some cases, +`:api`) so you can ensure your project is always getting the "correct" +dependency. However, from personal experience, I know these additional +dependency restrictions are rarely used in practice. + +I hope that with this blog post, I can inform the community in time about the +changes that are coming to `IRC::Client`, so people have ample time to set +their dependencies just right to keep their projects working. Of course, the +real solution for the long term would be to make the small changes required to +use the latest `IRC::Client` again. + +For convenience's sake, I've added a small number of methods for backwards +compatibility as well, though these will generate [a deprecation +warning](https://docs.raku.org/routine/is%20DEPRECATED), and will be removed in +a later `IRC::Client` release. + +There are two major changes that are not backwards compatible, however. The first +one is the removal of the ability to have a single `IRC::Client` connect to +multiple servers. This is also the easiest to remedy, by simply creating +multiple instances of `IRC::Client`. + +The second major incompatible change is how events are dispatched to plugins. +This used to be handled by going through all the plugins sequentially, allowing +one plugin to stop the dispatch to another plugin. In my new version, events +are dispatched to all plugins in parallel. This allows for faster execution, +and for multiple plugins to handle an event without having to use `$*NEXT` +(which has been removed). My main motivation is that a buggy plugin will no +longer interfere with the interactions provided by other plugins. The ordering +of your plugins will also stop being an issue to worry about.
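
The dispatch change is easy to picture with a small sketch. This is not `IRC::Client`'s actual code, just an illustration in Python with made-up plugins, showing why parallel dispatch isolates a buggy plugin instead of letting it break the whole chain:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical plugins: `buggy` raises, the other two behave.
def greeter(event):
    return f"greeter saw {event}"

def buggy(event):
    raise RuntimeError("plugin crashed")

def logger(event):
    return f"logger saw {event}"

plugins = [greeter, buggy, logger]

def dispatch_sequential(event):
    # Old-style dispatch: one failing plugin stops the chain,
    # so `logger` never sees the event.
    return [plugin(event) for plugin in plugins]

def dispatch_parallel(event):
    # New-style dispatch: every plugin gets the event independently,
    # and a failure is contained to the plugin that caused it.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(plugin, event) for plugin in plugins]
    results = []
    for future in futures:
        try:
            results.append(future.result())
        except Exception as exc:
            results.append(f"ignored: {exc}")
    return results

print(dispatch_parallel("PRIVMSG"))
# greeter and logger both handled the event, despite `buggy` failing
```

With sequential dispatch, the first exception aborts everything after it; the parallel version still collects a result (or a contained failure) from every plugin.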
+ +Geth's updates to actually use my updated `IRC::Client` module were introduced +in +[`edc6b08`](https://github.com/Raku/geth/commit/edc6b08036c8ede08afdea39fd1768655a2766aa), +and most of it was updates to the way it handled logging. The actual changes +needed to make Geth play nice were: + +- [Adding the `IRC::Client::Plugin` role to `Geth::Plugin::Info`](https://github.com/Raku/geth/commit/edc6b08036c8ede08afdea39fd1768655a2766aa#diff-104b5cdb61f2aa423eb941ce32a4412b4cb814014728cae968e5aeff7dc587d2R14); +- [And to `Geth::Plugin::Uptime`](https://github.com/Raku/geth/commit/edc6b08036c8ede08afdea39fd1768655a2766aa#diff-104b5cdb61f2aa423eb941ce32a4412b4cb814014728cae968e5aeff7dc587d2R24); +- [Using `nicks` instead of `nick`](https://github.com/Raku/geth/commit/edc6b08036c8ede08afdea39fd1768655a2766aa#diff-104b5cdb61f2aa423eb941ce32a4412b4cb814014728cae968e5aeff7dc587d2R34); +- [Using `start` instead of `run`](https://github.com/Raku/geth/commit/edc6b08036c8ede08afdea39fd1768655a2766aa#diff-104b5cdb61f2aa423eb941ce32a4412b4cb814014728cae968e5aeff7dc587d2R46); +- [Using `privmsg` instead of `send`](https://github.com/Raku/geth/commit/edc6b08036c8ede08afdea39fd1768655a2766aa#diff-c388af832470886cdd4304aeb17c4c2f406ac184d33afb956b0ef8a92b69855bR57). + +The last two changes aren't strictly necessary, as there are backwards +compatibility methods made for these, but it's a rather small change and +reduces the amount of noise in the logs. + +With this, I hope everyone using `IRC::Client` is prepared for the coming +changes. If you have any comments or questions, do not hesitate to reach out to +me and share your thoughts!
diff --git a/content/posts/2021/2021-05-22-raku-on-libera-chat.md b/content/posts/2021/2021-05-22-raku-on-libera-chat.md new file mode 100644 index 0000000..87ce7ff --- /dev/null +++ b/content/posts/2021/2021-05-22-raku-on-libera-chat.md @@ -0,0 +1,35 @@ +--- +date: 2021-05-22 +title: Raku is moving to Libera.chat +tags: +- Raku +- LiberaChat +- IRC +social: + email: mailto:~tyil/public-inbox@lists.sr.ht&subject=Raku is moving to Libera.chat +--- + +Earlier this week, the staff at the Freenode IRC network have resigned en +masse, and created a new network, Libera. This was sparked by [new ownership of +Freenode](https://kline.sh/). Due to concerns with the new ownership, the Raku +Steering Council has decided to also migrate our IRC channels to Libera. + +This requires us to take a couple of steps. First and foremost, we need to +register the Raku project, to allow us a claim to the `#raku` and related +channels. Approval for this happened within 24 hours, and as such, we can +continue on the more noticeable steps. + +The IRC bots we're using for various tasks will be moved next, and the Raku +documentation has to be updated to refer to Libera instead of Freenode. The +coming week we'll be working on that, together with the people who provide +those bots. + +Once this is done, the last step involves the Matrix bridge. Libera and +Matrix.org staff are working on this, but there's no definite timeline +available just yet. This may mean that Matrix users will temporarily not be +able to join the discussions happening at Libera. We will keep an eye on the +progress of this, and set up the bridge as soon as it has been made available. + +If you have any questions regarding the migration, feel free to reach out to us +via email (`rsc@raku.org`) or on IRC (`#raku-steering-council` on +irc.libera.chat).
diff --git a/content/posts/2021/2021-06-04-managing-docker-compose-projects-with-openrc.md b/content/posts/2021/2021-06-04-managing-docker-compose-projects-with-openrc.md new file mode 100644 index 0000000..c182654 --- /dev/null +++ b/content/posts/2021/2021-06-04-managing-docker-compose-projects-with-openrc.md @@ -0,0 +1,194 @@ +--- +date: 2021-06-04 +title: Managing Docker Compose with OpenRC +tags: +- Gentoo +- OpenRC +- Docker +- DockerCompose +--- + +On one of my machines, I host a couple services using `docker-compose`. I +wanted to start/restart/stop these using the default init/service manager used +on the machine, `openrc`. This would allow them to start/stop automatically +with Docker (which coincides with the machine powering on or off, +respectively). + +I've set this up through a single `docker-compose` meta-service. To add new +`docker-compose` projects to be managed, all I need to do for `openrc` +configuration is create a symlink and configure the path to the +`docker-compose.yaml` file. + +The meta-service lives at `/etc/init.d/docker-compose`, just like all other +services managed by `openrc`. This file is quite straightforward. To start off, +a number of variables are set and exported.
+ +```sh +name="$RC_SVCNAME" +description="OpenRC script for managing the $name docker-compose project" + +# Set default values +DOCKER_COMPOSE="${DOCKER_COMPOSE:-docker-compose} $DOCKER_COMPOSE_ARGS" + +COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-$name}" + +# Export all variables used by docker-compose CLI +export COMPOSE_PROJECT_NAME +export COMPOSE_FILE +export COMPOSE_PROFILES +export COMPOSE_API_VERSION +export DOCKER_HOST +export DOCKER_TLS_VERIFY +export DOCKER_CERT_PATH +export COMPOSE_HTTP_TIMEOUT +export COMPOSE_TLS_VERSION +export COMPOSE_CONVERT_WINDOWS_PATHS +export COMPOSE_PATH_SEPARATOR +export COMPOSE_FORCE_WINDOWS_HOST +export COMPOSE_IGNORE_ORPHANS +export COMPOSE_PARALLEL_LIMIT +export COMPOSE_INTERACTIVE_NO_CLI +export COMPOSE_DOCKER_CLI_BUILD +``` + +One of the services I use is also configured with its own `external` network. I +want it to be created if it doesn't exist, to ensure that the service can start +up properly. I do *not* want it to be removed, so I left that out. + +```sh +# Set up (external) networks +for name in "${DOCKER_NETWORKS[@]}" +do + # Create the network if needed + if ! docker network ls | awk '{ print $2 }' | grep -q "$name" + then + einfo "Creating docker network '$name'" + docker network create "$name" > /dev/null + fi + + # Expose some variables for the networks + network_id="DOCKER_NETWORK_${name}_ID" + + declare -gx DOCKER_NETWORK_${name}_ID="$(docker network ls | awk '$2 == "'"$name"'" { print $1 }')" + declare -gx DOCKER_NETWORK_${name}_GATEWAY="$(docker network inspect "${!network_id}" | jq -r '.[0].IPAM.Config[0].Gateway')" + + unset network_id +done +``` + +And lastly, there's the four simple functions to declare dependencies, +configure how to `start` or `stop`, and how to get the `status` of the service. 
+ +```sh +depend() { +    need docker +} + +start() { +    $DOCKER_COMPOSE --project-directory "$COMPOSE_PROJECT_DIRECTORY" up -d +} + +status() { +    $DOCKER_COMPOSE --project-directory "$COMPOSE_PROJECT_DIRECTORY" ps +} + +stop() { +    $DOCKER_COMPOSE --project-directory "$COMPOSE_PROJECT_DIRECTORY" down +} +``` + +Now, to actually create a service file to manage a `docker-compose` project, a +symlink must be made. I'll take my +[`botamusique`](https://github.com/azlux/botamusique) service as an example. + +``` +ln -s /etc/init.d/docker-compose /etc/init.d/botamusique +``` + +This service can't start just yet, as there's no `$COMPOSE_PROJECT_DIRECTORY` +configured for it. For this, a similarly named file should be made in +`/etc/conf.d`. In here, any variable used by the service can be configured. + +``` +$EDITOR /etc/conf.d/botamusique +``` + +In my case, it only pertains to the `$COMPOSE_PROJECT_DIRECTORY` variable. + +``` +COMPOSE_PROJECT_DIRECTORY="/var/docker-compose/botamusique" +``` + +And that's it. For additional `docker-compose` projects I need to make only a +symlink and a configuration file. If I discover a bug or nuisance, only a +single file needs to be altered to get the benefit on all the `docker-compose` +services. + +For reference, here's the full `/etc/init.d/docker-compose` file.
+ +```sh +#!/sbin/openrc-run +# Copyright 2021 Gentoo Authors +# Distributed under the terms of the GNU General Public License v2 + +name="$RC_SVCNAME" +description="OpenRC script for managing the $name docker-compose project" + +# Set default values +DOCKER_COMPOSE="${DOCKER_COMPOSE:-docker-compose} $DOCKER_COMPOSE_ARGS" + +COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-$name}" + +# Export all variables used by docker-compose CLI +export COMPOSE_PROJECT_NAME +export COMPOSE_FILE +export COMPOSE_PROFILES +export COMPOSE_API_VERSION +export DOCKER_HOST +export DOCKER_TLS_VERIFY +export DOCKER_CERT_PATH +export COMPOSE_HTTP_TIMEOUT +export COMPOSE_TLS_VERSION +export COMPOSE_CONVERT_WINDOWS_PATHS +export COMPOSE_PATH_SEPARATOR +export COMPOSE_FORCE_WINDOWS_HOST +export COMPOSE_IGNORE_ORPHANS +export COMPOSE_PARALLEL_LIMIT +export COMPOSE_INTERACTIVE_NO_CLI +export COMPOSE_DOCKER_CLI_BUILD + +# Set up (external) networks +for name in "${DOCKER_NETWORKS[@]}" +do + # Create the network if needed + if ! 
docker network ls | awk '{ print $2 }' | grep -q "$name" + then + einfo "Creating docker network '$name'" + docker network create "$name" > /dev/null + fi + + # Expose some variables for the networks + network_id="DOCKER_NETWORK_${name}_ID" + + declare -gx DOCKER_NETWORK_${name}_ID="$(docker network ls | awk '$2 == "'"$name"'" { print $1 }')" + declare -gx DOCKER_NETWORK_${name}_GATEWAY="$(docker network inspect "${!network_id}" | jq -r '.[0].IPAM.Config[0].Gateway')" + + unset network_id +done + +depend() { + need docker +} + +start() { + $DOCKER_COMPOSE --project-directory "$COMPOSE_PROJECT_DIRECTORY" up -d +} + +status() { + $DOCKER_COMPOSE --project-directory "$COMPOSE_PROJECT_DIRECTORY" ps +} + +stop() { + $DOCKER_COMPOSE --project-directory "$COMPOSE_PROJECT_DIRECTORY" down +} +``` diff --git a/content/posts/2021/_index.md b/content/posts/2021/_index.md new file mode 100644 index 0000000..1287d56 --- /dev/null +++ b/content/posts/2021/_index.md @@ -0,0 +1,3 @@ +--- +title: 2021 +--- diff --git a/content/posts/2022/2022-02-14-librewolf.md b/content/posts/2022/2022-02-14-librewolf.md new file mode 100644 index 0000000..0152244 --- /dev/null +++ b/content/posts/2022/2022-02-14-librewolf.md @@ -0,0 +1,92 @@ +--- +date: 2022-02-14 +title: Trying out LibreWolf +tags: +- Firefox +- LibreWolf +--- + +Over the past week, I've been trying out [LibreWolf](https://librewolf.net/) as +an alternative to mainline Firefox. I generally don't hold a high opinion on any +"modern" browser to begin with, but Firefox has been the least bad for quite +some time. I used to actually like Firefox, but Mozilla has done their best to +alienate their user base in search for profits, and is eventually left with +neither. Their latest effort in digging their own grave is teaming up with Meta. + +As such, I have been searching for an alternative (modern) browser for a long +time. 
One major requirement that I've had is to have something like +[uMatrix](https://addons.mozilla.org/en-US/firefox/addon/umatrix/). I also +obviously want features to block any and all advertisements, as these are a +major detriment to your own mental health, and to the resources your machine +uses. So, when someone recommended LibreWolf to me, which is just a more +user-respecting fork of Firefox, I didn't hesitate to try it out. + +The migration from Firefox to LibreWolf was remarkably simple. Since I use [a +small +wrapper](https://git.tyil.nl/dotfiles/tree/.local/bin/firefox?id=1c8e9b136d9d00decc1d3570fe58072427107148) +to launch Firefox with a specific profile directory, I just had to update that +to launch LibreWolf instead. It kept all my settings, installed add-ons, and even +open tabs. It seems that by default, however, it will use its own directory for +configuration. If you want to try out LibreWolf and have a similar experience, +you can just copy over your old Firefox configuration directory to a new +location for use with LibreWolf. In hindsight, that probably would've been the +safer route for me as well, but it already happened and it all went smoothly, so +no losses. + +Now, while LibreWolf is more-or-less like Firefox, only less harmful to its own +users, some of the tweaks made by the LibreWolf team may or may not be desired. +I've taken note of any differences that could be perceived as issues. So far, +they're not breaking for me, but these may be of interest to you if you're +looking to try LibreWolf out as well. + +## HTTP + +By default, LibreWolf will not let you visit sites over HTTP. This is generally +a very nice feature, but for some public hot-spots, this may cause issues. These +are generally completely unencrypted, and LibreWolf will refuse to connect. The +page presented instead will inform you that the page you're trying to visit is +unencrypted, and allow you to make a temporary exception.
Not a very big issue, +but it may be a little bit more annoying than you're used to. + +## Add-ons + +While all my add-ons were retained, I did want to get another add-on to redirect +me away from YouTube, to use an Invidious instance. The page for installing +add-ons itself seems to work fine, but upon clicking the Install button, and +accepting the installation, LibreWolf throws an error that it simply failed to +install anything. The Install button is nothing more than a fancy anchor with a +link to the `xpi` file, so you can download the file and install the +add-on manually through the [Add-ons Manager](about:addons). + +## Element + +I've been using Matrix for a while, as an open source platform friendly to +non-technical people, for those unwilling to use IRC. Their recommended client, +[Element](https://app.element.io/), is just another web page, because that's +sadly how most software is made these days. The chat itself works without a +hitch, but there are two minor inconveniences compared to my regular Firefox +setup. + +The first one is that LibreWolf does not share my local timezone with the +websites I visit. This causes timestamps to be off by one hour in the Element +client. A very minor issue that I can easily live with. + +The other is that the "default" icons, which are a capital letter on a colored +background, [don't look so good](https://dist.tyil.nl/blog/matrix-icons.png). +There are some odd artifacts in the icons, though they don't seem to affect the letter +shown. Since I mostly use the +[weechat-matrix](https://github.com/poljar/weechat-matrix) plugin, it's not +really an issue. And for the few times I do use Element, it doesn't bother me +enough to consider it a real issue. + +## Jellyfin + +For consuming all sorts of media, I have [Jellyfin](https://jellyfin.org/) set +up for personal use. This worked fine in my regular Firefox setup, but does not +seem to be willing to play any videos in LibreWolf.
The console logs show some +issues with websockets, and I've not been able to find a good way to work around +this yet. For now, I'll stick to using `mpv` to watch any content to deal with +this issue. + +All in all, I think LibreWolf is a pretty solid browser, and unless I discover +something major to turn me off, I'll keep using it for the foreseeable future. diff --git a/content/posts/2022/2022-02-20-nginx-tls.md b/content/posts/2022/2022-02-20-nginx-tls.md new file mode 100644 index 0000000..0baef43 --- /dev/null +++ b/content/posts/2022/2022-02-20-nginx-tls.md @@ -0,0 +1,92 @@ +--- +date: 2022-02-20 +title: Updating NginX TLS settings for 2022 +tags: +- TLS +- NginX +--- + +My blog (and pretty much every other site I host) is using Let's Encrypt +certificates in order to be served over https. Using any certificate at all is +generally an upgrade when it comes to security, but your webserver's +configuration needs to be up to standard as well. Unlike the Let's Encrypt +certificates, keeping the configs up to date requires manual intervention, and +is therefore something I don't do often. + +This week I decided I should check up on the state of my SSL configuration in +nginx. I usually check this through [SSL Lab's SSL Server +Test](https://www.ssllabs.com/ssltest/analyze.html). This is basically +[testssl.sh](https://testssl.sh/) in a shiny web frontend, and assigns you a +score. I aim to always have A or higher, but it seemed my old settings were +capped at a B. This was due to the cipher list allowing ciphers which are +nowadays considered insecure. + +While I could've just updated the cipher list, I decided to check up on all the +other settings as well. There's a [GitHub +gist](https://gist.github.com/gavinhungry/7a67174c18085f4a23eb) which shows you +what settings to use with nginx to make it secure by modern standards, which +I've used to check if I should update some of my own settings. + +My old settings looked as follows. 
Please don't be too scared of the giant list +in the `ssl_ciphers`. + +```nginx +# DHparams +ssl_dhparam /etc/nginx/dhparam.pem; + +# SSL settings +ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS'; +ssl_prefer_server_ciphers off; +ssl_protocols TLSv1.2 TLSv1.3; +ssl_session_cache shared:le_nginx_SSL:10m; +ssl_session_tickets off; +ssl_session_timeout 1440m; + +# Additional headers +add_header Strict-Transport-Security "max-age=63072000" always; +``` + +With the new settings, I've added `ssl_buffer_size` and `ssl_ecdh_curve`. A +friend on IRC pointed out I should enable `ssl_prefer_server_ciphers`, so this +has been enabled too. + +But most notably, the list of `ssl_ciphers` has been dramatically reduced. I +still allow TLSv1.2 in order to allow slightly older clients to connect without +any issues, but the ciphers considered *WEAK* have been disabled explicitly. +This leaves a total of 5 ciphers to use, all of them using ECDHE, so the +`ssl_dhparam` could be dropped as well. + +Lastly, I've added a couple headers for security reasons, as recommended by +[securityheaders.com](https://securityheaders.com). 
+ +```nginx +# SSL settings +ssl_protocols TLSv1.3 TLSv1.2; + +ssl_buffer_size 4K; +ssl_ecdh_curve secp521r1:secp384r1; +ssl_prefer_server_ciphers on; +ssl_session_cache shared:le_nginx_SSL:2m; +ssl_session_tickets off; +ssl_session_timeout 1440m; + +ssl_ciphers 'EECDH+AESGCM:EECDH+AES256:!ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES256-SHA'; + +# Additional headers +add_header Content-Security-Policy "default-src 'self'" always; +add_header Referrer-Policy "strict-origin-when-cross-origin" always; +add_header Strict-Transport-Security "max-age=63072000" always; +add_header X-Content-Type-Options "nosniff" always; +add_header X-Frame-Options "SAMEORIGIN" always; +``` + +{{< admonition title="note" >}} +I would still like the `ssl_ciphers` to be formatted in a more clean way, which +I've tried to do with a variable through `set`, but it appears variables are not +expanded within `ssl_ciphers`. If you have any methods to format the list of +ciphers used in a cleaner way, I'd be very happy to know! +{{< / admonition >}} + +This configuration is saved as a small snippet which I `include` in all other +site configurations, so it is updated everywhere at once as well. Now I should +be able to neglect this for another year or two again. diff --git a/content/posts/2022/2022-03-05-deprecating-reiserfs.md b/content/posts/2022/2022-03-05-deprecating-reiserfs.md new file mode 100644 index 0000000..9dcec8d --- /dev/null +++ b/content/posts/2022/2022-03-05-deprecating-reiserfs.md @@ -0,0 +1,74 @@ +--- +date: 2022-03-05 +title: Deprecating ReiserFS +tags: +- BTRFS +- Filesystems +- GNU+Linux +- ReiserFS +- ZFS +- bcachefs +--- + +[ReiserFS is getting deprecated from Linux](https://lkml.org/lkml/2022/2/20/89), +mostly due to it not being ready for the year 2038. This is a little sad, as I +still use it on some systems for storing the Gentoo Portage tree, and the Linux +kernel sources. 
It works well for this because it supports +[tail packing](https://en.wikipedia.org/wiki/Block_suballocation#Tail_packing), +a form of block suballocation, which can save disk space. + +So, what alternatives are there for ReiserFS? After asking around and reading +some comments around the Internet, I've narrowed it down to 3 potential +candidates, bcachefs, btrfs, and zfs. Each comes with their own pros and cons, +as things tend to do. + +## bcachefs + +There are several downsides for bcachefs for me. The first one I found was that +the documentation on their main site seems a bit lacking, followed shortly by +finding that there are no ebuilds for it in Gentoo. + +Since it was suggested several times on comments on a certain orange site, I +asked around if it at least supported block suballocation, which is the main +reason I would want to use it anyway. The answer came back as a "no", so I could +safely ignore it for the rest of the journey. + +## BTRFS + +BTRFS seems like a more serious contender. It supports block suballocation, and +has good enough documentation. As an additional benefit, it is supported in the +mainline Linux kernel, making it easy to use on any modern setup. There are a +few issues, such as having to rebalance in certain situations, and this +rebalancing can itself cause issues. The files I'm storing are relatively easily +recreated with a single git clone, or downloading a tarball and unpacking that, +so that doesn't have to be problematic to me. + +## ZFS + +The final contestant, ZFS, supports block suballocation and has great +documentation. It is not part of the mainline Linux kernel, however, so this may +make things more complex on some systems. I run ZFS already on a few machines, +but not all, so where it is not used already, it is a drawback. 
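
To make the tail-packing advantage concrete, here is a rough back-of-the-envelope sketch in Python (the file sizes are made up for illustration, not measurements): without block suballocation, every file's tail still occupies a whole block, so a tree of many sub-4-KiB files wastes a large fraction of its allocation.

```python
BLOCK_SIZE = 4096  # a common filesystem block size, in bytes

# Hypothetical file sizes, loosely modelled on a tree of many small files.
file_sizes = [700, 1200, 300, 5000, 8192, 150, 2048, 950]

def allocated(size, block_size=BLOCK_SIZE):
    # Without tail packing, a file occupies whole blocks, rounded up.
    blocks = (size + block_size - 1) // block_size
    return max(blocks, 1) * block_size

used = sum(file_sizes)
full_blocks = sum(allocated(s) for s in file_sizes)
slack = full_blocks - used

print(f"payload: {used} B, allocated: {full_blocks} B, slack: {slack} B")
# → payload: 18540 B, allocated: 40960 B, slack: 22420 B
```

Averaged over many random small files, roughly half a block per file is lost this way, which is exactly the slack that tail packing can reclaim.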
+ +Since my main concern is storing many small files, I created a few logical +volumes (and 1 ZFS subvol) and cloned the main reason for wanting a filesystem +with block suballocation, the [Gentoo Portage +tree](https://github.com/gentoo/portage). The cloning itself was done with +`--depth=1`. For reference, I also created an ext4 volume. + +``` +/dev/mapper/edephas0-test.btrfs       5.0G  559M  3.8G  13% /tmp/test/btrfs +/dev/mapper/edephas0-test.ext4        4.9G  756M  3.9G  17% /tmp/test/ext4 +/dev/mapper/edephas0-test.reiserfs    5.0G  365M  4.7G   8% /tmp/test/reiserfs +tyilstore0/test                       5.0G  1.1G  4.0G  21% /tmp/test/zfs +``` + +Looking at the output from `df -h`, ReiserFS seems to still be a clear winner +when it comes to storing many small files. Nothing is even close. What does +surprise me, however, is that ZFS is actually resulting in the largest space +requirement. I'm not sure why this is, as it should support block suballocation +just fine according to [the filesystem comparison chart on +Wikipedia](https://en.wikipedia.org/wiki/Comparison_of_file_systems#Allocation_and_layout_policies). + +BTRFS comes out as the next best option, after ReiserFS, so that'll be what I am +going to use on my systems for storing large trees of small files. diff --git a/content/posts/2022/2022-04-15-fixing-w-in-tremc.md b/content/posts/2022/2022-04-15-fixing-w-in-tremc.md new file mode 100644 index 0000000..c434a6e --- /dev/null +++ b/content/posts/2022/2022-04-15-fixing-w-in-tremc.md @@ -0,0 +1,140 @@ +--- +date: 2022-04-15 +title: Fixing ^w in tremc +tags: +- Python +- Transmission +- tremc +--- + +I like collecting GNU+Linux ISOs. I'd like to think I have a pretty serious +collection of these things. I have a pretty good and stable Internet connection, +so I collect them in my torrent client, and let them seed pretty much forever, +so other people can enjoy them too.
+ +For this hobby, I'm using [Transmission](https://transmissionbt.com/), with +[tremc](https://github.com/tremc/tremc) as the frontend. I've been using it for +a while, but there's always been a small thing bothering me. Whenever you get a +prompt to specify the name of the torrent, or the directory to which its +contents should be downloaded, `^w` immediately kills the entire application. + +By regular shell standards, I'm used to `^w` not killing the application, but +just removing from the cursor to the start of the previous word. I don't often +change the names of the torrents, but when I do, I often use `^w` to quickly +remove one or two words in the path. With tremc, this sadly means me killing the +application, being upset for a little while as I restart it, and holding down the +backspace for a while to get the intended effect. + +Until last night. I set out to read the documentation to fix this issue once and +for all. I dug into the manual, which specifies that a configuration file at +`~/.config/tremc/settings.cfg` is read at startup. However, the manual +doesn't specify anything about the format of this file. There are also no other +manual pages included in the package. + +The Gentoo package specifies a homepage for this project on GitHub, so I open up +my least-hated browser and see if there's anything there. Good news, the +repository contains a sample `settings.cfg`, so I can read that and get an idea +on how to do keybinds. And the examples do show how to _set_ keybinds. But it +doesn't seem to be able to _remove_ keybinds. This is kind of a bummer. Setting +a keybind to an empty value didn't seem to do the trick either. This leaves only +one option: patching the defaults. + +This was actually pretty straightforward, just look for the list of default +keybinds, and remove a single line.
+ +```patch +@@ -222,7 +222,6 @@ class GConfig: +         # First in list: 0=all 1=list 2=details 3=files 4=tracker 16=movement +         # +256 for RPC>=14, +512 for RPC>=16 +         'list_key_bindings': [0, ['F1', '?'], 'List key bindings'], +-        'quit_now': [0, ['^w'], 'Quit immediately'], +         'quit': [1, ['q'], 'Quit'], +         'leave_details': [2, ['BACKSPACE', 'q'], 'Back to torrent list'], +         'go_back_or_unfocus': [2, ['ESC', 'BREAK'], 'Unfocus or back to torrent list'], +``` + +Since I'm just testing, and this program is a single Python file, I just edit +the file in-place, and delay making a proper patch out of it for later. Restarting +tremc, going to the new torrent, pressing `m` (for "move"), and hitting `^w` to +try and remove a single word, hoping for a quick and easy fix, I was met with +tremc just quitting again. This was not what I wanted. + +So opening up the file again with everyone's favourite editor, I search around +for the `quit_now` function. Surely that'll bring me closer? The `quit_now` +string doesn't seem to be used anywhere else, apart from the function that +defines what the keybind action should do, `action_quit_now`. This seems to +simply defer to `exit_now`, which leads me to a bit of code with a special case +to `exit_now` if a certain character is detected. This character appears to be a +`^w`, which is exactly what I'm trying to stop. So, let's patch that out too. + +```patch +@@ -3760,10 +3759,7 @@ class Interface: +         self.update_torrent_list([win]) + +     def wingetch(self, win): +-        c = win.getch() +-        if c == K.W_: +-            self.exit_now = True +-        return c ++        return win.getch() + +     def win_message(self, win, height, width, message, first=0): +         ypos = 1 +``` + +Restart tremc and test again. Still exiting immediately upon getting a `^w`. +This bit of code gives me some new insights, it appears `K.W_` is related to the +keycode of a `^w`. So I continue the search, this time looking for `K.W_`.
This +appears to be used later on to create a list of characters which should act as +`esc` in certain contexts. Removing the `K.W_` from this is simple enough. + +```patch +@@ -5039,7 +5047,7 @@ def parse_config_key(interface, config, gconfig, common_keys, details_keys, list + else: + gconfig.esc_keys = (K.ESC, K.q, curses.KEY_BREAK) + gconfig.esc_keys_no_ascii = tuple(x for x in gconfig.esc_keys if x not in range(32, 127)) +- gconfig.esc_keys_w = gconfig.esc_keys + (K.W_,) ++ gconfig.esc_keys_w = gconfig.esc_keys + gconfig.esc_keys_w_enter = gconfig.esc_keys_w + (K.LF, K.CR, curses.KEY_ENTER) + gconfig.esc_keys_w_no_ascii = tuple(x for x in gconfig.esc_keys_w if x not in range(32, 127)) +``` + +Restart, test, and... Nothing. This is an improvement, the program isn't exiting +immediately, it just does nothing. I know that `^u` works as I expect, so +perhaps it needs some love to have `^w` work properly as well. I search for +`K.U_`, and indeed, there is code dedicated to this keypress, in a long `elif` +construct. So I add some code to get `^w` working as well. After a bit of +fiddling, and realizing I've spent way too much time on adding a torrent, I've +settled on this little bit of love. + +```patch +@@ -3957,6 +3953,18 @@ class Interface: + # Delete from cursor until beginning of line + text = text[index:] + index = 0 ++ elif c == K.W_: ++ # Delete from cursor to beginning of previous word... mostly ++ text_match = re.search("[\W\s]+", text[::-1]) ++ ++ if text_match.span()[0] == 0: ++ # This means the match was found immediately, I can't be ++ # bothered to make this any nicer. ++ text = text[:-1] ++ index -= 1 ++ else: ++ text = text[:-text_match.span()[0]] ++ index -= text_match.span()[0] + elif c in (curses.KEY_HOME, K.A_): + index = 0 + elif c in (curses.KEY_END, K.E_): +``` + +It looks for the first non-word character or a space, starting from the end of +the string (`[::-1]` reverses a string in Python). 
The resulting `Match` object +can tell me how many characters I need to delete in the `.span()[0]` value. A +small exception is created if that value is `0`, otherwise the logic below +doesn't work well. + +It's not perfect, but it gets the job done well enough, and I don't like Python +enough to spend more time on it than I've already done. I am open to better +solutions, though! diff --git a/content/posts/2022/2022-04-23-a-cookbook.md b/content/posts/2022/2022-04-23-a-cookbook.md new file mode 100644 index 0000000..e86fcda --- /dev/null +++ b/content/posts/2022/2022-04-23-a-cookbook.md @@ -0,0 +1,52 @@ +--- +date: 2022-04-23 +title: A cookbook! +tags: +- Food +--- + +Last week, I've decided to add a cookbook to my website. I've thought about +doing so for a while, but was rather hesitant since my website was mostly a +tech-minded blog. However, this time around I've decided to simply add a +cookbook, and have separate RSS feeds for my [blog posts](/posts/index.xml) and +[recipes](/recipes/index.xml), so that readers can decide what they want to +follow. Now I can easily share any recipes that visitors have asked me to share +over the years. + +The cookbook has an overview of all recipes, and each individual recipe has a +layout greatly inspired by MartijnBraam's [FatHub](https://fathub.org). The +JavaScript is an exact copy, even. + +The format of the recipes has been altered slightly. I found the FatHub format +to be a little too verbose to make it simple enough to want to write down the +recipes. So I've trimmed down on some of that. You can check the sources that I +use on [git.tyil.nl](https://git.tyil.nl/blog/). + +Since my website is generated with [Hugo](https://gohugo.io), I had to do the +template myself as well. But the templates were fairly simple to port over, and I am +fairly happy with the result.
I've made two changes compared to the layout used
+by FatHub: every step in the instructions is numbered, and the checkboxes for
+the ingredients are at the left-hand side. The ingredients don't contain a
+preparation step in the tables, as I've made that part of the cooking
+instructions.
+
+The entire thing was done in an afternoon during Easter, so it was pretty
+straightforward to set up. I haven't included many recipes yet, as I want to
+double-check the right amounts of ingredients and instructions for many of them.
+The downside of a recipe is that people generally follow it to the letter, while
+I'm not a professional cook; I often cook just by tasting and adjusting as
+desired.
+
+There's one thing I didn't include, but which I might consider working on in the
+future: [baker's percentages](https://en.wikipedia.org/wiki/Baker_percentage).
+Some of my recipes have been written down in this notation, and for now I'll be
+converting them to regular quantities instead. There are also some downsides to
+baker's percentages, such as it being harder to calculate what a serving size
+is, or the duration of certain steps, which is why I haven't made this a hard
+requirement for the cookbook section of my site.
+
+I think the cookbook is more than enough to get started with, and to try sharing
+some recipes in. If I've ever cooked something for you in person, and you would
+like to know the recipe, don't hesitate to ask me to include it in the cookbook.
+Additionally, if you have some tips on how to improve an existing recipe, I'd be
+very happy to hear about it!
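Converting a recipe out of baker's percentages is a single multiplication per ingredient, since each quantity is expressed as a percentage of the total flour weight. A quick sketch of that arithmetic in shell (the helper name and the sample numbers are mine, not from any recipe in the cookbook):

```shell
#!/bin/sh
# bakers_to_grams FLOUR_GRAMS PERCENT
# In baker's notation every ingredient is a percentage of the flour weight,
# so converting to absolute quantities is one multiplication per ingredient.
# awk handles the floating point math, since shell arithmetic is integer-only.
bakers_to_grams() {
    awk -v f="$1" -v p="$2" 'BEGIN { printf "%d\n", f * p / 100 }'
}

# A dough with 500g of flour, at 65% hydration and 2% salt:
bakers_to_grams 500 65   # -> 325 (grams of water)
bakers_to_grams 500 2    # -> 10  (grams of salt)
```

The flour itself is always 100%, which is also what makes serving sizes awkward: the total dough weight is the sum of all the percentages, not a round number.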
diff --git a/content/posts/2022/2022-05-07-bashtard-introduction.md b/content/posts/2022/2022-05-07-bashtard-introduction.md
new file mode 100644
index 0000000..0f93fdd
--- /dev/null
+++ b/content/posts/2022/2022-05-07-bashtard-introduction.md
@@ -0,0 +1,265 @@
+---
+title: Configuring my Machines with Bashtard
+date: 2022-05-07
+tags:
+- Bash
+- Bashtard
+- FreeBSD
+- GNU+Linux
+- Programming
+---
+
+Over the past couple of weeks I've been spending some time here and there to
+work on my own application for configuring my machines. Before this I've tried
+Ansible, but found it to be very convoluted to use, requiring a lot of
+conditionals if your machines aren't all running the same base system.
+
+So I made something in Bash, with a few abstractions to make certain
+interactions less annoying to do manually every time. This used to be called
+`tyilnet`, but I've discussed the setup with a few people on IRC, and decided it
+would be a fun project to make it a bit more agnostic, so other people could
+also easily start using it. This resulted in the creation of
+[Bashtard](https://git.tyil.nl/bashtard/), pronounced as "bash", followed by
+"tard" (as in "bastard").
+
+It works by simply writing Bash scripts to do the configuration, and provides
+abstractions for using the system's package manager, service manager, and some
+utilities such as logging and dealing with configured values. Configuration
+values can be set on a per-host or per-OS basis. Since I run a varied base of
+OSs, including Gentoo, Debian, and FreeBSD, the per-OS configuration comes in
+very handy to me.
+
+As for the reason to use Bash, I chose it because most of the systems I run
+already have it installed, so it doesn't add a dependency _most of the time_.
+I would've liked to do it in POSIX sh, but I feel that when you're reaching a
+certain level of complexity, Bash offers some very nice features which can make
+your code cleaner, or less likely to contain bugs.
Features such as `[[ ]]`,
+`local`, and arrays come to mind.
+
+I've been kindly asked to guide potential new users through writing their first
+Bashtard script, known as a _playbook_, so if you want to know how it works in
+practice, keep on reading. If you're satisfied with your current configuration
+management system, this might not be quite as interesting to you, so be warned.
+
+The first step for a new user would obviously be to install Bashtard, as it's
+not in any OS's package repositories yet. A `Makefile` is supplied in the
+repository, which should make this easy enough.
+
+```txt
+git clone https://git.tyil.nl/bashtard
+cd bashtard
+sudo make install
+hash -r
+```
+
+Once installed, it needs some initialization.
+
+```txt
+bashtard init
+```
+
+This will create the basic structure in `/etc/bashtard`, including a
+`playbooks.d`. Inside this `playbooks.d` directory, any directory is considered
+to be a playbook, which requires a `description.txt` and a `playbook.bash`.
+
+```txt
+cd /etc/bashtard/playbooks.d
+mkdir ssh
+cd ssh
+echo "OpenSSH configuration" > description.txt
+$EDITOR playbook.bash
+```
+
+The `playbook.bash` needs to contain 3 functions which are used by `bashtard`: a
+`playbook_add()`, `playbook_sync()`, and `playbook_del()`. These will be called
+by the `bashtard` subcommands `add`, `sync`, and `del` respectively.
+
+I generally start with the `playbook_sync()` function first, since this is the
+function that'll ensure all the configurations are kept in sync with my desires.
+I want to have my own `sshd_config`, which needs some templating for the
+`Subsystem sftp` line. There's a `file_template` function provided by bashtard,
+which does some very simple templating. I'll pass it the `sftp` variable to use.
+
+```bash
+playbook_sync() {
+    file_template sshd_config \
+        "sftp=$(config "ssh.sftp")" \
+        > /etc/ssh/sshd_config
+}
+```
+
+Now to create the actual template.
The `file_template` function looks for
+templates in the `share` directory inside the playbook directory.
+
+```txt
+mkdir share
+$EDITOR share/sshd_config
+```
+
+Since I already know what I want my `sshd_config` to look like from previously
+installed systems, I'll just use that, but with a variable for the `Subsystem
+sftp` value.
+
+```cfg
+# Connectivity
+Port 22
+AddressFamily any
+ListenAddress 0.0.0.0
+ListenAddress ::
+
+# Fluff
+PrintMotd yes
+
+# SFTP
+Subsystem sftp ${sftp}
+
+# Authentication
+AuthorizedKeysFile /etc/ssh/authorized_keys .ssh/authorized_keys
+PermitRootLogin no
+PasswordAuthentication no
+ChallengeResponseAuthentication no
+PubkeyAuthentication no
+
+# Allow tyil
+Match User tyil
+    PubkeyAuthentication yes
+
+# Allow public key authentication over VPN
+Match Address 10.57.0.0/16
+    PubkeyAuthentication yes
+    PermitRootLogin prohibit-password
+```
+
+The `${sftp}` placeholder will be filled with whatever value is returned by
+`config "ssh.sftp"`. For this to work properly, we will need to define the
+variable somewhere. These values are written to the `etc` directory inside a
+playbook. You can specify defaults in a file called `defaults`, and these can be
+overwritten by OS-specific values, which in turn can be overwritten by
+host-specific values.
+
+```txt
+mkdir etc
+$EDITOR etc/defaults
+```
+
+The format for these files is a very simple `key=value`. It splits on the first
+`=` to determine what the key and value are. This means you can use `=` in your
+values, but not in your keys.
+
+```txt
+ssh.sftp=/usr/lib/openssh/sftp-server
+```
+
+This value is correct for Debian and derivatives, but not for my Gentoo or
+FreeBSD systems, so I've created OS-specific configuration files for these.
+
+```txt
+mkdir etc/os.d
+cat etc/os.d/linux-gentoo
+ssh.sftp=/usr/lib64/misc/sftp-server
+```
+
+```txt
+cat etc/os.d/freebsd
+ssh.sftp=/usr/libexec/sftp-server
+```
+
+My `sshd_config` template also specifies the use of a `Motd`, so that needs to
+be created as well. This can again be done using the `file_template` function.
+
+```bash
+file_template "motd" \
+    "fqdn=${BASHTARD_PLATFORM[fqdn]}" \
+    "time=$(date -u "+%FT%T")" \
+    > /etc/motd
+```
+
+The `motd` template gets saved at `share/motd`.
+
+```txt
+████████╗██╗   ██╗██╗██╗     ███╗   ██╗███████╗████████╗
+╚══██╔══╝╚██╗ ██╔╝██║██║     ████╗  ██║██╔════╝╚══██╔══╝
+   ██║    ╚████╔╝ ██║██║     ██╔██╗ ██║█████╗     ██║
+   ██║     ╚██╔╝  ██║██║     ██║╚██╗██║██╔══╝     ██║
+   ██║      ██║   ██║███████╗██╗██║ ╚████║███████╗  ██║
+   ╚═╝      ╚═╝   ╚═╝╚══════╝╚═╝╚═╝  ╚═══╝╚══════╝  ╚═╝
+
+Welcome to ${fqdn}, last updated on ${time}.
+```
+
+Next, we want to ensure the SSH daemon gets reloaded after every sync, so let's
+add that to the `playbook_sync()` function as well.
+
+```bash
+svc reload "sshd"
+```
+
+The `svc` utility looks for a configuration value that starts with `svc.`,
+followed by the service you're trying to act upon, so in this case that would be
+`svc.sshd`. We can add this to our configuration files in `etc`. Across all my
+machines, `sshd` seems to work as the value, so I only need to add one line to
+`etc/defaults`.
+
+```txt
+svc.sshd=sshd
+```
+
+This should take care of all the things I want automatically synced. The
+`playbook_add()` function is intended for all one-time setup required for any
+playbook. In this case that means the SSH daemon's service needs to be
+activated, since it is not active by default on all my setups.
+
+```bash
+playbook_add() {
+    svc enable "sshd"
+    svc start "sshd"
+}
+```
+
+However, `add` does not call a `sync`, and I don't want my SSH service to run
+with default configuration until a `sync` is initialized.
So before enabling and
+starting the service, I will call `sync` manually, by running a `playbook_sync`
+first. This in turn, however, poses another problem, as `playbook_sync()` wants
+to reload the service, which it can't do unless it is already running. To fix
+this, I'll add a check to skip reloading if `bashtard` is running the `add`
+command.
+
+```bash
+playbook_add() {
+    playbook_sync
+
+    svc enable "sshd"
+    svc start "sshd"
+}
+
+playbook_sync() {
+    ...
+
+    [[ $BASHTARD_COMMAND == "add" ]] && return
+
+    svc reload "sshd"
+}
+```
+
+Now, `bashtard add ssh` will run the `playbook_add()` function, which calls the
+`playbook_sync()` function before enabling and starting the `sshd` service. All
+that is left is the `playbook_del()` function, which only really needs to stop
+and disable the service. The templated files can be removed here as well if
+desired, of course.
+
+```bash
+playbook_del() {
+    svc stop "sshd"
+    svc disable "sshd"
+}
+```
+
+Lastly, I configured my `crond` to run `bashtard sync` every 20 minutes, so
+whenever I update my configurations, it can take up to 20 minutes to propagate
+to all my machines. Having an abstraction to deal with `cron` (or SystemD timers
+where applicable) in Bashtard is something I'd like to add, but I have no
+concrete plans on how to do this, yet.
+
+You can find the full `playbook.bash` source on
+[git.tyil.nl](https://git.tyil.nl/tyilnet/tree/playbooks.d/ssh/playbook.bash?id=319ab064370cb1e65be115ffddf5c0cd519af2dd).
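The configuration lookup described above — split on the first `=`, with host-specific values overriding OS-specific values overriding `defaults` — boils down to something like the sketch below. This is a simplified illustration rather than Bashtard's actual `config` implementation; to stay self-contained it takes its candidate files as explicit arguments, in precedence order:

```shell
#!/bin/sh
# config_lookup KEY FILE...
# Print the value for KEY from the first file that defines it; files are
# given in precedence order (host-specific, OS-specific, defaults).
config_lookup() {
    key=$1
    shift

    for file in "$@"; do
        [ -f "$file" ] || continue

        while IFS= read -r line; do
            case $line in
                "$key"=*)
                    # Split on the first '=' only, so values may contain '='
                    printf '%s\n' "${line#*=}"
                    return 0
                    ;;
            esac
        done < "$file"
    done

    return 1
}

# Demo with the values from this post:
dir=$(mktemp -d)
printf 'ssh.sftp=/usr/lib/openssh/sftp-server\n' > "$dir/defaults"
printf 'ssh.sftp=/usr/lib64/misc/sftp-server\n' > "$dir/linux-gentoo"

config_lookup ssh.sftp "$dir/linux-gentoo" "$dir/defaults"
# -> /usr/lib64/misc/sftp-server
```

With both files present the Gentoo-specific file wins; a host that only has `etc/defaults` would get the Debian path instead.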
diff --git a/content/posts/2022/2022-08-06-installing-gentoo-encrypted-zfs-efistub.md b/content/posts/2022/2022-08-06-installing-gentoo-encrypted-zfs-efistub.md
new file mode 100644
index 0000000..2825b7c
--- /dev/null
+++ b/content/posts/2022/2022-08-06-installing-gentoo-encrypted-zfs-efistub.md
@@ -0,0 +1,242 @@
+---
+date: 2022-11-20
+title: "Installing Gentoo with encrypted ZFS rootfs and EFIstub kernel"
+tags:
+- GNU+Linux
+- Gentoo
+- Tutorial
+- ZFS
+---
+
+A little while ago, I got a new work laptop. As is customary, I installed my
+preferred GNU+Linux environment onto it. Consequently, a few people have asked
+me to detail my steps to get this system up and running, as they would like to
+try out a setup similar to mine. It's also been a while since I made another
+blog post, so here's killing two birds with one stone!
+
+## Preparing disks
+
+Make sure you get the right device name, or you'll purge the data on some other
+drive!
+
+```sh
+parted -a optimal /dev/nvme1n1
+mklabel gpt
+mkpart esp 1 5130
+mkpart rootfs 5130 -1
+set 1 boot on
+quit
+```
+
+### Get IDs of partitions

+
+For partitioning I've lately come to love using disk IDs, rather than their
+`/dev/sd*` entries. They're easy to look up, so copy them over to use them later
+on.
+
+```sh
+ls -l /dev/disk/by-id
+```
+
+- `nvme-eui.36483331545090280025385800000001-part1` -> ESP
+- `nvme-eui.36483331545090280025385800000001-part2` -> ZFS
+
+### Formatting
+
+#### ESP
+
+The ESP holds the kernel and initramfs, and _must_ be FAT32.
+
+```sh
+mkfs.vfat -F32 /dev/disk/by-id/nvme-eui.36483331545090280025385800000001-part1
+```
+
+#### zpool
+
+The zpool settings below are the ones I used. You should verify that they also
+work optimally for your setup! I generally name my pools after the device
+they're running from, in this case `ivdea`. Any name will work here, just make
+sure to be consistent later down the guide!
+
+```sh
+rm -f /etc/hostid && zgenhostid
+
+zpool create -f \
+    -O acltype=posixacl \
+    -O compression=lz4 \
+    -O dedup=off \
+    -O encryption=aes-256-gcm \
+    -O keyformat=passphrase \
+    -O keylocation=prompt \
+    -O relatime=on \
+    -O xattr=sa \
+    -R /mnt/gentoo \
+    -m none \
+    -o ashift=12 \
+    -o cachefile=/etc/zfs/zpool.cache \
+    ivdea0 \
+    /dev/disk/by-id/nvme-eui.36483331545090280025385800000001-part2
+
+zfs create -o mountpoint=none ivdea0/rootfs
+zfs create -o mountpoint=/ ivdea0/rootfs/gentoo
+zfs create -o mountpoint=none ivdea0/rootfs/gentoo/usr
+zfs create -o mountpoint=none ivdea0/rootfs/gentoo/var
+zfs create -o mountpoint=none ivdea0/rootfs/gentoo/var/lib
+zfs create -o mountpoint=none ivdea0/home
+zfs create -o mountpoint=/home/tyil ivdea0/home/tyil
+
+zpool set bootfs=ivdea0/rootfs/gentoo ivdea0
+```
+
+## Preparing chroot
+
+You will want to grab the latest Gentoo autobuild tarball for your architecture.
+I'm _not_ using systemd; if you do desire it for some reason, you may need to
+alter some steps.
+
+### Initial
+
+```sh
+cd /mnt/gentoo
+mkdir efi
+mount /dev/disk/by-id/nvme-eui.36483331545090280025385800000001-part1 efi
+wget $STAGE3 # Use whichever URL for the stage3 tarball you need
+tar xpf stage3*.tar.xz --xattrs-include='*.*' --numeric-owner
+```
+
+### Recovery
+
+This section is labeled "Recovery" to make it easy to find later, in case you
+need to go back into the chroot to fix up any issues that prevent you from
+booting it.
+
+```sh
+mkdir -p etc/zfs
+cp /etc/zfs/zpool.cache etc/zfs
+cp --dereference /etc/resolv.conf /mnt/gentoo/etc/
+mount -t proc /proc proc
+mount --rbind --make-rslave /sys sys
+mount --rbind --make-rslave /dev dev
+mount --rbind --make-rslave /run run
+chroot . /bin/bash -l
+```
+
+## Configuring the system
+
+The base system is now installed, and most of the following steps are for
+configuring it to actually work properly.
+
+### Portage
+
+Run the initial Portage tree download.
This will use `webrsync`; you can
+configure it to use `git` at a later stage if desired.
+
+```sh
+mkdir -p /etc/portage/repos.conf
+cp /usr/share/portage/config/repos.conf /etc/portage/repos.conf/gentoo.conf
+emerge-webrsync
+```
+
+### Editor
+
+Of course, you can stick to `nano`, but I've been a vim guy for a very long time
+now, and without it I feel sad. It is the first thing I install, to make the
+rest of the configuration easier to do, by virtue of having the best editor
+available.
+
+```sh
+emerge vim
+```
+
+Once `vim` (or whichever worse editor you prefer) is installed, you can go
+around editing configuration files as needed.
+
+### locale
+
+Enable all the locales you desire in `/etc/locale.gen`. Once all the desired
+locales are uncommented, you can generate the locales with `locale-gen`. You
+will most likely also want to add the locales to the `L10N` variable in your
+`make.conf`.
+
+### timezone
+
+Set your timezone by making `/etc/localtime` a symlink to the timezone you use.
+
+```sh
+ln -fs /usr/share/zoneinfo/Europe/Amsterdam /etc/localtime
+```
+
+### hostname
+
+Set the machine's short hostname in `/etc/conf.d/hostname` first, then add your
+hostname aliases to `/etc/hosts`.
+
+```txt
+# /etc/conf.d/hostname
+hostname="ivdea"
+
+# /etc/hosts
+127.0.0.1 ivdea.tyil.net ivdea
+::1 ivdea.tyil.net ivdea
+```
+
+### kernel
+
+{{< admonition title="Note" >}}
+This will build the initramfs twice, since emerging gentoo-kernel will build it
+automagically. This can be "fixed" by removing a USE flag, but this is easier
+for me.
+{{</ admonition >}}
+
+By the time you're reading this, the kernel version used here is probably
+outdated. You will want to update it to whichever kernel version you're going to
+use.
+ +```sh +emerge \ + busybox \ + dracut \ + efibootmgr \ + gentoo-kernel \ + intel-microcode \ + linux-firmware + +emerge sys-fs/zfs-kmod sys-fs/zfs +emerge --config gentoo-kernel + +rc-update add zfs-import boot +rc-update add zfs-mount boot +rc-update add zfs-share default +rc-update add zfs-zed default + +zgenhostid + +cp /boot/vmlinuz-5.15.59-gentoo-dist /efi/efi/gentoo/vmlinuz-5.15.59-gentoo-dist.efi +cp /boot/initramfs-5.15.59-gentoo-dist /efi/efi/gentoo/initramfs-5.15.59-gentoo-dist.img + +efibootmgr \ + --disk /dev/disk/by-id/nvme-eui.36483331545090280025385800000001 \ + --part 1 \ + --create \ + --label "Gentoo ZFS 5.15.59" \ + --loader 'efi\gentoo\vmlinuz-5.15.59-gentoo-dist.efi' \ + --unicode \ + 'dozfs root=ZFS=ivdea0/rootfs/gentoo ro initrd=\efi\gentoo\initramfs-5.15.59-gentoo-dist.img encrypted' +``` + +### Root password + +Set the root password using `passwd`. This would also be a good time to add any +other users you want to use, and configure them with the correct permissions and +groups. + +## Misc + +If you have any other software requirements, such as wireless network management +or privilege escalation utilities, this is the most appropriate time to install +and configure them. + +## Reboot + +Now you can reboot into the system, and be done with this guide. If anything +isn't working properly, return to the "Recovery" step and fix any outstanding +issues. 
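Since the kernel version appears in several of the `cp` and `efibootmgr` arguments in the kernel section, it is easy to bump one occurrence and miss another. A tiny helper of my own making (not part of the official steps) keeps the version in one place; the `efi/gentoo` layout matches the guide above:

```shell
#!/bin/sh
# stage_kernel VERSION SRC ESP
# Copy the kernel and initramfs for VERSION from SRC (normally /boot) onto
# the ESP, using the efi/gentoo layout from this guide. Keeping the version
# in a single variable avoids half-updated copies on a kernel upgrade.
stage_kernel() {
    kver=$1
    src=$2
    esp=$3

    mkdir -p "$esp/efi/gentoo"
    cp "$src/vmlinuz-$kver"   "$esp/efi/gentoo/vmlinuz-$kver.efi"
    cp "$src/initramfs-$kver" "$esp/efi/gentoo/initramfs-$kver.img"
}

# e.g.: stage_kernel 5.15.59-gentoo-dist /boot /efi
```

After staging a new version you would still create a matching `efibootmgr` entry, as shown earlier.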
diff --git a/content/posts/2022/_index.md b/content/posts/2022/_index.md
new file mode 100644
index 0000000..3bf91c1
--- /dev/null
+++ b/content/posts/2022/_index.md
@@ -0,0 +1,3 @@
+---
+title: 2022
+---
diff --git a/content/posts/2023/2023-02-23-the-woes-of-awsvpnclient.md b/content/posts/2023/2023-02-23-the-woes-of-awsvpnclient.md
new file mode 100644
index 0000000..a852793
--- /dev/null
+++ b/content/posts/2023/2023-02-23-the-woes-of-awsvpnclient.md
@@ -0,0 +1,91 @@
+---
+date: 2023-02-23
+title: The Woes of AWSVPNClient
+tags:
+- Amazon
+- AWS
+- AWSVPNClient
+---
+
+For my current `$dayjob` I am required to start using the AWS VPN Client. This
+is not a problem per se; however, this piece of software has given me some
+particular headaches. In this post, I want to air some frustrations that it has
+brought me in the past two days, trying to get this software working properly
+on Debian.
+
+## GNU+Linux Support
+
+The AWS VPN Client is now officially available for GNU+Linux users. Not all of
+them, sadly; they specifically support Ubuntu 18.04. I find it important to
+note that this is 2 LTS versions behind the current Ubuntu version, 22.04.
+Apart from that, supporting only Ubuntu is rather limiting. Amazon isn't a
+small company, and they should be able to support various distributions.
+
+In general I would recommend supporting the upstream distribution, which in
+this case would be Debian. This would ensure that it becomes available on
+Ubuntu by virtue of Ubuntu being Debian-based.
+
+That said, Ubuntu-only packages wouldn't be a huge problem if not for the next
+issue I have with this software...
+
+## Proprietary Software
+
+The code for this application is private, and Amazon has no intention of
+changing this. There's nothing very special about the application; it's just a
+proprietary wrapper around OpenVPN, so I find it hard to believe that they're
+trying to "protect" anything sensitive.
It feels like a simple +move to instill the idea that you're highly dependent on them. + +If they _were_ to make this software free (as in freedom), packaging could be +done by package maintainers, or really just anyone who feels like doing it. +This would remove a burden on Amazon, and ensure better availability for all +potential users. + +Additionally, it would make debugging issues much easier. Because... + +## Logging + +The logging the application does is pathetic. There's a lot of duplicated logs +that are spammed hundreds of times per second. Tailing your logs can also be +more annoying than it needs to be, since the client rotates which file it logs +to every 1048629 bytes. + +I currently have 30 log files, generated by two sessions. In these log files, +the line `[INF] Begin receive init again` appears 509114 times. Over _half a +million_ times. The total number of log lines in all these log files is 510394, +meaning only 1280 lines are something different. + +Of those 1280 lines, the logs themselves aren't much better. 
I apparently had +to install `systemd-resolved` in order to fix the following error: + +```txt +2023-02-23 10:02:50.870 +01:00 [DBG] CM received: >LOG:1677142970,F,WARNING: Failed running command (--up/--down): external program exited with error status: 1 +>FATAL:WARNING: Failed running command (--up/--down): external program exited with error status: 1 + +2023-02-23 10:02:50.870 +01:00 [DBG] CM processsing: >LOG:1677142970,F,WARNING: Failed running command (--up/--down): external program exited with error status: 1 +2023-02-23 10:02:50.870 +01:00 [DBG] CM processsing: >FATAL:WARNING: Failed running command (--up/--down): external program exited with error status: 1 +2023-02-23 10:02:50.870 +01:00 [DBG] Fatal exception occured +2023-02-23 10:02:50.870 +01:00 [DBG] Stopping openvpn process +2023-02-23 10:02:50.870 +01:00 [DBG] Sending SIGTERM to gracefully shut down the OpenVPN process +2023-02-23 10:02:50.871 +01:00 [DBG] Invoke Error +2023-02-23 10:02:50.871 +01:00 [DBG] DeDupeProcessDiedSignals: OpenVPN process encountered a fatal error and died. Try connecting again. +``` + +It is not particularly clear this fails due to not having `systemd-resolved` +installed and running. The `.deb` provided by Amazon does not even depend on +`systemd-resolved`! + +Another gripe I've had with the logs is their location. It saves these in +`~/.config/AWSVPNClient/logs`. It may seem weird since this path contains a +directory named `.config`, and indeed, this is not a great place to store logs. +The [XDG Base Directory +Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html) +specifies `$XDG_STATE_HOME`, with one explicit example for it being logs. +However, for this to make sense, the application needs to respect the `XDG_*` +values to begin with, which it currently doesn't. 
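For what it's worth, the duplicate counts quoted earlier come straight from a standard shell pipeline; tallying identical lines across the log files makes the spam obvious. The directory and sample lines below are fabricated for illustration, but the same pipeline works on the real log directory:

```shell
#!/bin/sh
# Tally identical lines across a set of log files; the most-repeated lines
# float to the top. The directory and sample contents here are fake, but
# the same pipeline works on e.g. ~/.config/AWSVPNClient/logs/*.log.
logdir=$(mktemp -d)

for i in 1 2 3 4 5; do
    echo '[INF] Begin receive init again'
done > "$logdir/aws_vpn_client.log"
echo '[DBG] Stopping openvpn process' >> "$logdir/aws_vpn_client.log"

# Count occurrences of each distinct line, most frequent first
sort "$logdir"/*.log | uniq -c | sort -rn
```

On the real logs, the top entry is the half-million `Begin receive init again` lines mentioned above.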
+
+## All in all
+
+This software is pretty bad, but if it were free software, at least the users
+could improve it to suck less, and easily introduce support for various
+additional platforms. Instead, we're just stuck with a piece of bad software.
diff --git a/content/posts/2023/2023-03-08-using-laminar-for-selfhosted-ci.md b/content/posts/2023/2023-03-08-using-laminar-for-selfhosted-ci.md
new file mode 100644
index 0000000..7158ed1
--- /dev/null
+++ b/content/posts/2023/2023-03-08-using-laminar-for-selfhosted-ci.md
@@ -0,0 +1,64 @@
+---
+date: 2023-03-08
+title: Using Laminar for Self-hosted CI
+tags:
+- Bash
+- CI
+- Git
+- GNU+Linux
+---
+
+I've hosted my [own git repositories](https://git.tyil.nl) for quite a while,
+but I hadn't found a simple self-hosted CI solution yet. I've tried several,
+and found them to be a bit too cumbersome to set up and actually put to use. The
+majority requires you to host a full "git forge", such as GitLab or Gitea, in
+order to use their webhook functionality to trigger a CI build. This didn't
+seem worth the effort to me, so I kept looking for an alternative that worked
+well for me.
+
+I think I've finally found one in [Laminar](https://laminar.ohwg.net/), after a
+suggestion from a friend on the Fediverse. I do wonder how I could've spent so
+much time searching without ever finding this solution!
+
+Laminar itself was easy to install from source, but another person chimed in to
+let me know they had already made an `ebuild` for it, available in their
+overlay, making it even easier for me to try out. A single `emerge laminar`,
+and a couple seconds of building, and I was ready to go.
+
+Configuration of jobs is done through scripts in whichever language you prefer,
+giving you quite a bit of power. The documentation seems to mostly use Bash,
+and that seemed to be a logical choice for me too, so that's what I've been
+playing with as well.
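A job itself is nothing more than an executable script named after the job; with a default install these live at `/var/lib/laminar/cfg/jobs/<name>.run` per Laminar's documentation. A minimal sketch (the repository URL and the `shellcheck` step are examples of mine, not canonical):

```shell
#!/usr/bin/env bash
# Example Laminar job script, e.g. /var/lib/laminar/cfg/jobs/bashtard.run
# Laminar executes this in a fresh run directory for every build; a
# non-zero exit status marks the build as failed.
set -euo pipefail

git clone --depth=1 https://git.tyil.nl/bashtard repo
shellcheck repo/*.bash
```

Anything the script prints ends up in the build log shown in Laminar's web interface.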
Running jobs itself is as easy as
+`laminarc queue <name>`. It can't be much simpler, and this CLI interface makes
+it very easy to start a new job from a git `post-receive` hook. I wrote one
+which also shows the URL of the job's logs whenever I push new commits to [the
+Bashtard repository](https://git.tyil.nl/bashtard/about/).
+
+{{<highlight bash>}}
+while read old new ref
+do
+    laminarc queue bashtard \
+        "GIT_BRANCH=$ref" \
+        "GIT_COMMIT=$new" \
+        | awk -F: '{ print "https://ci.tyil.nl/jobs/"$1"/"$2 }'
+done
+{{</highlight>}}
+
+Using this, I can verify a job started, and immediately go to the page that
+shows the logs. I plan to use Laminar's post-job script to leverage `ntfy` to
+send me a notification on failed builds.
+
+Since all the worthwhile configuration for Laminar is just plain text, it is
+also very easy to manage in your preferred configuration management system,
+which is also something I plan to do in the near future.
+
+One slight annoyance I have so far is that I can't use (sub)directories for all
+the job scripts. Since I don't have many yet, this isn't a real problem, but it
+could pose a minor issue in the far future once I've written more job scripts.
+
+Given that that's the only "issue" I've found thus far, after a couple days of
+playing with it, I'd highly recommend taking a look at it if you want to set up
+a CI system for your self-hosted git repositories!
diff --git a/content/posts/2023/2023-03-26-finally-templating-bashtard.md b/content/posts/2023/2023-03-26-finally-templating-bashtard.md
new file mode 100644
index 0000000..b80270c
--- /dev/null
+++ b/content/posts/2023/2023-03-26-finally-templating-bashtard.md
@@ -0,0 +1,86 @@
+---
+date: 2023-03-29
+title: Finally, Templating in Bashtard!
+tags:
+- Bash
+- Bashtard
+- FreeBSD
+- GNU+Linux
+---
+
+In the past year, I've written Bashtard, a simple configuration system written
+in Bash to minimize the required dependencies, and to have a better system to
+handle different distributions/OSs in your cluster. Especially in the past two
+months I've done quite a bit of work on it. I've worked out how to do reusable
+playbooks, generate a usable Debian package from the Makefile, extend the
+supported platforms, and more. And now, I've finally found a library to improve
+templating functionality, [Bash Pure Template](https://github.com/husixu1/bpt).
+
+When I originally started Bashtard I had looked around for nice and simple
+templating solutions that I could use. Sadly, pretty much all the available
+results required me to add dependencies, or couldn't really do more than what I
+did using `sed` and `awk`.
+
+For a long time, I had accepted that the kind of system that I wanted didn't
+exist, and I wasn't interested in making it myself at the time. Last night,
+however, I decided to just give it a quick search to see if anything had
+changed, and BPT popped up somewhere in my results. Having a quick look through
+the documentation made me very interested; it seemed to have all the features I
+desired, while still sticking to utilities I've already accepted for Bashtard.
+
+With one small exception: `md5sum`. This utility is not available on the FreeBSD
+systems I maintain. On FreeBSD, this tool is called `md5`, and has different
+options it can use. On the bright side, both `md5sum` and `md5` accept the
+content to be hashed on `STDIN`, and will write the hash to `STDOUT`.
+Additionally, Bashtard already contains logic to deduce what kind of system it
+is running on.
+
+And so I decided it's worth a try. There are only 5 references to `md5sum`, and
+they all happen in the same function, `bpt.fingerprint`. I've added an extra
+variable, `util`, and a `case...esac` to set this variable.
+ +```bash +local util + +case "${BASHTARD_PLATFORM[key]}" in + freebsd) util=md5 ;; + linux-*) util=md5sum ;; + *) + debug "bpt/fingerprint" "Falling back to md5sum for hashing" + util=md5sum + ;; +esac +``` + +After that, just replace all the `md5sum` invocations with `"$util"`. And a +quick test later, it seems to function just fine. Implementing BPT as a library +was incredibly straightforward too. + +```bash +. "$BASHTARD_LIBDIR/vendor/bpt.bash" + +file_template_bpt() +{ + local file + + file="$1" ; shift + + eval "$* bpt.main ge \"$file\"" +} +``` + +The `eval` is a bit icky, but it saves me from polluting the environment +variables through various `export`s. + +Another small adjustment I've made to BPT is the shebang. Upstream uses +`#!/bin/bash`, but this is incorrect on some systems, including FreeBSD. It uses +`#!/usr/bin/env bash` in the Bashtard version. Additionally, the upstream +repository uses `.sh` as the file extension, which I've updated to be `.bash` to +more accurately reflect which shell it is used with. Upstream also uses a +4-space indent, which I've left as-is for now, since indentation is more of a +personal choice, even if that choice is wrong. Finally, I added 3 `shellcheck +disable` rules to make shellcheck happy. + +After some playbook testing on my own systems, I can say that BPT works pretty +well so far, and I'm very glad the author made it available as free software. +Thanks! diff --git a/content/posts/2023/2023-05-23-bashtard-2.0.0.md b/content/posts/2023/2023-05-23-bashtard-2.0.0.md new file mode 100644 index 0000000..654435f --- /dev/null +++ b/content/posts/2023/2023-05-23-bashtard-2.0.0.md @@ -0,0 +1,110 @@ +--- +date: 2023-05-23 +title: Bashtard v2.0.0 +tags: +- Bash +- Bashtard +- FreeBSD +- GNU+Linux +--- + +A little over a year ago I started on a project to create my own configuration +management system. 
I've been disappointed with existing alternatives, such as
+Ansible, on the grounds that they don't work all that well if you have a mix of
+different distros with different package managers, and sometimes even different
+paths to store data in.
+
+I've been having a lot of fun working on it, since the limitations I've put on
+it result in having to solve some problems in different ways than I would in a
+full-fledged programming language. These limitations also keep things pretty
+simple, and ensure that most of the features I have worked on need little to no
+additional effort to run on all the different systems I use for my computing
+needs.
+
+And now, a year later, I feel confident enough about a new release. There are
+some small backwards-incompatible changes, so a new major release version is the
+way to go. [Bashtard v2.0.0](https://www.tyil.nl/projects/bashtard/releases/2.0.0/)
+is now available. There are a few big things that I want to go into a little
+bit, but you can also find a full list of changes in the changelog included on
+the release page.
+
+# Templating
+
+After using the templating features I [wrote about]() last month, I've decided
+to _not_ include them in Bashtard. After using them in practice, I am not
+convinced they add enough value to warrant the size of the added code, and the
+hassle of two licenses instead of one. I am still very much open to the idea of
+a good base templating engine, but for now you can always install `jinja2` or
+something on the target machine, and call that manually. The new
+`playbook_path()` function should make it easy to generate the path to your
+playbook's files.
+
+# Additional `$BASHTARD_*` vars
+
+Apart from having a new key in `$BASHTARD_PLATFORM` called `init`, there's a
+completely new variable in this version: `$BASHTARD_PLAYBOOK_VARS`. Currently,
+it's only used to mark a given variable as required, but it can be extended in
+the future with other kinds of checks.
This allows a playbook to declare certain data as
+required for it to run, and to refuse to run if that data is not supplied,
+rather than having to check it manually when the playbook runs. This is mainly
+intended for playbooks you want to share, so that other people get reasonable
+feedback on what they _need_ to configure, versus what they _can_ configure.
+
+# Re-usable playbooks
+
+So let's talk about one of the more important updates to Bashtard, at least in
+my opinion. How playbooks are used has been altered slightly, to make re-using
+them a little easier. I consider the ability to easily share your playbooks
+with others, and to use other people's playbooks with minimal effort, a very
+important feature of any configuration management system. It greatly reduces
+the barrier to getting started, and encourages people to show off what they've
+made.
+
+The current implementation is built upon git submodules, and the
+`bashtard pull` command will take them into account. Perhaps I'll add an
+`import` subcommand in the future to abstract the git submodule effort away, as
+I know that many people find submodules difficult to work with. However, since
+`git` is already ingrained in Bashtard, this approach keeps dependencies low,
+and allows me to keep the complexity out of the Bash code.
+
+# data.d
+
+Having re-usable playbooks introduced the need for a place for data that is
+important to my setup, but completely useless to someone else's. For this, the
+`data.d` directory was added. You can store information there that should be
+preserved across sync runs on your machines, but that is not a good fit for
+the playbook itself. I personally use it for my
+[`vpn-tinc`](https://git.tyil.nl/bashtard/vpn-tinc/) playbook, to keep the
+host files in.
+
+Another use-case for this directory is without a playbook at all.
You can put a
+regular directory in it, and symlink to it from a host system to keep a given
+directory in sync across all your machines. In my case, I have an `etc-nixos`
+directory in my `data.d` directory. On my NixOS system I have a symlink from
+`/etc/nixos` to `/etc/bashtard/data.d/nixos`. If I ever continue with NixOS, I
+can have this on all systems, and share any `.nix` files across all machines.
+
+# Binary packages!
+
+Lastly, I've [written
+about](https://www.tyil.nl/post/2023/03/08/using-laminar-for-self-hosted-ci/)
+Laminar before. I'm still using it, and I'm still very happy with its
+simplicity. Since setting it up I've added jobs to verify my Bashtard code with
+`shellcheck`, and if it passes, it'll queue up additional jobs to create a
+`.tar.gz` distribution and a `.deb` distribution. I hope to expand this to
+also generate packages for Alpine, FreeBSD, and Archlinux.
+
+Additionally, I've recently set up an S3-compatible object store, which Laminar
+should push such artifacts to immediately. This will simplify new releases of
+any software, and offload this kind of storage to an actual remote server,
+rather than hosting `dist.tyil.nl` directly from my desktop.
+
+# Wrapping up
+
+All in all, I've been very happy with Bashtard so far, and I've been having a
+_lot_ of fun working on it. I hope to be able to continue working on it and
+making it even better than it is in this release.
+
+Thanks for reading, and perhaps even using Bashtard!
diff --git a/content/posts/2023/2023-07-13-getting-emoji-to-work-in-kde-on-debian.md b/content/posts/2023/2023-07-13-getting-emoji-to-work-in-kde-on-debian.md
new file mode 100644
index 0000000..a5b0980
--- /dev/null
+++ b/content/posts/2023/2023-07-13-getting-emoji-to-work-in-kde-on-debian.md
@@ -0,0 +1,138 @@
+---
+date: 2023-07-13
+title: Getting Emoji to Work in KDE on Debian
+tags:
+- Debian
+- GNU+Linux
+- KDE
+---
+
+This is going to be a relatively short and uninteresting post for most; it will
+just document how to get emoji to work in KDE.
+
+While emoji will work in most applications out of the box, they don't appear to
+work in Qt applications by default, including the notification panel. As I use
+my notifications for messages I get from my work chat, and I dislike seeing the
+squares, I set out to find the solution. I've had to string together a couple
+of sources of information to get to the correct setup, and this blog post
+intends to show just the useful bits. So here goes!
+
+You'll need to install an emoji font (in my case `fonts-noto-color-emoji`), add
+two configuration files for fontconfig, rebuild the fontconfig cache (with
+`fc-cache -f`), and most likely log out of and back into KDE. Installing the
+emoji font is probably the easy bit, and I hope it needs no further
+explanation. So let's get started on the first configuration file, which will
+enable the Noto emoji font to be used, and also force it to be used in favour
+of other emoji fonts if any application requests one of those specifically. I
+have it saved as `/etc/fonts/conf.d/75-noto-color-emoji.conf`.
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
+<fontconfig>
+  <!-- Add generic family. -->
+  <match target="pattern">
+    <test qual="any" name="family"><string>emoji</string></test>
+    <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit>
+  </match>
+
+  <!-- This adds Noto Color Emoji as a final fallback font for the default font families.
--> + <match target="pattern"> + <test name="family"><string>sans</string></test> + <edit name="family" mode="append"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test name="family"><string>serif</string></test> + <edit name="family" mode="append"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test name="family"><string>sans-serif</string></test> + <edit name="family" mode="append"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test name="family"><string>monospace</string></test> + <edit name="family" mode="append"><string>Noto Color Emoji</string></edit> + </match> + + <!-- Block Symbola from the list of fallback fonts. --> + <selectfont> + <rejectfont> + <pattern> + <patelt name="family"> + <string>Symbola</string> + </patelt> + </pattern> + </rejectfont> + </selectfont> + + <!-- Use Noto Color Emoji when other popular fonts are being specifically requested. --> + <match target="pattern"> + <test qual="any" name="family"><string>Apple Color Emoji</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test qual="any" name="family"><string>Segoe UI Emoji</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test qual="any" name="family"><string>Segoe UI Symbol</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test qual="any" name="family"><string>Android Emoji</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test qual="any" name="family"><string>Twitter Color Emoji</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + 
</match> + <match target="pattern"> + <test qual="any" name="family"><string>Twemoji</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test qual="any" name="family"><string>Twemoji Mozilla</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test qual="any" name="family"><string>TwemojiMozilla</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test qual="any" name="family"><string>EmojiTwo</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test qual="any" name="family"><string>Emoji Two</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test qual="any" name="family"><string>EmojiSymbols</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + </match> + <match target="pattern"> + <test qual="any" name="family"><string>Symbola</string></test> + <edit name="family" mode="assign" binding="same"><string>Noto Color Emoji</string></edit> + </match> +</fontconfig> +``` + +The second configuration file, saved as `/etc/fonts/conf.d/local.conf`, simply +adds the Noto emoji font as a fallback. This enables the use of it when an emoji +is going to be rendered. + +```xml +<?xml version='1.0'?> +<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'> +<fontconfig> + <match target="pattern"> + <edit name="family" mode="append"> + <string>Noto Color Emoji</string> + </edit> + </match> +</fontconfig> +``` + +And after this, a relog of your (graphical) session should be all that is needed +in order to make it work. 
You can easily test it with `notify-send`, or by trying
+to render some emoji in `konsole`.
diff --git a/content/posts/2023/2023-07-24-new-server-rack-mieshu.md b/content/posts/2023/2023-07-24-new-server-rack-mieshu.md
new file mode 100644
index 0000000..f024784
--- /dev/null
+++ b/content/posts/2023/2023-07-24-new-server-rack-mieshu.md
@@ -0,0 +1,89 @@
+---
+date: 2023-07-23
+title: "My New Server Rack: Mieshu"
+tags:
+- GNU+Linux
+- Gentoo
+- Systemd
+- Garage
+---
+
+After saving up for a long while and thinking about what I want in my new home,
+I have finally taken the leap and gotten myself a server rack for home use. It
+has a 15U capacity, which should be plenty to get started, but this same brand
+has larger racks too, in case I do want to upgrade it and keep the same style.
+
+That said, for now there are only two 4U units in it, one for (file) storage,
+and one for database purposes. I sadly don't have anything dedicated for
+workloads yet, so for now, both of these servers are intended to also run some
+light workloads. I haven't made my mind up yet on how to solve the workload
+issue. Now that I have a rack, I obviously want something rack-mountable, and
+I probably want it to run a Kubernetes cluster too.
+
+In this regard, I _could_ go for a set of [Raspberry Pi](https://www.raspberrypi.com/)
+units; there are [3U mounts that can hold up to 12 Raspberry Pi machines](https://www.uctronics.com/uctronics-19-inch-3u-rack-mount-for-raspberry-pi-4-with-8-mounting-plates.html),
+which would be a nice amount. However, I am not yet completely sold on full-ARM
+workloads, and I'm not entirely convinced of the power of Raspberry Pi units in
+general. I'd much rather standardize on another brand, [Odroid](https://www.hardkernel.com/),
+as they have more types of units available, and are not limited to just ARM. But
+since they're not the popular kid in class, there's very little off-the-shelf
+rack-mounting equipment for them.
I'll be thinking about this for just a bit more
+before making a decision.
+
+For now, though, I wanted to talk about the setup of the first server, Mieshu,
+which will be used as a storage server. Mieshu currently runs 8 HDDs, and 2
+NVMe drives. One of the NVMe drives is used for the rootfs, and the other is
+used for caching in certain applications. The HDDs themselves offer the data
+storage capacity.
+
+The HDDs currently consist of four 16TB drives, and four 8TB drives. The
+smaller disks come from my desktop, Edephas, which used to serve as data
+storage until Mieshu took over. All disks are configured into pairs, which
+themselves make mirrors. This means I have four mirror pools, two times 16TB,
+and two times 8TB, for a total of 48TB of storage. I'm currently using about
+half of this, and it should give me plenty of time before needing to increase
+the size again.
+
+I chose to use mirrors since they give a good chance of your data being
+recoverable on disk failure, and they allow me to buy disks in pairs, rather
+than in larger numbers. This hopefully keeps the cost of expansion within
+reasonable limits. The mirrors themselves are currently
+[ZFS](https://openzfs.org/wiki/Main_Page) pools, but I hope to be able to use
+[bcachefs](https://bcachefs.org/) very soon as well.
+
+Just a bunch of mirrors is rather inconvenient, however, so I'm also leveraging
+[MergerFS](https://github.com/trapexit/mergerfs) to combine all the mirrors
+into a single usable pool. This slightly odd setup was chosen over RAID-0 or
+RAID-Z* to lower the impact of disk failure. Even if two disks in the same
+mirror were to die at the same time, I wouldn't lose _all_ data, just the bits
+on that particular mirror. It would be very annoying, but it wouldn't be
+disastrous.
+
+Apart from generic mass storage, I also host S3 buckets for personal use.
This
+is where I upload CI artifacts to, and [my MissKey instance](https://fedi.tyil.nl/@tyil)
+uses it for storing objects as well. Future services such as [Mimir](https://grafana.com/oss/mimir/)
+will probably leverage S3 for storage as well. This is achieved through
+[Garage](https://garagehq.deuxfleurs.fr/). I've also tried [SeaweedFS](https://seaweedfs.github.io/),
+which is a very neat project on its own, but Garage is just simpler to
+configure, and allows a replicated setup with only two servers, whereas
+SeaweedFS demands an odd number of master servers.
+
+And lastly, Mieshu runs [K3s](https://k3s.io/) for its Kubernetes component. It
+is not serving anything yet, as the other server is supposed to become the
+database server, which is needed for most workloads. Once that is up and
+running, Mieshu will start hosting things such as
+[Grafana](https://grafana.com/oss/grafana) and [Loki](https://grafana.com/oss/loki/),
+monitoring stuff basically. Perhaps I'll move [Laminar](https://laminar.ohwg.net/)
+to this server as well, but I'm unsure if I will run that as a Kubernetes
+service.
+
+The server itself runs on Gentoo, as it still provides the most stable
+experience I can get out of any GNU+Linux distribution. I am, however, not
+using the default of OpenRC as the init system and service manager. For the
+first time, I'm running Gentoo with systemd. After several years, it appears
+to have become stable enough to trust with serious workloads, and with its
+increased use, some things have become simpler by just using systemd. I hope
+to get a better understanding of it, and learn to bend it to my will as
+needed, by simply using it on my own systems.
+
+I hope to have time to work on the other server sooner rather than later, so
+I can finish up the base of my new setup. Be on the lookout for the next post,
+where I'll go into detail on Nouki, the database server.
diff --git a/content/posts/2023/2023-08-05-new-server-rack-nouki.md b/content/posts/2023/2023-08-05-new-server-rack-nouki.md
new file mode 100644
index 0000000..fbbc326
--- /dev/null
+++ b/content/posts/2023/2023-08-05-new-server-rack-nouki.md
@@ -0,0 +1,88 @@
+---
+date: 2023-08-05
+title: "My New Server Rack: Nouki"
+tags:
+- GNU+Linux
+- Gentoo
+- PostgreSQL
+- Prometheus
+- Systemd
+- ZFS
+---
+
+After setting up [mieshu](/post/2023/07/23/my-new-server-rack-mieshu/), nouki is
+the next server to work on in my home rack. Nouki is intended to live as my main
+database server, mainly for PostgreSQL, but perhaps later on in life MySQL, if I
+ever want a service that doesn't support superior databases.
+
+The setup for nouki is much simpler in that regard; the base system is almost
+identical. This server has ZFS with 2 NVMe disks running in a mirror
+configuration. It is also a Gentoo-based system, again with systemd rather than
+OpenRC. The experience of systemd with mieshu was much less painful than I
+anticipated. It would seem that it has had time to mature, though I still
+dislike how it kills diversity in init/service managers on GNU+Linux.
+
+Both PostgreSQL and ZFS have received some tweaking to run more smoothly. I'm no
+DBA, so if you see anything silly in here, do let me know so I can improve my
+life.
+
+For ZFS, tweaking was rather minimal. I've made a separate dataset for
+PostgreSQL to use, with `recordsize=8K` set as an option. For PostgreSQL, I've
+altered a bit more. First and foremost, the `pg_hba.conf` was updated to allow
+access from machines in my tinc-based VPN.
+
+```conf
+host all all 10.57.0.0/16 scram-sha-256
+```
+
+The `postgresql.conf` file received the following treatment, based solely on the
+guidance provided by [PGTune](https://pgtune.leopard.in.ua/).
+
+```conf
+listen_addresses = '10.57.101.20'
+max_connections = 200
+shared_buffers = 8GB
+effective_cache_size = 24GB
+maintenance_work_mem = 2GB
+checkpoint_completion_target = 0.9
+wal_buffers = 16MB
+default_statistics_target = 100
+random_page_cost = 1.1
+effective_io_concurrency = 200
+work_mem = 5242kB
+min_wal_size = 1GB
+max_wal_size = 4GB
+max_worker_processes = 12
+max_parallel_workers_per_gather = 4
+max_parallel_workers = 12
+max_parallel_maintenance_workers = 4
+```
+
+With this, PostgreSQL seems to perform very well on this machine; applications
+using it are noticeably faster. Sadly I have no timings from when it all ran on
+my desktop, so I cannot make an exact statement on how much faster everything
+has become.
+
+Additionally, I wanted to start gathering metrics of my machines and services,
+so I can start thinking about dashboards and alerts. I've chosen to use the
+current industry standard, Prometheus, for this. Since I consider Prometheus to
+be a database for metrics, it has been deployed on my database server as well.
+
+Prometheus is currently set to scrape metrics from the `node_exporter` and
+`postgresql_exporter`, and seems to work fine. I expect I may need to tweak in
+the future how long I want metrics to be available, since I've seen Prometheus
+use quite a large amount of memory when storing many metrics for a very long
+time.
+
+To actually see the metrics and have alerts, I currently intend to go with
+Grafana. I already have ntfy running, and it appears relatively simple to mold
+Grafana alerts into ntfy notifications. To do this properly, I will require
+some machines to handle regular workloads. Most likely these will be Intel
+NUCs, or similar machines, as they draw very little power for reasonable
+performance. Raspberry Pi units would be cheaper, but also seem vastly less
+powerful, and I'd need to ensure all my intended workloads can run on ARM,
+which could become a nuisance very quickly.
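For reference, the scrape setup described above can be sketched as a minimal `prometheus.yml`; the target hostname is invented, and the ports are simply the well-known defaults of the two exporters:

```yaml
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["nouki.vpn:9100"]   # node_exporter's default port
  - job_name: postgresql
    static_configs:
      - targets: ["nouki.vpn:9187"]   # postgres_exporter's default port
```

Retention can later be capped with Prometheus' `--storage.tsdb.retention.time` flag, to keep memory and disk usage in check.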
+
+As I already have an Intel NUC to play with, that's what I'll be doing for the
+coming few days, to see if this can work for my desires. Perhaps I can try out
+a highly available cluster setup of K3s in the near future!
diff --git a/content/posts/2023/2023-08-29-releasing-raku-modules-with-fez.md b/content/posts/2023/2023-08-29-releasing-raku-modules-with-fez.md
new file mode 100644
index 0000000..edc1914
--- /dev/null
+++ b/content/posts/2023/2023-08-29-releasing-raku-modules-with-fez.md
@@ -0,0 +1,74 @@
+---
+date: 2023-08-29
+title: "Releasing Raku modules with fez"
+tags:
+- Argo
+- Raku
+---
+
+Last week I got a message on Matrix, asking me to update one of my
+[Raku](https://raku.org/) modules,
+[`Config::Parser::TOML`](https://git.tyil.nl/raku/config-parser-toml/). One of
+the dependencies had been updated, and the old one is no longer available
+through the module installer `zef`. It's not that big a change, and there are
+tests available, so it's a reasonably small fix by itself.
+
+Recently I've set up [Argo Workflows](https://argoproj.github.io/workflows/) for
+my CI/CD desires, and I found this a good and simple Raku project to try and
+incorporate into a workflow. Since I had some additional quality checks ready to
+use in my workflow, this has resulted in [REUSE](https://reuse.software/)
+compliance for this Raku module, in addition to the regular `prove` tests
+already available in the project. Additionally, the de facto default module
+authoring tool `fez` brings a few new checks, which have also been incorporated.
+
+While all that is good, there were some annoyances I encountered while
+configuring this. Notably, I've found `fez` to be a chore to work with when it
+comes to non-interactive use. All CI/CD jobs run in their own Kubernetes pods,
+and _should_ not require any interaction from me during these runs. I am
+writing this blog post mainly to document these annoyances, hoping that `fez`
+can be improved in the future.
+
+Let's start with the first issue I encountered while setting up the workflow:
+`zef install fez` fails by default. `zef` gives the advice to `--exclude` one
+of the dependencies, and going by the issues reported on their GitHub
+repository, this seems to be the accepted workaround. However, I'd argue that
+this workaround should not be needed to begin with, especially seeing as `fez`
+works fine without it, and I have absolutely no clue what this `z` is or how I
+can supply it. Either drop this dependency, or document its use and its
+upstream, so people can package it.
+
+The second issue I encountered was with the `login` functionality of `fez`.
+There seems to be no way to handle this non-interactively. The way around this
+for me has become to use `expect` scripts, but this is obviously not very
+pretty, and will break whenever the interactive interface of `fez` changes. A
+good means of non-interactive authentication would be great to have. I've
+considered just mounting `fez`'s config/cache into the containers, but the
+documentation warns that tokens aren't permanent to begin with.
+
+Next up there's the actual `upload` command. I'm running it twice in my
+workflow, once with `--dry-run` and once with `--force`. The first one is done
+as a preliminary quality check, to see if there are any obvious issues that
+ought to be fixed beforehand. I noticed on a subsequent run (the one with
+`--force`) that the _dry_ run isn't all that dry. It leaves an `sdist`
+directory, which in turn will get included in the next step. There's a flag to
+create this `sdist` directory, but no flag to do the inverse. My solution is
+to end this step with `rm -fr -- sdist` to clean it up again.
+
+And lastly, when all quality assurance checks have passed, the
+`fez upload --force` command is run on the working directory. I'd rather not
+force anything here, but the alternative is that another interactive question
+pops up and the job hangs forever.
I don't know all the possible prompts `fez` can generate, and
+for this one I didn't even bother to try and look that up. Rather than a
+`--force` to practically say "yes" to everything, I'd prefer an option to say
+"no" to everything, failing the pipeline immediately.
+
+Another pet-peeve of mine is that `fez` seemingly doesn't use exit codes. No
+matter what happens, even something quite important such as `login` with
+incorrect credentials, it _always_ returns `0` as its exit code. This should
+obviously be fixed sooner rather than later: it is quite simple to do, and
+checking the exit code is how _many_ systems deduce that something went wrong.
+
+Uploads of module updates are currently working, which is good, but I feel like
+a lot of the workaround code I had to write should not be necessary. If `fez`
+can fix these issues, it will be much more of a breeze to use, which in turn
+hopefully encourages more automated testing and distribution of Raku modules.
+This can be a great boon for the module ecosystem and overall community.
diff --git a/content/posts/2023/_index.md b/content/posts/2023/_index.md
new file mode 100644
index 0000000..adf7d34
--- /dev/null
+++ b/content/posts/2023/_index.md
@@ -0,0 +1,3 @@
+---
+title: 2023
+---
diff --git a/content/posts/2024/2024-04-02-bashtard-2.1.0.md b/content/posts/2024/2024-04-02-bashtard-2.1.0.md
new file mode 100644
index 0000000..9485dd5
--- /dev/null
+++ b/content/posts/2024/2024-04-02-bashtard-2.1.0.md
@@ -0,0 +1,57 @@
+---
+date: 2024-04-02
+title: Bashtard v2.1.0
+tags:
+- Bash
+- Bashtard
+- FreeBSD
+- GNU+Linux
+---
+It's been about another year since I made a post about Bashtard, its [2.0.0
+release](https://www.tyil.nl/post/2023/05/23/bashtard-v2.0.0/). Today marks the
+[2.1.0 release](https://www.tyil.nl/projects/bashtard/releases/2.1.0), meaning
+I've gone almost a year without breaking backwards compatibility. To me, this
+is a very good sign.
+
+The release today isn't as big as the 2.0.0 release was, mostly because most
+of the functionality I want to have is already present. Some new features were
+added over the year, though. The most important one is _variable references_,
+which allow re-use of one variable's value in another variable. It's quite
+simplistic in how it works, due to the nature of Bashtard being written in
+Bash, and trying to keep things rather simple and lightweight. It does,
+however, get the job done for most use-cases I had for it.
+
+Another feature added in this release is the `zap` subcommand. It is a
+convenience command more than anything, simply removing an existing registry
+entry without going through `playbook_del()`. The use case for this is mostly
+testing new playbooks. I found that while writing a playbook, I'd often remove
+the playbook's entry from the registry in order to re-run the `playbook_add()`
+function and see if it works exactly as desired, and I wanted this to be more
+convenient. In theory, this new `zap` subcommand is also useful for dealing
+with entries of broken or removed playbooks.
+
+For a future release I'd like to add `import` and `export` subcommands, to
+make it easier to handle external playbooks. The `import` subcommand is
+intended to take a URL, and optionally a name, to simply add an existing
+repository as a submodule in the `playbooks.d` directory.
+
+The `export` subcommand should take a name as it exists in the `playbooks.d`
+directory and turn it _into_ a git submodule, so it can be pushed to its own
+repository. The intent is that you can just make a regular playbook for use
+within Bashtard, and if you decide "hey, this could actually be useful for
+others in its current state", you can simply export it, and push it to a
+repository for other people to pull from.
+
+Additionally, I would like to remove the `backup` subcommand from Bashtard, as
+I feel it adds a level of bloat and scope-creep which simply should not be
+there.
+While this would result in a 3.0.0 release of Bashtard, keeping the subcommand
+just for backwards compatibility seems a little silly to me.
+
+I'm on the fence about the `ssh` subcommand, as it aligns more closely with
+Bashtard being used to manage several systems, and ssh can be very useful to
+check something across your entire network. Considering the recent [xz
+backdoor affecting sshd](https://www.cve.org/CVERecord?id=CVE-2024-3094), it
+was quite easy to run a single command across all my machines and get an
+overview of which machines were affected.
+
+Let's see what the rest of this year brings in Bashtard changes!
diff --git a/content/posts/2024/_index.md b/content/posts/2024/_index.md
new file mode 100644
index 0000000..5321fd3
--- /dev/null
+++ b/content/posts/2024/_index.md
@@ -0,0 +1,3 @@
+---
+title: 2024
+---
diff --git a/content/posts/_index.md b/content/posts/_index.md
new file mode 100644
index 0000000..d6075d8
--- /dev/null
+++ b/content/posts/_index.md
@@ -0,0 +1,16 @@
+---
+title: Blog
+---
+
+Over time, I've written a number of blog posts. Some to voice my opinion, some
+to help people out with a tutorial, or just because I found something fun or
+interesting to talk about. You can find these blog posts in this section of my
+site. If you have any comments, feel free to send an email to
+[`~tyil/public-inbox@lists.sr.ht`](mailto:~tyil/public-inbox@lists.sr.ht), or
+reach out through any other method listed on the home page.
+
+{{< admonition title="note" >}}
+**These articles reflect my opinion, and only mine**. Please refrain from
+accusing other people of holding my opinion for simply being referenced in my
+articles.
+{{< / admonition >}}
diff --git a/content/projects/_index.md b/content/projects/_index.md
new file mode 100644
index 0000000..6907ad3
--- /dev/null
+++ b/content/projects/_index.md
@@ -0,0 +1,7 @@
+---
+title: Projects
+---
+
+This page lists all the projects I actively work on, with some information
+about them, their releases, any packages, and documentation to get people
+started on using them.
diff --git a/content/projects/bashtard/_index.md b/content/projects/bashtard/_index.md
new file mode 100644
index 0000000..9e31798
--- /dev/null
+++ b/content/projects/bashtard/_index.md
@@ -0,0 +1,17 @@
+---
+title: Bashtard
+repository: https://git.tyil.nl/bashtard
+languages:
+- Bash
+---
+
+Bashtard is a configuration management system built on the idea of simplicity
+for the user. It lets you write reasonably simple Bash scripts to configure
+your systems, while providing just enough abstractions to make it easy to
+work with various base systems.
+
+It is similar in purpose to other configuration management tools, such as
+Ansible and Puppet; however, Bashtard tries to keep dependencies to a minimum
+while still providing some abstractions to make the process easier. This
+allows Bashtard to run in more constrained environments, with the abstractions
+allowing it to manage a varied array of systems in a single network.
diff --git a/content/projects/bashtard/releases/1.0.0.md b/content/projects/bashtard/releases/1.0.0.md
new file mode 100644
index 0000000..ef54893
--- /dev/null
+++ b/content/projects/bashtard/releases/1.0.0.md
@@ -0,0 +1,9 @@
+---
+title: Bashtard v1.0.0
+date: 2022-05-06
+type: project-release
+packages:
+  bashtard-1.0.0.tar.gz: https://dist.tyil.nl/packages/bashtard/bashtard-1.0.0.tar.gz
+---
+
+This is the initial release of Bashtard.
diff --git a/content/projects/bashtard/releases/2.0.0.md b/content/projects/bashtard/releases/2.0.0.md
new file mode 100644
index 0000000..29623a7
--- /dev/null
+++ b/content/projects/bashtard/releases/2.0.0.md
@@ -0,0 +1,68 @@
+---
+title: Bashtard v2.0.0
+date: 2023-05-22
+type: project-release
+packages:
+  bashtard-2.0.0.deb: https://dist.tyil.nl/bashtard/bashtard-2.0.0/bashtard-2.0.0.deb
+  bashtard-2.0.0.tar.gz: https://dist.tyil.nl/bashtard/bashtard-2.0.0/bashtard-2.0.0.tar.gz
+---
+
+### Added
+
+- The `var` subcommand is now referenced in `usage()`.
+- A `pkg` subcommand has been added, to allow for direct interaction with the
+  `pkg_*()` utilities provided by Bashtard.
+- `config_subkeys()` and `config_subkeys_for()` have been added, to look up
+  subkeys defined in config files. These can help when you want to use a list
+  somewhere in your configuration.
+- A `backup` subcommand has been added. This backup system uses borg, which
+  must be installed, but should be generic enough to be usable by most people
+  out of the box.
+- The `Makefile` has been extended with targets for creating packages for
+  GNU+Linux distributions.
+- The `$BASHTARD_PLATFORM` variable now contains an additional entry, `init`,
+  to allow for handling different init systems on GNU+Linux in a cleaner
+  fashion.
+- A `file_hash` utility function has been added. It currently uses `md5`, but
+  is written in such a fashion that this can easily be updated in the future.
+  Its intent is to encapsulate differences between naming and usage of hashing
+  utilities found on different systems.
+- A `dir_hash` utility function has been added, which will give you a hash
+  based on the file contents of a directory. This function will find files
+  recursively, calculate a hash for each of them, and then calculate a hash
+  based on the total result. The intended goal is to allow running it before
+  and after templating some files, to deduce whether something actually
+  changed.
+- A `diff` subcommand has been added to show all non-committed changes. It is a + convenience wrapper to avoid having to change directory and run `git diff` to + get an overview of all pending changes. +- A `pull` subcommand has been added to only pull the latest changes into the + `$BASHTARD_ETCDIR`, without running `sync` on all the playbooks. +- A new global variable, `$BASHTARD_PLAYBOOK_VARS` has been added. Currently, + its only purpose is to check for "required" variables to be used in the + playbook. Before an `add` or `sync`, any variables declared to be `required` + in the `$BASHTARD_PLAYBOOK_VARS` array will be checked to be non-empty. If any + are empty, an error will be thrown and the playbook will not be ran. +- A new directory has been added, `data.d`, for data that should be shared + between playbook runs. This new directory is intended to create a clearer + seperation between a playbook and a user's specific data used by the playbook, + which in turn should make re-using playbooks easier. +- A convenience function has been introduced, `playbook_path()`, which can give + you the absolute path to the playbook's base or data directory. +- A `top` subcommand has been added to give some generic information of all + nodes known to Bashtard. It uses information from the `sysinfo` subcommand, + which it will pull in through an `ssh` invocation. + +### Changed + +- The `ssh` subcommand's configuration has been nested under `bashtard`, e.g. + `ssh.host` is now `bashtard.ssh.host`. It should also be correctly using this + value for establishing the SSH connection. +- `svc_enable()` now checks for the `rc.d` file to exist before running `grep` + on it. +- `pkg_*()` functions no longer _require_ a `pkg.*` value to be defined. If one + is not set explicitly, a warning will be generated, but the original name + passed to the `pkg_*()` function will be used by the host's package manager. +- `datetime()` now always passes `-u` on to `date`. 
+- All manpages now include a `NAME` section. +- The `sync` subcomman will now `stash` any changes before it attempts to + `pull`. Afterwards, `stash pop` will be ran to apply the last `stash`ed + changes again. diff --git a/content/projects/bashtard/releases/2.0.1.md b/content/projects/bashtard/releases/2.0.1.md new file mode 100644 index 0000000..e8bf49c --- /dev/null +++ b/content/projects/bashtard/releases/2.0.1.md @@ -0,0 +1,19 @@ +--- +title: Bashtard v2.0.1 +date: 2023-09-25 +type: project-release +packages: + bashtard-2.0.1.deb: https://dist.tyil.nl/bashtard/bashtard-2.0.1/bashtard-2.0.1.deb + bashtard-2.0.1.tar.gz: https://dist.tyil.nl/bashtard/bashtard-2.0.1/bashtard-2.0.1.tar.gz +--- + +### Added + +- A new `make` target has been added to build a .tar.gz distributable. + +### Changed + +- The `svc_` utils should now check which init service you're using when using a + linux system. The supported options are still only openrc and systemd. +- The `pull` subcommand should now properly return with exit-code 0 if no + problem were encountered. diff --git a/content/projects/bashtard/releases/2.0.2.md b/content/projects/bashtard/releases/2.0.2.md new file mode 100644 index 0000000..5acaaa9 --- /dev/null +++ b/content/projects/bashtard/releases/2.0.2.md @@ -0,0 +1,13 @@ +--- +title: Bashtard v2.0.2 +date: 2024-02-28 +type: project-release +packages: + bashtard-2.0.2.deb: https://dist.tyil.nl/bashtard/bashtard-2.0.2/bashtard-2.0.2.deb + bashtard-2.0.2.tar.gz: https://dist.tyil.nl/bashtard/bashtard-2.0.2/bashtard-2.0.2.tar.gz +--- + +### Fixed + +- Configuration values with `=` in their value part should now work properly + with `file_template`. Keys with `=` in them are still *not supported*. 
diff --git a/content/projects/bashtard/releases/2.1.0.md b/content/projects/bashtard/releases/2.1.0.md new file mode 100644 index 0000000..24caa53 --- /dev/null +++ b/content/projects/bashtard/releases/2.1.0.md @@ -0,0 +1,29 @@ +--- +title: Bashtard v2.1.0 +date: 2024-04-02 +type: project-release +packages: + bashtard-2.1.0.deb: https://dist.tyil.nl/bashtard/bashtard-2.1.0/bashtard-2.1.0.deb + bashtard-2.1.0.tar.gz: https://dist.tyil.nl/bashtard/bashtard-2.1.0/bashtard-2.1.0.tar.gz +--- + +### Added + +- Configuration variables can be assigned values of other variables with the + `&=` assignment. This allows a single value to be re-used dynamically, rather + than having to explicitly set the same value several times. +- A `zap` command has been added to remove a playbook from the registry without + running the playbook's `playbook_del()` function. This is intended to easily + remove registry entries when a playbook itself has been deleted or is + otherwise broken in a way that the regular `del` subcommand cannot fix. + +### Changed + +- The `description.txt` is now allowed to be used without the `.txt` suffix. + Usage with the `.txt` suffix continues to be supported as well. + +### Fixed + +- Passing an empty string as default value to `config` should now properly + return an empty string without a warning about the configuration key not + existing. diff --git a/content/projects/bashtard/releases/_index.md b/content/projects/bashtard/releases/_index.md new file mode 100644 index 0000000..c98ddda --- /dev/null +++ b/content/projects/bashtard/releases/_index.md @@ -0,0 +1,4 @@ +--- +title: Bashtard +type: project-release +--- diff --git a/content/recipes/_index.md b/content/recipes/_index.md new file mode 100644 index 0000000..337e63b --- /dev/null +++ b/content/recipes/_index.md @@ -0,0 +1,9 @@ +--- +title: Cookbook +--- + +People have often asked me to share recipes for various meals and snacks I've +served over time, so I've started writing down my recipes. 
This section of my +blog covers these recipes in my own personal cookbook. If you want to stay up to +date with the latest additions, subscribe to the [RSS +feed](/recipes/index.xml). diff --git a/content/recipes/basics/bechamel.md b/content/recipes/basics/bechamel.md new file mode 100644 index 0000000..1839a6e --- /dev/null +++ b/content/recipes/basics/bechamel.md @@ -0,0 +1,34 @@ +--- +title: Bechamel +date: 2022-04-18 +preptime: 0 +cooktime: 10 +serves: 1 +tags: +- basics + +ingredients: +- label: Butter + amount: 50 + unit: grams +- label: Flour + amount: 60 + unit: grams +- label: Milk + amount: .25 + unit: liter + +stages: +- label: Cooking + steps: + - Put the pot on low heat. + - Add the butter to the pot and let it melt completely. + - Once completely melted, immediately add the flour. + - Keep on low heat, and continuously stir the mixture to avoid burning. + - After 3 - 5 minutes, the mixture will smell slightly nutty, indicating the + flour is cooked. + - While stirring, slowly add the milk to the mixture. + - Let this cook for 3 - 5 minutes, until you reach your desired viscosity. +--- + +A thick, creamy sauce, usually as a base for another thick and savory sauce. diff --git a/content/recipes/condiments/applesauce.md b/content/recipes/condiments/applesauce.md new file mode 100644 index 0000000..2b0603e --- /dev/null +++ b/content/recipes/condiments/applesauce.md @@ -0,0 +1,38 @@ +--- +title: Applesauce +date: 2022-04-18 +tags: +- condiments +- sweet +preptime: 75 +cooktime: 10 +serves: 1 + +ingredients: +- label: Apple + amount: 5 +- label: Water + unit: liter + amount: 1 +- label: Cinnamon + +stages: +- label: Preparation + steps: + - Peel the apples. + - Cut the apples into small chunks. +- label: Cooking + steps: + - Put the apples in the pot. + - Add water to the pot, just enough to cover about half of the apple. + - Put the pot on the stove, and turn it on to high heat. 
+ - When the water starts bubbling, lower the heat to about a third, and cover + the pot with a lid kept slightly ajar. + - Let this cook for about 10 to 15 minutes. + - Once cooked, drain the water from the pot. + - Mash the apples to your preferred thickness. + - Add cinnamon to taste. + - Let cool completely in the fridge. +--- + +A sweet, thick sauce made of apple. diff --git a/content/recipes/condiments/mayonnaise.md b/content/recipes/condiments/mayonnaise.md new file mode 100644 index 0000000..26380d6 --- /dev/null +++ b/content/recipes/condiments/mayonnaise.md @@ -0,0 +1,40 @@ +--- +title: Mayonnaise +date: 2022-11-20 +preptime: 5 +cooktime: 10 +serves: 5 +tags: +- condiments +- vegetarian + +ingredients: +- label: Egg + amount: 3 +- label: Lemon Juice + amount: 10 + unit: grams +- label: Mustard + amount: 25 + unit: grams +- label: Olive Oil (Mild) + amount: 200 + unit: grams + +stages: +- label: Preparing + steps: + - Seperate the egg whites from the yolks + - Put the yolks into a tall container for easy mixing with a stick blender + - Add the mustard to the container +- label: Mixing + steps: + - Start mixing the ingredients in the tall container + - Slowly add in the oil, ensuring it all gets blended into a thick mass + - Continuously add in the oil until all is used + - Add in the lemon juice + - Mix for another minute or so to ensure the juice is incorporated properly +--- + +A simple sauce to go well with everything, though most popular with french +fries. 
diff --git a/content/recipes/condiments/salsa.md b/content/recipes/condiments/salsa.md new file mode 100644 index 0000000..01b81e2 --- /dev/null +++ b/content/recipes/condiments/salsa.md @@ -0,0 +1,59 @@ +--- +title: Sweet and Spicy Salsa +date: 2022-10-02 +draft: true +tags: +- snacks +- sweet +- spicy +preptime: 15 +cooktime: 0 +serves: 4 + +ingredients: +- label: Bell Pepper + amount: 50 + unit: grams +- label: Black Pepper +- label: Garlic + amount: 2 + unit: cloves +- label: Honey + amount: 1 + unit: tablespoon +- label: Jalapeno + amount: 25 + unit: grams +- label: Ketjap Manis + amount: 1 + unit: tablespoon +- label: Red Onion + amount: 50 + unit: grams +- label: Salt +- label: Spring Onion + amount: 25 + unit: grams +- label: Tomato + amount: 100 + unit: grams +- label: Worcestershire Sauce + amount: 1 + unit: tablespoon + +stages: +- label: Preparations + steps: + - Chop all choppable ingredients into small bits. + - Combine all ingredients in a bowl. + - Mix around until it combines into a salsa. +--- + +A sweet and spicy salsa, great for parties. + +<!--more--> + +It is inspired by the cooking video from You Suck At Cooking, which teaches [the +way of rgogsh](https://youtube.alt.tyil.nl/watch?v=HCNwSe3t8ek). This recipe has +been made over time to create my favourite salsa, but you can easily swap a few +ingredients around to get something that works for all sorts of parties. 
diff --git a/content/recipes/condiments/sauce-mushroom.md b/content/recipes/condiments/sauce-mushroom.md new file mode 100644 index 0000000..0865faf --- /dev/null +++ b/content/recipes/condiments/sauce-mushroom.md @@ -0,0 +1,65 @@ +--- +title: Mushroom Sauce +date: 2022-09-30 +tags: +- condiments +- savory +preptime: 5 +cooktime: 25 +serves: 1 + +ingredients: +- label: Butter + amount: 50 + unit: gram +- label: Cream + amount: 150 + unit: gram +- label: Garlic + amount: 2 + unit: clove +- label: Mushrooms + amount: 100 + unit: gram +- label: Mustard + amount: 1 + unit: teaspoon +- label: Onion + amount: 1 +- label: Thyme + amount: 1 + unit: teaspoon +- label: White Wine + amount: 0.05 + unit: liter +- label: Worcestershire Sauce + amount: 0.02 + unit: liter +- label: Pepper +- label: Salt + +stages: +- label: Preparation + steps: + - Clean and cut the mushrooms. + - Finely dice the onions. + - Finely dice the garlic. + - Chop the thyme. +- label: Cooking + steps: + - Melt the butter in a pan with a little bit of oil to prevent the butter from + burning. + - Add the onion and garlic and fry for about 1 minute. + - Add the mushrooms and fry until cooked, about 3 to 4 minutes. + - Add a pinch of salt + - Reduce the heat. + - Add the white wine to deglaze the pan. + - Add the cream to the pan. + - Add the mustard to the pan. + - Add the worcestershire sauce to the pan. + - Add the thyme to the pan. + - Add the pepper to the pan. + - Stir everything together, and let simmer for 10 to 15 minutes to thicken up. +--- + +A savory sauce to be served warm with your dish. Works very well for steaks. 
diff --git a/content/recipes/condiments/sriracha.md b/content/recipes/condiments/sriracha.md new file mode 100644 index 0000000..4db9370 --- /dev/null +++ b/content/recipes/condiments/sriracha.md @@ -0,0 +1,29 @@ +--- +title: Sriracha Sauce +date: 2022-04-18 +tags: +- condiments +- spicy +preptime: 10 +cooktime: 432000 +serves: 1 + +ingredients: +- label: Peppers +- label: Garlic +- label: Sugar +- label: Salt + +stages: +- label: Preperation + steps: + - Cut the peppers in chunks that will easily blend + - Weigh all the ingredients without the salt + - Take 3% of the total weight as salt, and add it to the rest of the ingredients + - Blend it all together until smooth +- label: Fermentation + steps: + - Let ferment for 5 - 30 days +--- + +A very spicy variant of the world's most popular hot sauce. diff --git a/content/recipes/dishes-hot/meatball-jordanese.md b/content/recipes/dishes-hot/meatball-jordanese.md new file mode 100644 index 0000000..f609e2e --- /dev/null +++ b/content/recipes/dishes-hot/meatball-jordanese.md @@ -0,0 +1,66 @@ +--- +title: Jordanese Meatball +date: 2022-04-18 +preptime: 20 +cooktime: 270 +serves: 1 +tags: +- meal +- hot + +ingredients: +- label: Minced meat + ratio: 1 + amount: 150 + unit: grams +- label: Egg + ratio: 0.05 + amount: 7.5 + unit: grams +- label: Garlic +- label: Onion +- label: Salt + ratio: 0.012 + amount: 18 + unit: grams +- label: Black Pepper + ratio: 0.004 + amount: 1 + unit: grams +- label: Nutmeg + ratio: 0.002 + amount: 0.5 + unit: grams +- label: Panko + ratio: 0.08 + amount: 12 + unit: grams +- label: Cayenne Pepper + ratio: 0.002 + amount: 0.5 + unit: grams + +stages: +- label: Pre-cook + steps: + - Sweat the garlic and onion + - Set the garlic and onion aside to cool off +- label: Shaping + steps: + - Toss all the other ingredients into a mixing bowl + - Mix together + - Add the cool onion and garlic + - Mix together + - Form the meat into balls + - Cover the meatballs in panko +- label: Cooking + steps: + - Fry 
the meatballs to ensure a crust forms outside of it (10 min) + - Fill a dutch oven with enough (meat) stock to almost fully submerge the + meatballs + - Let the stock thicken up slightly + - Put in the meatballs, making sure they are submerged. + - Cook the meatballs slowly until completely cooked through. +--- + +A slowly stewed meatball. diff --git a/content/recipes/dishes-hot/pancakes.md b/content/recipes/dishes-hot/pancakes.md new file mode 100644 index 0000000..1781092 --- /dev/null +++ b/content/recipes/dishes-hot/pancakes.md @@ -0,0 +1,46 @@ +--- +draft: true +title: Pancakes +date: 2022-04-18 +preptime: 5 +cooktime: 15 +serves: 3 +tags: +- meal +- hot + +ingredients: +- label: Egg + amount: 1 +- label: Flour + amount: 150 + unit: grams +- label: Milk + amount: 250 + unit: milliliters +- label: Bacon +- label: Cinnamon + +stages: +- label: Batter + steps: + - Put the eggs into the mixing bowl. + - Use the mixer to stir them into a homogenous liquid, for about 30 to 45 seconds. + - Add the milk, and mix together until homogenous again, about 5 to 10 seconds. + - Add the flour, and mix into a smooth batter, about 1 to 3 minutes. +- label: Cooking + steps: + - Get your pan as hot as you can. + - Add a little butter (or oil) to the pan, and ensure it covers the entire pan slightly. + - Put some strips of bacon in the pan. + - Immediately after, add 1 scoop of pancake batter. Start pouring over the + bacon, then swirl around the pan to spread the batter around the surface of + the pan. + - Let the pancake cook until the top side is solid. + - Flip the pancake. + - Let cook for another 30 to 60 seconds. + - Rinse and repeat until you're out of pancake batter. +--- + +Thin, Dutch pancakes with bacon. To be eaten with the gift of the Dutch Gods, +known as [Stroop](https://nl.wikipedia.org/wiki/Stroop). 
diff --git a/content/recipes/dishes-hot/rice.md b/content/recipes/dishes-hot/rice.md new file mode 100644 index 0000000..7912935 --- /dev/null +++ b/content/recipes/dishes-hot/rice.md @@ -0,0 +1,49 @@ +--- +title: Rice +date: 2024-05-15 +preptime: 5 +cooktime: 10 +serves: 2 +tags: +- halal +- hot + +ingredients: +- label: Rice + amount: 1 + unit: cup +- label: Star Anice + amount: 2 + unit: unit +- label: Cardamom + amount: 4 + unit: pods +- label: MSG + amount: 4 + unit: teaspoon +- label: Pepper + amount: 2 + unit: teaspoon + +stages: +- label: Preparations + steps: + - Bruise the cardamom pods so that they are slightly open, but the seeds are + not falling out of them. +- label: Steaming + steps: + - Add the rice, water, star anice, and cardamom to your steamer, and follow + its instructions on how to properly steam rice. + - Turn on the steamer and let it do its job. +- label: Fluffing + steps: + - Once the steamer is done, add the MSG and pepper into the pot. + - Use a spoon or fork to toss the rice around, mixing in the spices. +--- + +A delicious, steamed white rice, to go with any meal as desired. + +You might think rice is quite basic, and that there are not many variations to +it, but you would be wrong. Plain white rice is often considered quite boring, +but by steaming it together with herbs, spices, or stock, you can create quite +delicious forms of rice that are great on their own. 
diff --git a/content/recipes/dishes-hot/soup-mushroom-cream.md b/content/recipes/dishes-hot/soup-mushroom-cream.md new file mode 100644 index 0000000..5a0d43e --- /dev/null +++ b/content/recipes/dishes-hot/soup-mushroom-cream.md @@ -0,0 +1,68 @@ +--- +draft: true +title: Cream of Mushroom Soup +date: 2022-10-08 +preptime: 20 +cooktime: 60 +serves: 5 +tags: +- hot +- meal +- soup +- vegetarian + +ingredients: +- label: Butter + amount: 25 + unit: grams +- label: Cream (40%) + amount: 750 + unit: milliliter +- label: Mushroom + amount: 500 + unit: grams +- label: Parsley +- label: Parmesan + amount: 50 + unit: grams +- label: Vegetable Stock + amount: 100 + unit: milliliter +- label: Onion + amount: 200 + unit: grams +- label: Shallot + amount: 50 + unit: grams + +stages: +- label: Preparation + steps: + - Cut the onions in half circles. + - Cut the shallots into half circles. + - Cut the mushrooms into quarter slices. + - Grate the parmesan. + - Finely chop the parsley. +- label: Caramelizing the Onion + steps: + - Get your soup pot, and add the butter to it. + - Set your stove to medium-high. + - Let the butter melt completely. + - Add the onions to the pot. + - Add the shallots to the pot. + - Cook until the onions become soft, about 5 minutes. + - Turn your stove to low-medium heat. + - Continue cooking while stirring occasionally, until the onions become brown. +- label: Soup + steps: + - Deglaze the pot with the vegetable stock. + - Add the mushrooms. + - Cook until the mushrooms turn soft, about 10 minutes. + - Add the cream. + - Add the parmesan. + - Add the parsley. + - Stir everything together, and let cook for about 5 more minutes. + - Add salt and pepper to taste. +--- + +My own take of a cream-based mushroom soup. 
diff --git a/content/recipes/dishes-hot/soup-pea-halal.md b/content/recipes/dishes-hot/soup-pea-halal.md new file mode 100644 index 0000000..231b984 --- /dev/null +++ b/content/recipes/dishes-hot/soup-pea-halal.md @@ -0,0 +1,86 @@ +--- +title: Halal Pea Soup +date: 2022-10-16 +preptime: 30 +cooktime: 120 +serves: 8 +tags: +- hot +- meal +- soup +- halal + +ingredients: +- label: Vegetable Broth + amount: 3 + unit: liter +- label: Split peas (dried) + amount: 500 + unit: grams +- label: Beef ribs, including bone + amount: 400 + unit: grams +- label: Lamb strips, with fat + amount: 300 + unit: grams +- label: Smoked Sausage (chicken) + amount: 250 + unit: grams +- label: Carrot + amount: 350 + unit: grams +- label: Onion + amount: 200 + unit: grams +- label: Leek + amount: 150 + unit: grams +- label: Celery + amount: 100 + unit: grams +- label: Potato + amount: 250 + unit: grams +- label: Celeriac + amount: 300 + unit: grams +- label: Salt +- label: Black Pepper +- label: Parsley + +stages: +- label: Preparation + steps: + - Cut the smoked sausage into slices. + - Cut all the vegetables into small bits. + - Caramelize the onion. +- label: Base + steps: + - Pour the vegetable stock in the pot. + - Add the split peas. +- label: Meats + steps: + - Add the beef ribs. + - Add the lamb strips. + - Bring the entire mixture to a soft boil. + - Let boil for 45 minutes, stirring every 5 minutes. Skim off any scum that + floats to the top. + - Remove the meats from the pot. + - Debone the beef ribs. + - Cut the lamb strips into smaller bits. + - Put the meats back into the pot. +- label: Vegetables + steps: + - Add the all the vegetables. + - Add the parsley. + - Let boil for at least 1 hour, stirring every 5 to 10 minutes. The peas need + to be dissolved, and the soup should be a little thick. +- label: Finishing Touches + steps: + - Add salt and pepper to taste. +- label: Serving + steps: + - Let sit overnight, and serve the next day for optimal enjoyment. 
+--- + +A halal version of a famous Dutch winter meal, pea soup. diff --git a/content/recipes/dishes-hot/stew-dutch.md b/content/recipes/dishes-hot/stew-dutch.md new file mode 100644 index 0000000..9806492 --- /dev/null +++ b/content/recipes/dishes-hot/stew-dutch.md @@ -0,0 +1,97 @@ +--- +title: Dutch Stew +draft: true +date: 2022-05-22 +preptime: 30 +cooktime: 450 +serves: 4 +tags: +- Dutch +- beef +- hot +- meal +- meat + +ingredients: +- label: Beef + amount: 500 + unit: grams +- label: Sweet onion + amount: 2 (large) +- label: Garlic + amount: 3 + unit: teaspoons +- label: Leek + amount: 1 +- label: Beans + amount: 200 + unit: grams +- label: Carrot + amount: 300 + unit: grams +- label: Chick peas + amount: 200 + unit: grams +- label: Mushrooms + amount: 200 + unit: grams +- label: Oudewijvenkoek + amount: 3 + unit: slices + links: + - https://nl.wikipedia.org/wiki/Oudewijvenkoek +- label: Beef stock + amount: 750 + unit: milliliters +- label: Dark beer + amount: 330 + unit: milliliters +- label: Appelstroop + amount: 2 + unit: tablespoons + links: + - https://nl.wikipedia.org/wiki/Appelstroop +- label: Mustard + amount: 3 + unit: teaspoons +- label: Bay leaf + amount: 4 +- label: Pepper +- label: Salt +- label: Smoked paprika +- label: Thyme + +stages: +- label: Prep + steps: + - Cut the beef into bite-sized chunks + - Cut all the vegetables into chunks, about 1cm in diameter where applicable + - Slice off 3 slices of the oudewijvenkoek, about 1cm thick + - Spread the mustard unto one side of the oudewijvenkoek slices +- label: Searing + steps: + - Sear the beef on all sides + - Remove the beef from the pot +- label: Stewing + steps: + - Saute the onions in the pot + - Add in the garlic, cook for about half a minute + - Add the leek to the pot, and cook for a minute + - Add the carrot to the pot + - Add the beer and beef stock to the pot + - Add the bay leaves to the pot + - Add the appelstroop to the pot + - Stir everything together + - Add the mustard-laced 
oudewijvenkoek, with the mustard facing down in the + pot + - Let this stew for about 6 hours, occasionally checking in to make sure its + simmering slowly. If too much liquid evaporates, you can add more water or + beef stock, the solids should be completely submerged + - Add the spices (pepper, salt, smoked paprika, thyme) to reach your desired + flavour + - Add the potatoes, chickpeas, and beans to the pot + - Let it stew for another 90 - 120 minutes +--- + +A hearthy and veggie-heavy stew, best served when the weather outside is cold +and wet. diff --git a/content/recipes/dishes-side/salad-stewed-beaf.md b/content/recipes/dishes-side/salad-stewed-beaf.md new file mode 100644 index 0000000..c7e6d15 --- /dev/null +++ b/content/recipes/dishes-side/salad-stewed-beaf.md @@ -0,0 +1,133 @@ +--- +title: Stewed Beef Salad +date: 2022-11-20 +preptime: 30 +cooktime: 300 +serves: 14 +tags: +- Dutch +- cold +- meat +- beef +- salad + +ingredients: +- label: Stewed Beef + amount: 500 + unit: grams +- label: Mayonnaise + amount: 400 + unit: grams + links: + - /recipes/condiments/mayonnaise/ +- label: Pickle + amount: 150 + unit: grams +- label: Potato + amount: 250 + unit: grams +- label: Carrot + amount: 150 + unit: grams +- label: Red Onion + amount: 150 + unit: grams +- label: Spring Onion + amount: 150 + unit: grams +- label: Capers + amount: 150 + unit: grams +- label: Egg + amount: 7 +- label: Paprika +- label: Salt +- label: Pepper +- label: Garlic Powder + +stages: +- label: Stewing + notes: | + This is a very simple means of stewing beef. You can adapt this to your + preferred recipe for stewed beef and use it all the same. Since this is the + longest process, you can perform all other steps in the meantime. 
+ steps: + - Cut the beef into bite-sized cubes + - Sear the cubes of beef on all sides + - Put the seared beef in a pot + - Fill the pot with stock until all the beef is covered + - Add bay leaves to the pot + - Add apple syrup to the pot + - Add paprika to the pot + - Let the beef stew for about 4 hours +- label: Chopping + notes: | + All the ingredients should be chopped to around the same size, around 2 + millimeters big. The finer you chop, the smoother the eventual salad will + be. + steps: + - Chop the pickle + - Chop the red onion + - Chop the spring onion + - Chop the carrot + - Chop the potatoes +- label: Cooking + notes: | + The cooking process removes the raw taste, and makes the ingredients + slightly softer. Depending on how finely you chopped the ingredients, this + process only has to take 1 or 2 minutes per ingredient. + steps: + - Bring a pot of water to a boil + - Put in the chopped carrot + - Boil until _just_ ready + - Remove the carrot from the pot + - Rinse the carrot in cold water until the carrot is completely cooled off + - Repeat the cooking steps for the potatoes + - Boil the eggs for about 9 minutes +- label: Drying + notes: | + All the ingredients should be reasonably dry before mixing it all together, + or the salad will get watery and soggy. The method I use for drying all + these ingredients is to put them between sheets of paper towel, and press + down on it to expunge most of the moisture, then remove the paper towels. 
+ steps: + - Dry the pickles + - Dry the red onion + - Dry the spring onion + - Dry the carrots + - Dry the potatoes + - Dry the capers +- label: Combining + steps: + - Shred the stewed beef + - Grab a big bowl + - Add the pickles + - Add the red onion + - Add the spring onion + - Add the carrots + - Add the potatoes + - Add the capers + - Add the shredded beef + - Add the mayonnaise + - Mix together until combined into a cohesive salad + - Add salt, pepper, paprika, and garlic powder to taste +- label: Serving + notes: | + You can obviously serve it in any way you desire, but this is how I + traditionally encountered it. + steps: + - Cut the boiled eggs in half + - Place the salad on a plate + - Add a boiled egg on top, cut side up + - Garnish with leftover pickle and spring onion +--- + +A small, hearty salad. Served cold, usually as a side-dish, but also works great +as a little snack. + +<!--more--> + +If you use home-made mayonnaise, you can cook the egg whites in a scrambled +fasion, and add it to the salad as well. This won't affect the flavour too much, +but will make it a more filling snack, and you won't have to make meringue +_again_. diff --git a/content/recipes/dishes-side/stewed-pears.md b/content/recipes/dishes-side/stewed-pears.md new file mode 100644 index 0000000..9312feb --- /dev/null +++ b/content/recipes/dishes-side/stewed-pears.md @@ -0,0 +1,50 @@ +--- +title: Stewed Pears +date: 2022-11-20 +preptime: 10 +cooktime: 180 +serves: 2 +tags: +- cold +- fruit +- sweet +- vegetarian + +ingredients: +- label: Stewing Pears + amount: 400 + unit: grams +- label: Cinnamon + amount: 4 + unit: grams +- label: Light Caster Sugar + amount: 16 + unit: grams +- label: Strawberry Lemonade Syrup + amount: 20 + unit: grams +- label: Water + +stages: +- label: Preparation + steps: + - Peel the pears, and remove the cores. + - Cut the pears into quarters. +- label: Stewing + steps: + - Put the pears in a pot. 
+ - Fill the pot with water until it covers all the pears. + - Add the cinnamon. + - Add the sugar. + - Add the syrup. + - Stir until everything is combined. + - Let the pears stew until they are soft and have changed their color to a + bright pink. +--- + +A sweet dish, commonly served with gamey-meat or stewed meat. + +<!--more--> + +Originally made by my grandmother, this recipe is my attempt to get as close as +possible to this little treat. diff --git a/content/recipes/dishes-side/waffles.md b/content/recipes/dishes-side/waffles.md new file mode 100644 index 0000000..4a5165d --- /dev/null +++ b/content/recipes/dishes-side/waffles.md @@ -0,0 +1,56 @@ +--- +title: Waffles +date: 2024-04-15 +preptime: 10 +cooktime: 5 +serves: 2 +tags: +- warm +- sweet +- vegetarian + +ingredients: +- label: Flour + amount: 100 + unit: grams +- label: Granulated Sugar + amount: 30 + unit: grams +- label: Baking powder + amount: 10 + unit: grams +- label: Salt +- label: Milk + amount: 100 + unit: milliliters +- label: Egg + amount: 50 + unit: grams +- label: Vanilla Extract + amount: 1 + unit: teaspoon +- label: Butter + amount: 25 + unit: grams + +stages: +- label: Preparation + steps: + - Put the butter in the microwave until it is completely melted + - Mix all ingredients together in a mixing bowl until the batter is smooth +- label: Baking + steps: + - Ensure your waffle iron is on optimal temperature + - Apply the appropriate amount of batter to the waffle iron + - Let bake until lightbrown +--- + +Sweet and fluffy waffles. Great as a snack or as a side-dish to a larger meal. + +<!--more--> + +These waffles can be served with ice cream and/or fruit to create a simple but +delicious snack or dessert. The instructions for the baking itself are rather +simplistic, but it appears that waffle irons differ wildly in size and settings +even in my own country, let alone in other countries. For this reason, you may +need to experiment a little with your waffle iron of choice. 
diff --git a/content/recipes/snacks/buttercake.md b/content/recipes/snacks/buttercake.md new file mode 100644 index 0000000..0441fab --- /dev/null +++ b/content/recipes/snacks/buttercake.md @@ -0,0 +1,53 @@ +--- +title: Buttercake +date: 2022-04-18 +tags: +- snacks +- rich +- sweet +preptime: 15 +cooktime: 30 +serves: 8 + +ingredients: +- label: Egg + amount: 1 +- label: Butter + amount: 200 + unit: grams +- label: Flour + amount: 250 + unit: grams +- label: Granulated Sugar + amount: 200 + unit: grams +- label: Vanilla Sugar + amount: 8 + unit: grams + +stages: +- label: Preparations + steps: + - Ensure the butter is on room temperature + - Heat up the oven to 180℃ +- label: Batter + steps: + - Whip the butter until its nice and soft + - Add in the sugars, and mix until combined + - Add in an egg, and mix until combined + - Sift in the flour, and mix until combined +- label: Shaping + steps: + - Butter your cake tin + - Put the cake batter into the cake tin + - Smooth out the cake batter, using a wet spoon + - Using a fork, carve a diamond pattern on the cake batter + - Put your 2nd egg in a small cup, and whisk into a single cohesive substance + - Lightly coat the cake with egg +- label: Baking + steps: + - Bake in the oven for 25-30 minutes + - Let it cool to room temperature, preferably leaving it overnight until serving +--- + +A rich snack from the glorious Netherlands. 
diff --git a/content/recipes/snacks/cheesecake-basque-burned.md b/content/recipes/snacks/cheesecake-basque-burned.md new file mode 100644 index 0000000..e59b9b2 --- /dev/null +++ b/content/recipes/snacks/cheesecake-basque-burned.md @@ -0,0 +1,65 @@ +--- +title: Basque-burned Cheesecake +date: 2022-04-18 +tags: +- snacks +- sweet +preptime: 75 +cooktime: 10 +serves: 1 + +ingredients: +- label: Cream cheese + amount: 900 + unit: grams +- label: Egg + amount: 500 + unit: grams +- label: Flour + amount: 50 + unit: grams +- label: Granulated sugar + amount: 300 + unit: grams +- label: Heavy cream (35% fat) + amount: 500 + unit: grams +- label: Creme Fraiche + amount: 125 + unit: grams +- label: Salt + amount: 1 + unit: teaspoon +- label: Vanilla extract + amount: 1 + unit: tablespoon + +stages: +- label: Preparation + steps: + - Ensure all the ingredients are at room temperature. + - Turn on the oven to 477K +- label: Batter + steps: + - Put the cream cheese into the mixing bowl. + - Put the sugar into the mixing bowl. + - On a low speed, mix the cream cheese and sugar together into a single, soft mixture. + - Add in one egg, and mix until combined, repeat until all eggs have been mixed in. + - Add the vanilla extract and salt, and mix until combined. + - Sift in the flour, and mix until combined. + - Grease up the mixing bowl, to make it slightly sticky for the baking sheet. +- label: Shaping + steps: + - Put the baking sheet into the mixing bowl, it does *not* need to look pretty! + - Ensure the baking sheet sticks out of the cake tin, along the sides, as the + cheesecake itself will rise way above the cake tin's height. + - Pour the cake batter from the mixing bowl into the cake tin. +- label: Baking + steps: + - Bake the cheesecake for about 60 minutes. + - Let the cheesecake cool to room temperature. + - Refridgerate the cheesecake for at least 2 hours. +--- + +The easiest cheesecake you've ever made. 
Baked quickly and at a high temperature +without regard for looking pretty. diff --git a/content/recipes/snacks/kruidnoten.md b/content/recipes/snacks/kruidnoten.md new file mode 100644 index 0000000..3613b95 --- /dev/null +++ b/content/recipes/snacks/kruidnoten.md @@ -0,0 +1,50 @@ +--- +title: Kruidnoten +date: 2022-04-18 +tags: +- snacks +preptime: 60 +cooktime: 15 +serves: 2 + +ingredients: +- label: Baking Powder + amount: 3 + unit: grams +- label: Bastard Sugar (Dark) + amount: 60 + unit: grams +- label: Butter + amount: 70 + unit: grams +- label: Flour + amount: 100 + unit: grams +- label: Milk + amount: 25 + unit: grams +- label: Speculaas spices + amount: 6 + unit: grams +- label: Vanilla Extract + amount: 3 + unit: grams + +stages: +- label: Batter + steps: + - Put all ingredients in a mixing bowl + - Use a mixer with dough hooks to combine everything into a cohesive dough + - Take the dough out of the mixing bowl, and wrap it in plastic wrap + - Leave the wrapped dough in the fridge for 30-45 minutes to rest +- label: Baking + steps: + - Take the dough out of the fridge + - Turn on the oven to 448K + - Take small bits of dough, and roll them into balls + - Put the dough balls on a baking sheet + - Bake the dough balls for 15 minutes + - Take the kruidnoten out of the oven, and let cool on a wire rack for 15-30 minutes +--- + +A Dutch snack for Sinterklaas, but very tasty at any time of the year. diff --git a/content/services/_index.md b/content/services/_index.md new file mode 100644 index 0000000..f303c7a --- /dev/null +++ b/content/services/_index.md @@ -0,0 +1,12 @@ +--- +title: Services +--- + +These are all the services I run for public use. I give no guarantees about the +stability or longevity of any of these services.
+ +<ul> +{{ range .Pages }} +  <li><a href="{{ .Permalink }}">{{ .Title }}</a></li> +{{ end }} +</ul> diff --git a/content/services/fiche.md b/content/services/fiche.md new file mode 100644 index 0000000..62e0fe8 --- /dev/null +++ b/content/services/fiche.md @@ -0,0 +1,12 @@ +--- +title: Fiche +location: https://p.tyil.nl +upstream: https://github.com/solusipse/fiche +--- + +Fiche is a service to host pastes, which can be sent to it through various +command-line utilities. The easiest way to create a new paste is with `nc`. + +```sh +$command | nc tyil.nl 9999 +``` diff --git a/content/services/invidious.md b/content/services/invidious.md new file mode 100644 index 0000000..211879f --- /dev/null +++ b/content/services/invidious.md @@ -0,0 +1,8 @@ +--- +title: Invidious +location: https://youtube.alt.tyil.nl +upstream: https://github.com/iv-org/invidious +--- + +Invidious is an alternative front-end to YouTube. It greatly diminishes the +amount of JavaScript required to watch content. diff --git a/content/services/nitter.md b/content/services/nitter.md new file mode 100644 index 0000000..5bb111a --- /dev/null +++ b/content/services/nitter.md @@ -0,0 +1,8 @@ +--- +title: Nitter +location: https://twitter.alt.tyil.nl +upstream: https://github.com/zedeus/nitter +--- + +Nitter is an alternative front-end to Twitter, which uses no JavaScript at all +to render the posts and comments. It also supports RSS feeds for user profiles. diff --git a/content/services/omgur.md b/content/services/omgur.md new file mode 100644 index 0000000..68c73ee --- /dev/null +++ b/content/services/omgur.md @@ -0,0 +1,9 @@ +--- +title: Omgur +location: https://imgur.alt.tyil.nl +upstream: https://github.com/geraldwuhoo/omgur +--- + +Omgur is a JavaScript-free alternative front-end to Imgur. This project does +not include a "front page"; only pages that show actual uploaded content are +implemented.
diff --git a/content/services/searxng.md b/content/services/searxng.md new file mode 100644 index 0000000..abdcaa2 --- /dev/null +++ b/content/services/searxng.md @@ -0,0 +1,9 @@ +--- +title: SearXNG +location: https://searxng.tyil.nl +upstream: https://docs.searxng.org/ +--- + +SearXNG is a free internet metasearch engine which aggregates results from more +than 70 search services. Users are neither tracked nor profiled. It is a fork of +Searx. diff --git a/content/services/teddit.md b/content/services/teddit.md new file mode 100644 index 0000000..a8eff08 --- /dev/null +++ b/content/services/teddit.md @@ -0,0 +1,8 @@ +--- +title: Teddit +location: https://reddit.alt.tyil.nl +upstream: https://github.com/teddit-net/teddit +--- + +Teddit is an alternative front-end to Reddit, without the need for any +JavaScript to operate.
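The recipes above give their oven temperatures in kelvin (448K for the kruidnoten, 477K for the Basque-burned cheesecake), while most oven dials use Celsius. As a quick aside, the conversion is just a subtraction of 273.15; a minimal shell sketch (integer arithmetic, so the 0.15 is dropped):

```sh
# Convert a kelvin temperature to an integer Celsius oven setting.
# Shell arithmetic is integer-only, so we subtract 273 and drop the 0.15.
k_to_c() {
	echo "$(( $1 - 273 ))"
}

k_to_c 448   # kruidnoten oven: prints 175
k_to_c 477   # Basque-burned cheesecake oven: prints 204
```

So 448K is the familiar 175℃, and 477K is roughly 205℃, consistent with the 180℃ given directly in the buttercake recipe.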