As part of my public homelab setup, one of the services I configured was a mail server! I got started with it around a month ago and have been running a bare-minimum setup for a week. Here are some notes about the configuration process.
Finding the right infrastructure
Most providers and ISPs block traffic on port 25 because it is an easy target for spam. This makes sense: even one machine sending spam could land the entire network on a blocklist, unable to send email.
Some providers have an option to request that the port be unblocked. Vultr is one such provider I had heard of, so I started by hosting my website there and submitted an unblock request within a few days, which was eventually rejected.
That left me with two options:
- Switch to a provider that allows SMTP traffic
- Use an external service called an SMTP relay to handle sending, and self-host receiving email
The second option is the approach I saw recommended most often, as it delegates the responsibility of maintaining sender reputation to the relay while letting you store email on a server you control. I started with this option on the Vultr VPS, but eventually went with the first.
In hindsight, I could have easily stuck with the Vultr VPS, and this was an important lesson about firewalls. Firewalls have separate rules for incoming and outgoing traffic, and most providers only block port 25 for outgoing traffic. Incoming traffic on port 25 is allowed, so receiving mail works fine. I never tested this and assumed that port 25 was blocked in both directions, which led to the switch to my current provider.
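A quick check with `nc` would have shown the difference. A sketch of the two tests (the remote MX is Gmail's, and mx.example.com is a stand-in for your own server's hostname):

```shell
# Outbound: run this ON the VPS. If the provider blocks outgoing
# SMTP, the connection to a remote MX on port 25 times out.
nc -vz -w 5 gmail-smtp-in.l.google.com 25

# Inbound: run this from a machine OUTSIDE the provider's network,
# pointing at your own server. If it connects, receiving mail works.
nc -vz -w 5 mx.example.com 25
```

Both commands only test whether a TCP connection can be opened (`-z`); no mail is sent.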
An unintentional advantage of this switch is that I’m now responsible for handling both sending and receiving, which has been a better learning experience. Whether this remains an advantage or not - time will tell.
Finding the right learning material
There isn’t a shortage of tutorials out there, but none of them went into the “why” behind each step. While researching one of these aspects, I came across a series of posts titled Setting up Postfix and Dovecot Slowly and Properly by brokker.net, which was exactly what I needed! It takes you through the setup in small increments, and each step ends with a working setup you can test, which I really liked. This series gave me the foundation I needed, and from there I could decide which aspects I wanted to implement.
Another setup guide I found useful was Install and Configure a Secure Mail Server on Debian with Postfix and Dovecot by std.rocks. It’s a newer, more concise guide, and especially useful after having gone through the series. In particular, I followed its instructions for setting up OpenDKIM, SPF and DMARC much earlier than the series suggests, as these seem to have gone from “nice to have” to “absolutely required” for sending to work, especially when sending to Gmail addresses.
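For reference, the DNS side of SPF and DMARC is just two TXT records. A minimal sketch using my domain, with placeholder policy values (yours will likely differ):

```
; SPF: only the domain's MX hosts may send mail for it;
; everything else hard-fails.
orgnizedmess.net.         TXT  "v=spf1 mx -all"

; DMARC: ask receivers to quarantine failing mail and send
; aggregate reports to the given address.
_dmarc.orgnizedmess.net.  TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@orgnizedmess.net"
```

DKIM needs a third TXT record, but its value comes from the generated key pair, so it is covered separately below.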
Configuring a test server
I already had this domain hosted elsewhere and had been sending and receiving mail at it. I initially switched the primary mail server to the test server by changing the DNS record. However, the setup was taking a while, as I was taking time to understand and test the various aspects. That meant I might miss emails sent during that time, so I needed a separate test setup.
The test setup that worked was a secondary mail server - mx.orgnizedmess.net - that receives emails at a subdomain instead of the root domain. I would send test emails to user@mx.orgnizedmess.net instead of user@orgnizedmess.net.
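As a sketch, the DNS records for the test setup look like this in zone-file form (the IP address is a placeholder):

```
; A record so the test server is reachable at the subdomain.
mx.orgnizedmess.net.    IN  A    203.0.113.10

; MX record so mail addressed to the subdomain is delivered
; to the test server itself.
mx.orgnizedmess.net.    IN  MX   10  mx.orgnizedmess.net.
```

Once testing is done, the root domain's MX record can be pointed at the same host.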
This was not as straightforward as I thought, as parts of the configuration vary for subdomains in ways that weren’t obvious to me:
- TLS certificates: I issued a certificate with the test subdomain included during testing, and re-issued it without the subdomain once the setup was complete. I had been using Caddy to generate TLS certificates for my website, so I switched to a dedicated client (I’m using dehydrated) to be able to use the same certificates for both the website and the mail server.
- DKIM: Similar to TLS, I had to generate the key pair for the subdomain during testing and then generate it for the root domain when I was ready to switch servers. The DNS record also had to be set at the subdomain during testing - selector._domainkey.mx.orgnizedmess.net instead of selector._domainkey.orgnizedmess.net.
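With OpenDKIM, the two generation steps look roughly like this (the selector name and output directory are examples):

```shell
# During testing: generate a key pair for the test subdomain.
opendkim-genkey -s selector -d mx.orgnizedmess.net -D /etc/opendkim/keys/

# When switching over: generate one for the root domain.
opendkim-genkey -s selector -d orgnizedmess.net -D /etc/opendkim/keys/
```

Each run writes `selector.private` (the signing key OpenDKIM uses) and `selector.txt`, which contains the TXT record to publish at `selector._domainkey.<domain>`.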
Where the lines between different services get blurred
There are two main servers - SMTP (Postfix) and IMAP (Dovecot). Postfix handles sending and receiving email (it’s the MTA or Mail Transfer Agent), and Dovecot handles mailboxes for different users and lets me access mail from a mail client (the MUA or Mail User Agent). Anything beyond this basic setup is where the services started to rely on each other, and also where I got confused the most.
- Postfix relies on Dovecot for authentication using SASL, the Simple Authentication and Security Layer - a long name for something that’s meant to be simple.
- Postfix runs a submission service on port 587 (the MSA or Mail Submission Agent) that accepts email from a MUA before handing it to the MTA. Connections on this port start as plaintext and then upgrade to TLS if supported (Opportunistic TLS). There is another service called SMTPS on port 465, which starts with the TLS handshake directly (Implicit TLS). As per Section 3.3 of RFC 8314, it is recommended to run both services and eventually transition to SMTPS.
- By default, Postfix delivers received email to the user’s mailbox itself (acting as the MDA or Mail Delivery Agent). For additional processing, like filtering emails before delivery, Postfix can be configured to use Dovecot as the MDA instead. The protocol used between them is LMTP, the Local Mail Transfer Protocol.
Wow that’s a lot of acronyms.
I haven’t been able to make progress on my homelab. The setup was all local and I was mostly working on my own, so after a point it wasn’t as fun to work on. I also started overthinking parts of the setup, which is ironic, since the whole idea of a lab is to experiment. Then I started seeing discussions around self-hosting email, and that got me thinking about how I could reframe the project.
My current public infrastructure is a combination of three services - a web server, DNS and email - which overlapped with ideas I had for the homelab. Being able to manage these services myself sounded like a fun challenge - and suddenly the project became exciting to work on again!
I was sure that I wanted to host on a Virtual Private Server or VPS - I wasn’t ready to host a machine at home just yet. A VPS lets me configure a Linux machine the way I would at home, but without me having to deal with public Internet traffic at home. The setup is also not tied to any specific platform, so I can move to a Linux machine on another provider or switch to a home-based setup whenever required.
I have hosted my website on a small VPS before and know that it can handle the current level of traffic, so I decided to stick with the same specs and see if it could handle DNS and email too. After some trial and error with providers, I settled on the smallest VPS offering from Mythic Beasts - 1 CPU, 1GB RAM and a 5GB SSD. It currently hosts this website, a mail server and an authoritative DNS server for this domain, and it seems to be running fine so far.
The original homelab setup is still useful as it has more resources than the public machine does. This makes it useful for running multiple virtual machines to simulate networks, or for ideas that require me to interact with lower levels of the network stack or the hardware.
I have been visiting some websites out of habit that don’t make me feel great, so I started looking for website blocker apps. In the process of doing so, an easier solution came to mind - editing the /etc/hosts file.
I opened /etc/hosts and set the IP address for the domain I want to block (LinkedIn in this case) to 0.0.0.0. That address isn’t a valid destination, so the browser fails to connect after the domain is resolved.
0.0.0.0 www.linkedin.com linkedin.com
I can append other domains to the above line after the IP address, or add another line with the same address and another domain if this line starts to get too long.
I flushed the DNS cache on my machine, and after a while LinkedIn no longer loaded. Mission accomplished!
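The flush command depends on the OS; these are the variants I know of (run whichever matches your system):

```shell
# Linux with systemd-resolved
sudo resolvectl flush-caches

# macOS
sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder
```

Browsers also keep their own DNS cache, so restarting the browser may be needed before the block takes effect.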
This site has moved from pjg1.site to orgnizedmess.net!
I’ve been thinking of making this move for at least a year, and a recent price hike for the previous domain ($26.24 -> $28.84) finally made me switch. The new name comes from a username I’ve used and liked before, and the domain costs nearly half the price ($12.52) - a win-win!
I have set up a permanent redirect from the old domain to the new one. I won’t be renewing the old domain any further, so the redirects will work till July. Those subscribed to the site via RSS can re-subscribe using the new URL by then to continue receiving posts.
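Assuming the old site is still served with Caddy (which I’d used for the website earlier), the permanent redirect can be sketched as a Caddyfile entry like this:

```
pjg1.site {
	redir https://orgnizedmess.net{uri} permanent
}
```

The {uri} placeholder carries the original path and query string over, so deep links and RSS URLs redirect to their new counterparts rather than the homepage.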
I’ve tried quite a few systems and tools to manage my projects and ideas, and switching gets tiring. I don’t think I’ll stop switching anytime soon though, so here are some guidelines to help me choose which tools to use whenever I feel the urge to try something new.
Stick to plaintext - preferably Markdown
Plaintext files are easy to write, maintain, and convert to other formats when needed. When I’m looking at a new tool to try, my main criterion is whether it supports plaintext. If I really feel the urge to try a tool backed by cloud storage and/or a database, I eventually export the notes back to plaintext.
After having tried custom formats, Markdown and org mode, I keep coming back to Markdown as my format of choice. It’s well supported by most editors and I’m used to the syntax, so it makes sense to stick to it.
It’s boring, but it works.
Keep short-term context extremely brief when overwhelmed
Combining notes and actionable items is tempting, since the result acts as a single source of truth. However, seeing everything together (even when structured) gets overwhelming for me on some days, so I would like the option of a focused view of only what I have to work on in that moment.
This could look like creating a file or post-it note for each day or each work session, with only relevant tasks and information in that document. I could then process the document when I feel less overwhelmed.
Have multiple places for capturing notes
I like to write things in different places depending on my mood (see the point above) and what I have access to in that moment - a laptop, a notebook, sticky notes or my phone. Given that, having multiple capture systems is essential, without worrying about their format or organization.
One place would be ideal, but in a system that combines digital and analog tools, having multiple places is realistic, reduces friction to writing and scratches the novelty itch.
Add useful context to a captured note
I’ve saved links to read or watch, or written down an idea, and then forgotten why I captured them when revisiting weeks or months later. Adding the reason I saved something makes a note easier to process, especially if I come back to it long after I wrote it. It can bring back the spark in the idea, or I can delete the note if it no longer interests me.
Have a strict review process
Capture systems have to be supported by a strong review process, otherwise finding things becomes a struggle - something that happens to me often. I have made time for reviewing in the past, but I either get overwhelmed or leave notes thinking they’ll be useful sometime in the future. That sometime rarely comes, and I’m left with a pile of semi-processed and unprocessed notes.
Create strong indications of which notes have been processed and which haven’t. This could look like adding a symbol next to a note or deleting the note altogether. If I cannot decide what to do with a note, revisit it a maximum of three times. If I still don’t know what to do with it, or it has lost its relevance by then, delete it.
Schedule regular reviews - weekly, and whenever I reach a checkpoint in my projects. Despite my best efforts, there will be times when regular reviews don’t happen and I end up with a huge backlog. In those cases, it’s best to leave the backlog as is, rely on search and filtering to find the relevant information, and process the rest when I have the time and energy.
I would also like to extend this process to long-form notes. Rather than letting them collect in a folder, I would like to consider publishing more of them to this site.