We then also need to ensure that the directory for storing the Podman Quadlet files exists:
mkdir -p ~/.config/containers/systemd
Static sites
If you want to serve a static site from within the Caddy container, also create a directory for each one, following this pattern:
mkdir -p ~/containers/caddy/sites/{site-name(s)}
Replace
site-name(s): a comma-separated list of directory names for the sites you intend to serve
If you don’t, consider creating the parent directory ~/containers/caddy/sites anyway, so you don’t have to touch the generated Quadlet file later.
A site only becomes active once you create a corresponding entry in your Caddyfile anyway.
mkdir -p ~/containers/caddy/sites
Ports
As we’re running Podman rootless, ports 80 (HTTP) and 443 (HTTPS) won’t be available, since unprivileged processes cannot bind ports below 1024 by default.
There are numerous ways to resolve this.
However, since I intend to use this Caddy container to exclusively handle all incoming (web) traffic, I simply decided to forward ports 80 and 443 to ports my non-privileged user can get a hold of, namely 1880 and 1443.
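On Fedora, this forwarding can be set up with firewalld. The following is a sketch, assuming firewalld manages your firewall and the interface sits in the default public zone:

```shell
# Redirect the privileged ports to the ports published by the rootless container
sudo firewall-cmd --permanent --zone=public --add-forward-port=port=80:proto=tcp:toport=1880
sudo firewall-cmd --permanent --zone=public --add-forward-port=port=443:proto=tcp:toport=1443
sudo firewall-cmd --reload
```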
I mostly utilize this central Caddy instance for reverse proxying.
For example, I might have a second container running a web server on port 5000.
To serve it under the subdomain service.dustvoice.de, I would simply populate the Caddyfile under ~/containers/caddy/config/Caddyfile with
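A minimal entry could look like the following sketch; it assumes the backend’s port 5000 is reachable as localhost:5000 from within the Caddy container:

```caddyfile
service.dustvoice.de {
	reverse_proxy localhost:5000
}
```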
I often use environment variables to specify, for example, the domains of my sub-sites (the sites this frontend Caddy instance proxies to).
For this, I specify an EnvironmentFile in the Quadlet file.
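In the Quadlet file this is a single line in the [Container] section (the file path here is an assumption; %h expands to the user’s home directory in systemd units):

```ini
[Container]
EnvironmentFile=%h/containers/caddy/config/caddy.env
```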
You can then specify environment variables within this file using the NAME=val pattern,
for example:
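A hypothetical caddy.env could look like this (the variable names are just examples):

```shell
SERVICE_DOMAIN=service.dustvoice.de
TEST_DOMAIN=test.dustvoice.de
```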
If you don’t want to use said environment file, you simply need to replace all occurrences of a variable ({$VAR_NAME}) within the Caddyfile with the appropriate value.
Logging
I usually insert a subdomain-log macro at the top of my Caddyfiles, to quickly enable logging within a subdomain section
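Such a macro could be sketched as follows (Caddy calls these reusable blocks snippets; the log path and snippet name are assumptions, and {args[0]} is the snippet-argument placeholder in recent Caddy versions, while older versions use {args.0}):

```caddyfile
(subdomain-log) {
	log {
		output file /var/log/caddy/{args[0]}.log
	}
}
```

Inside a subdomain’s site block, it would then be activated with, e.g., import subdomain-log service.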
-v ~/containers/caddy/sites:/srv:ro,z (Remove this if you would rather not serve static sites from this Caddy instance.)
EnvironmentFile
A file specifying environment variables available within the container following a VAR_NAME=VALUE pattern.
I often use this for specifying variables used in the Caddyfile using the {$VAR_NAME} syntax. Also see Environment variables
What are the :z and :Z labels?
These two labels are specific to SELinux, which is enabled by default on Fedora.
Although some people might see it as an inconvenience, you shouldn’t simply disable it, especially on a server, as it greatly hardens your system.
I found a good explanation that elaborates on these two options in this blog post.
Boot it up
Reload
Reload the daemon
As Quadlet files are turned into systemd service files by a generator, you need to reload the user daemon.
systemctl --user daemon-reload
This generates appropriate .service files.
Tip
Sometimes, this can fail and not generate a .service file.
To debug this, immediately drop into the user journal to see any error messages.
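A sketch of the commands I’d reach for (the generator path given here is the usual one on Fedora):

```shell
# Jump to the end of the user journal to look for error messages
journalctl --user -e
# Or run the Quadlet generator directly to surface syntax errors
/usr/lib/systemd/system-generators/podman-system-generator --user --dryrun
```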
Sometimes, the non-service-specific journal can be helpful in debugging a problem.
In that case, simply restart the service and immediately drop into the journal:
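Assuming the Quadlet file is named caddy.container (so the generated unit is caddy.service), that would be:

```shell
systemctl --user restart caddy.service
journalctl --user -eu caddy.service
```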
You can easily create a crude index.html file in, for example, ~/containers/caddy/sites/test
~/containers/caddy/sites/test/index.html
<html>
  <head></head>
  <body>
    <p>Hello from Caddy!</p>
  </body>
</html>
and add a corresponding entry to your Caddyfile
~/containers/caddy/config/Caddyfile
test.dustvoice.de {
	root * /srv/test
	file_server
}
The site should now be accessible through the domain you specified and greet you with “Hello from Caddy!”.
Consider removing it
For security purposes, I would probably remove the ~/containers/caddy/sites/test folder after testing.
Removing the corresponding lines from your Caddyfile might be sufficient, but you most likely won’t need to test this way again anytime soon, so why clutter your system?
Harden
Don’t terminate TLS
In the current scenario, the frontend Caddy terminates TLS and proxies plain HTTP to the backend Caddy instance(s).
This should be fine if you trust your server’s local network, or at least the machine itself, provided you have made reasonable efforts to secure it (firewall rules, etc.).
An alternative would be to also use TLS encryption between the frontend and backend instances.
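As a sketch, the frontend’s site block could then point reverse_proxy at an HTTPS upstream (hostname and port are assumptions, and the backend needs to present a certificate the frontend trusts):

```caddyfile
service.dustvoice.de {
	reverse_proxy https://backend.internal:8443 {
		transport http {
			tls_server_name backend.internal
		}
	}
}
```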