
Spinach Garden by lichenaut
Hey again, and welcome to my website walk-through :)
Tech Stack
I'll begin by justifying the tech stack I chose:
• Django: Django is a "batteries-included" Python framework with a large community, great security features, and a built-in administration panel. I knew that my backend logic would be pretty simple, which doesn't really lend itself to Django's comprehensive feature list, but I was already familiar with the framework, so I could pick and choose which pieces I wanted to use. Simple logic also meant that I didn't need a high level of customizability and could rely on Django's high-level nature. Finally, Python, and by extension Django, are not the most performant options, but I didn't need to worry much about this for two reasons: one, there would likely be larger bottlenecks in speed that would render Django's performance basically negligible, such as download and upload speeds, and two, there isn't much scaling in this use case. Of course, Django can be scaled when purposefully designed for it.
• uWSGI: The "Web Server Gateway Interface" (WSGI) is a specification for communication between web servers and Python applications. This bridge means a Python application doesn't have to worry about the context in which it's being run. It's nice to have because it standardizes web-to-Python communication, improves performance and scalability, and supports various more advanced features. I chose uWSGI, a WSGI server implementation, because it's open source.
• Nuxt: Nuxt is a feature-rich JavaScript meta-framework built on top of Vue.js. It has a great developer experience, Vue is relatively popular, and it is applicable for many use cases.
• Tailwind CSS: Tailwind CSS is a collection of utility CSS classes. I find that Tailwind strikes a good balance between standardizing styling and avoiding a "cookie-cutter" approach.
• Caddy: Caddy is a feature-rich, easy-to-configure web server. Its well-abstracted configuration made it a great fit for my simple use case, but it's not limited to simple scenarios.
• Podman: Podman is a containerization tool that is more lightweight and secure than Docker. Unlike Docker, which relies on a root-privileged daemon to manage containers, Podman is daemonless, and its containers can be managed through systemd services.
• Hetzner: Hetzner is a German hosting company. I find their servers to be cost-effective, and I appreciate that they are not based in the United States. Currently, my personal circumstances are not ideal for self-hosting, which requires stability and an investment of time and money.
• Cloudflare: Cloudflare is a versatile platform that offers a wide range of services, but is perhaps best known for its content delivery capabilities. In addition to these services, Cloudflare also provides free DNS and SSL/TLS certificates. I also bought my domain through Cloudflare.
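To make the WSGI bridge mentioned above concrete, here is a minimal sketch of the contract that servers like uWSGI implement: the server calls a Python callable with a dict of request variables and a `start_response` callback, and the callable returns the response body. The names `app` and `call_app` are my own for illustration, not part of this project.

```python
def app(environ, start_response):
    # environ is a dict of CGI-style request variables supplied by the server
    path = environ.get("PATH_INFO", "/")
    body = f"Hello from {path}".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]


def call_app(app, path):
    """Exercise the callable the way a WSGI server would."""
    captured = {}

    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers

    body = b"".join(app({"PATH_INFO": path}, start_response))
    return captured["status"], body


status, body = call_app(app, "/api/visit_tracker/")
```

Django generates a WSGI callable like this for you; uWSGI's job is to sit on the server side of this handshake.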
Project Structure
The project structure looks like this:
.
├── backend
├── caddy
├── deploy_destructive.sh
├── frontend
├── LICENSE.txt
├── package.json
├── README.md
├── update_destructive.sh
└── venv
We can see that I organize Django and uWSGI into the 'backend' directory, Caddy into the 'caddy' directory, and Nuxt into the 'frontend' directory. Additionally, I have two destructive scripts at the root of the project.
The 'update_destructive.sh' script is used for updating and formatting, and should only be run in development contexts.
#!/bin/bash
# Update pip and its packages
source venv/bin/activate
pip install --upgrade pip
pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs pip install --upgrade
cd backend
pip freeze > requirements.txt
# Migration
rm -rf db.sqlite3
python3 manage.py makemigrations && python3 manage.py migrate
# Format
cd ..
black $(pwd)
deactivate
# Update node packages
(cd frontend && npm install && npm audit fix)
The 'deploy_destructive.sh' script is used for Podman deployment:
...
if [ -z "$(podman pod ls | grep lichenaut-website)" ]; then
    podman pod create --name lichenaut-website -p 80:80 -p 443:443
fi
if [ -z "$(podman volume ls | grep lw-uwsgi-data)" ]; then
    podman volume create lw-uwsgi-data
fi
...
podman build -t lw-uwsgi ./backend
podman run -d --name lw-uwsgi --pod lichenaut-website -v lw-uwsgi-data:/usr/src/app/db lw-uwsgi
...
podman build -t lw-nuxt ./frontend
podman build -t lw-caddy ./caddy
podman run -d --name lw-nuxt --pod lichenaut-website lw-nuxt
podman run -d --name lw-caddy --pod lichenaut-website --env-file ./caddy/.env lw-caddy
Django
Django follows a Model-View-Template (MVT) pattern, where the model defines the data structure and behavior, the view handles HTTP requests and responses, and the template generates the HTML output. However, since I used a JavaScript meta-framework in conjunction with Django, the template aspect of MVT wasn't directly relevant to this project.
Django's 'settings.py' file allows me to import apps or create my own, with each of my own apps having its own directory within the 'backend' directory. For this project, I created an app called 'api', which includes files such as 'middleware.py', 'models.py', 'urls.py', and 'views.py'. The main Django directory within 'backend' is also named 'backend' and contains key files like 'settings.py' and 'urls.py', among others.
To gain a better understanding of how Django operates, let's walk through an example request-handling process. The process begins with the URL of the incoming request being matched against the URL patterns defined in Django's 'urls.py' file. In my case, the 'urls.py' file contains the following URL configurations:
urlpatterns = [
    # path("admin/", admin.site.urls),
    path("api/", include("api.urls")),
]
I commented out the path to the administration panel, as it is not required for my application. I have routed all other requests to my 'api' app, since all of my URLs will be prefixed with "api/" as a personal design choice.
The request, being routed to my 'api' app, is then matched against the URL patterns defined in my app's 'urls.py' file:
urlpatterns = [
    path("guestbook/", GuestbookView.as_view(), name="guestbook"),
    path("visit_tracker/", VisitTrackerView.as_view(), name="unique_visit_count"),
]
As we can see, both URLs ultimately lead to views defined in my app's 'views.py' file. However, before these views can execute, any defined middleware is executed. My app's 'middleware.py' file contains the following code:
class UniqueVisitMiddleware(MiddlewareMixin):
    def process_request(self, request):
        VisitTracker.update_visit()
As we can see, this middleware class simply calls a method defined in my app's 'models.py' file, which increments the website visit count. We'll delve into the details of this method later.
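One detail worth noting: for `process_request` to fire at all, the middleware class must be registered in the `MIDDLEWARE` list in 'settings.py'. A sketch of what that registration looks like, assuming the 'api.middleware' module path described above (the surrounding default middleware entries will vary by project):

```python
# settings.py (fragment) -- which of Django's default middleware appear
# here depends on the project; the custom entry is what matters.
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.middleware.common.CommonMiddleware",
    # Custom middleware: updates the visit counter on every request
    "api.middleware.UniqueVisitMiddleware",
]
```

Django applies middleware in list order on the way in, and in reverse order on the way out.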
Let's revisit the URL matching. If the request path is "api/visit_tracker/", it matches my app's "visit_tracker/" URL pattern. The associated "VisitTrackerView" view then processes the request, which is defined as follows:
class VisitTrackerView(APIView):
    def get(self, request):
        try:
            return Response({"unique_visit_count": VisitTracker.get_count()})
        except Exception as e:
            return Response(
                {"error": str(e)}, status=status.HTTP_500_INTERNAL_SERVER_ERROR
            )
As we can see, this view class calls a method defined in my app's 'models.py' file, which returns the visit count value incremented by the middleware, along with some error handling.
At this point, we have encountered two calls to the "VisitTracker" class in my app's 'models.py' file. The class is defined as follows:
class VisitTracker(models.Model):
    timestamp = models.DateTimeField(auto_now=False, null=True, blank=True)
    count = models.IntegerField(default=0)

    @classmethod
    def get_count(cls):
        """Get the current visit count."""
        return cls.objects.filter(id=1).values_list("count", flat=True).first() or 0

    @classmethod
    def update_visit(cls):
        """Update visit count if 10 minutes have passed since last update."""
        time_threshold = timezone.now() - timedelta(minutes=10)
        visit_tracker, created = cls.objects.get_or_create(id=1, defaults={"count": 1})
        if not created and visit_tracker.timestamp < time_threshold:
            visit_tracker.count += 1
            visit_tracker.timestamp = timezone.now()
        elif created:
            visit_tracker.timestamp = timezone.now()
        visit_tracker.save()
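The throttling idea in `update_visit` can be isolated from the ORM: the count only increases if at least 10 minutes have passed since the last recorded visit. Here's a framework-free sketch of that logic; the names (`ThrottledCounter`, the injected `now` parameter) are my own for illustration and testability, not part of the Django model above.

```python
from datetime import datetime, timedelta


class ThrottledCounter:
    def __init__(self, window=timedelta(minutes=10)):
        self.window = window
        self.count = 0
        self.timestamp = None

    def update_visit(self, now):
        if self.timestamp is None:
            # First visit ever: start the counter
            self.count = 1
            self.timestamp = now
        elif self.timestamp < now - self.window:
            # Window elapsed: record another visit
            self.count += 1
            self.timestamp = now
        # Otherwise: still inside the window, so the visit is ignored


c = ThrottledCounter()
t0 = datetime(2025, 1, 1, 12, 0)
c.update_visit(t0)                          # first visit
c.update_visit(t0 + timedelta(minutes=5))   # inside window, ignored
c.update_visit(t0 + timedelta(minutes=11))  # window elapsed, counted
```

Injecting the clock instead of calling `timezone.now()` directly is what makes the behavior easy to assert on.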
This model defines the structure of the data in the database by specifying the "timestamp" and "count" fields, as well as the two methods we encountered earlier.
To summarize the request-handling process: middleware runs as the request comes in, the request URL is matched to a view, and the view then executes its logic, which in this case involves calling a model class method. Models are responsible for structuring and interacting with stored data, and can query for information that is passed back up the call stack to the frontend that requested it.
Obviously, there's a lot more you can do with Django, but I hope this example has helped you understand the high-level flow of the framework.
uWSGI
uWSGI, pronounced "you whiskey", is mainly configured by editing the 'uwsgi.ini' file. Here's my configuration:
[uwsgi]
http-socket = :8000
module = backend.wsgi
master = true
processes = 2
vacuum = true
The module setting is, in part, what links uWSGI to Django. Of course, there are many more options that can be set, and these will vary depending on the context.
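For context, the `backend.wsgi` module that the `module` setting points at is, in a default Django project, the boilerplate generated by `django-admin startproject`. I'm assuming the default was not modified here; it exposes the `application` callable that uWSGI imports and serves.

```python
# backend/wsgi.py -- standard Django boilerplate exposing the WSGI
# callable (`application`) that uWSGI's `module` setting imports.
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "backend.settings")

application = get_wsgi_application()
```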
Backend Dockerfile
Here is the 'backend' directory's 'Dockerfile':
FROM python:3.13-alpine
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /usr/src/app
COPY . .
RUN apk update && apk add python3-dev build-base linux-headers pcre-dev sqlite
RUN pip install --upgrade pip && pip install --no-cache-dir -r requirements.txt
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
And its 'entrypoint.sh' script:
#!/bin/sh
set -e
python manage.py makemigrations
python manage.py makemigrations api
python manage.py migrate
python manage.py collectstatic --noinput
uwsgi --ini uwsgi.ini
As we can see, this Dockerfile is based on an Alpine image that comes with Python pre-installed. It copies the directory's contents into the image, installs the required APK and pip packages, and then runs the 'entrypoint.sh' script. The script prepares Django for deployment and initiates the uWSGI server.
One potential improvement for this Dockerfile is using a builder stage to minimize the image size, although I found that approach to be error-prone when I first wrote the file. Since a lot has changed since then, I may revisit it in the future. Additionally, I installed the SQLite package only so that I could execute SQL commands by hand, such as deleting a comment in my guestbook; it is not necessary for the container to work correctly.
Nuxt
Nuxt is a versatile tool with many capabilities, but I'll focus on how I organized my code. The contents of my 'frontend' directory are as follows:
.
├── app.vue
├── assets
├── components
├── composables
├── Dockerfile
├── error.vue
├── node_modules
├── nuxt.config.ts
├── nuxt.d.ts
├── package.json
├── package-lock.json
├── pages
├── public
├── server
└── tsconfig.json
At this level, the three key files are 'app.vue', 'error.vue', and 'nuxt.config.ts'. The 'app.vue' file serves as the main component and entry point of the project, taking precedence over the 'pages/index.vue' page. Custom error handling is managed through the 'error.vue' component, which is the only other entry point. As a result, all pages are routed through one of these two components. The 'nuxt.config.ts' file, on the other hand, is where project-wide settings can be configured. Here's an example of mine:
import tailwindcss from "@tailwindcss/vite";
export default defineNuxtConfig({
compatibilityDate: "2025-05-23",
devtools: { enabled: true },
telemetry: false,
plugins: [],
modules: ["@nuxt/image", "@nuxtjs/sitemap"],
css: ["assets/css/globals.css"],
image: {
provider: "ipx",
},
components: [
"~/components",
"~/components/bar",
"~/components/blog",
"~/components/desktop",
"~/components/efilism",
"~/components/error",
"~/components/liosapp",
"~/components/music",
"~/components/page",
"~/components/visual",
],
vite: {
plugins: [tailwindcss()],
},
});
Finally, let's take a brief look at some of the directories. The 'assets' directory is where I import Tailwind CSS and write custom styles. The 'components' directory contains reusable pieces of HTML, TypeScript, and CSS that can be used throughout the project to promote modularity. The 'composables' directory holds TypeScript files that can be thought of as script-only components. The file structure within the 'pages' directory defines the page structure of the website. Finally, the 'public' directory stores all the media files referenced by my HTML, which are served to the user.
Frontend Dockerfile
Here is the 'frontend' directory's 'Dockerfile':
FROM node:20.19.2-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install -g npm@latest
RUN npm install
RUN npm audit fix
COPY . .
RUN npm run build
CMD ["node", "/usr/src/app/.output/server/index.mjs"]
As we can see, this Dockerfile is based on an Alpine image that has Node pre-installed. It copies the directory's contents into the image, updates npm, installs the required packages, builds the application, and then starts the Node server.
Some potential improvements for this Dockerfile are to add a builder stage and to pin npm to a specific version rather than installing npm@latest, particularly if one plans to stay on a pinned Node.js version for an extended period, as only certain versions of npm are compatible with specific versions of Node.
Caddy
Configuring Caddy is mainly done by editing the 'Caddyfile'. Here's a configuration that automatically renews certificates from Cloudflare:
lichenaut.com {
    tls {
        dns cloudflare {$CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy /api/* http://placeholder:8000
    reverse_proxy /* http://placeholder:3000
    encode zstd gzip
}
And here's one that uses a locally stored certificate and key, which will expire 15 years from its creation date.
lichenaut.com {
    tls certificate.pem private.key
    reverse_proxy /api/* http://placeholder:8000
    reverse_proxy /* http://placeholder:3000
    encode zstd gzip
}
As we can see, both configurations serve as reverse proxies for my uWSGI and Node servers, providing an additional layer of separation between the public internet and my services. This separation enhances security and customizability. My deployment script replaces the "placeholder" instances with the Podman pod's IP address. Naturally, Caddy's capabilities extend far beyond this specific example.
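The "placeholder" substitution the deployment script performs can be illustrated like this. It's sketched in Python for clarity, though my actual script does this in shell, and the IP shown is made up; in practice the address would come from something like `podman pod inspect`.

```python
# Replace the "placeholder" upstream host in a Caddyfile with the pod's IP.
caddyfile = """\
lichenaut.com {
    reverse_proxy /api/* http://placeholder:8000
    reverse_proxy /* http://placeholder:3000
}
"""

pod_ip = "10.88.0.2"  # hypothetical example address
rendered = caddyfile.replace("placeholder", pod_ip)
```

The same Caddyfile template can thus be deployed unchanged; only the rendered copy handed to the container differs per deployment.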
Caddy Dockerfile
Here is the 'caddy' directory's 'Dockerfile':
FROM docker.io/caddy:builder AS caddy-builder
RUN xcaddy build --with github.com/caddy-dns/cloudflare
FROM docker.io/caddy:alpine
WORKDIR /etc/caddy
COPY --from=caddy-builder /usr/bin/caddy /usr/bin/caddy
COPY . .
CMD ["caddy", "run", "--config", "/etc/caddy/Caddyfile"]
As we can see, this Dockerfile first uses the Caddy builder image to compile Caddy with the third-party Cloudflare DNS module. It then copies the resulting binary, along with the Caddy configuration and directory contents, into a fresh Caddy image, finally running Caddy.
Podman
I think my explanation of Podman at the start of this post, combined with the deployment script excerpts I included earlier, is mostly enough to understand how I used Podman for this project. Of course, there's much more you can do with Podman.
I opted for Podman pods over container composition because I preferred having all the components of my website explicitly grouped under the same pod, sharing one network namespace and IP address, which makes more logical sense to me and provides better control. Given that my use case was relatively straightforward, this also simplified the networking involved in connecting my containers.
Systemd Service
To increase the uptime of the site, I connected my site to a systemd service by creating the file '/etc/systemd/system/lichenaut-website.service' with the following contents:
[Unit]
Description=Run personal website on boot
After=network.target
[Service]
User=root
ExecStart=/root/lichenaut-website/deploy_destructive.sh renew
Restart=on-failure
TimeoutStartSec=1000
Type=forking
[Install]
WantedBy=multi-user.target
By configuring this service, my website will automatically start up when the server boots, and will automatically restart if it exits or crashes for any reason.
Firewall
For the Hetzner firewall surrounding my website, I did not modify any egress (outgoing) settings and added the following ingress (incoming) settings, which apply to any IPv4 and IPv6 sources:
• ICMP:any, letting the server respond to ping and similar probes, which makes my website more discoverable.
• TCP:22, enabling SSH access to my server.
• TCP:53, permitting my website to resolve domain names using external DNS servers.
• TCP:80, allowing HTTP traffic.
• TCP:443, allowing HTTPS traffic.
Of course, implementing more advanced security measures, such as ICMP packet type-based filtering or SSH IP-based filtering, would further enhance the security of this basic setup.
End
Thank you for reading this high-level walkthrough of how I created my website. I hope this inspires you to build your own projects, and please don't hesitate to reach out if you'd like me to elaborate on any specific aspect. Additionally, I'd appreciate your feedback on whether I should create a template directory or a script that asks setup questions and generates a starter project similar to this one.
lichenaut