So, maybe you’ve heard about the Musk takeover of Twitter, and maybe you’ve heard about the mass exodus of users from the old platform. Personally, I try not to knee-jerk react to things like this, but I haven’t been exceedingly active on Twitter lately, and I have long had an interest in decentralized options like Diaspora, Friendica, and Mastodon. Decentralized social media fits the model the World Wide Web’s founders envisioned! It takes your data and puts it back in your hands, or at least in the hands of someone you can identify and maybe even trust. So when I heard many of my friends from Twitter were starting to migrate like a herd of pachyderms over to Mastodon, I decided it was finally time to set up my own instance. I had tried Diaspora a few times, and Friendica for a little while, but neither seemed great. I love Diaspora, but it doesn’t federate outside of its own platform. And Friendica… well, the UI left a bit to be desired. Mastodon, though, feels like Twitter, the UI is pretty slick, and there are some long-standing instances out there packed with folks I’ve already connected with on other platforms. It’s time.

What is Mastodon?

Others have already put great explanations out there on what Mastodon is, so I’ll be brief here. Mastodon is an open-source, decentralized social network. It federates with other instances using a protocol called ActivityPub, so any instance can reach any other instance (unless they can’t, because of network issues or admin-level blocks/restrictions). The overall feel is similar to Twitter, mechanically anyway: short-form posts (called toots) and familiar syntax like @user, except that if you’re interacting with a user from a different instance, it’s @user@instance.domain. And development is pretty active, so the code base is always maturing.


However, like so many software projects today, the ecosystem seems focused on one particular Linux distribution. This happens a lot: developers design their app with their chosen distribution in mind, and the path of least resistance is then to deploy on that platform. Luckily, today’s world offers solutions! Containerization has removed the distro boundaries. If your app can be containerized, it can run on just about anything with a container runtime. So now I can run your app on the platform I am most comfortable running it on, even if inside that container it’s running a different distribution.

For me, that platform is RHEL, and the container tool is Podman. This still offers some challenge, though, as most of the info online about running Mastodon in a container is very Docker-centric. Podman is designed to make this easy. The big snag is docker-compose: yes, it’s supposed to work with Podman now, but I have never had good luck there. Instead, I like to use podman play kube, which consumes Kubernetes YAML files. (These are not quite Helm charts; a Helm chart is a templated package that renders into plain Kubernetes manifests, which is what podman play kube reads.) So I normally handle this by building the app manually, using the docker-compose file as a reference. Then I use podman generate kube to spit out a YAML file. That file gives me a pod definition that I can modify and import back into Podman. And THAT is the background you might want before continuing with this article. If you’re interested in running Mastodon in a more traditional Docker environment, there are lots of other write-ups on that, one of which I followed for reference! You can find that article here. It is also worth mentioning that Mastodon’s git repo already has a Helm chart for Kubernetes, but I do not know if it will run directly in Podman. You can check that out here.
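To make that workflow concrete, here is a rough sketch of the loop I described; the pod and file names are just examples:

```shell
# Build the pod and its containers by hand first (podman pod create,
# podman run --pod ... for each service), then capture the whole thing
# as a Kubernetes-style manifest:
podman generate kube mastodon > mastodon-pod.yaml

# Edit the YAML to taste, tear down the hand-built pod, and re-create
# it from the manifest so the setup is repeatable:
podman pod rm -f mastodon
podman play kube ./mastodon-pod.yaml
```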


Now, you’ll need a few things. Any service you intend to host has some requirements, usually in hardware, storage, and accessibility.


I decided to run my instance at home. I have an old server that I use for my home lab and home services. I made a RHEL 9 VM with 4 CPUs and 8 GB of memory. You can get RHEL with a developer subscription if you don’t otherwise have access to it, or you could go with a re-spin like Rocky. If you’re planning to run this instance as more of a public service to expand Mastodon’s capacity, I’d recommend using a distro with some reliability and support. As with any open-source deployment, though, this is entirely up to you.


I have a Synology NAS that I use for VM storage, and lots of other things, so I decided to use it over NFS for this project. If you don’t have one, you could build an NFS server, use local storage, or heck, even use an S3-compatible object store. In the end you need a place to store media, your database, and Redis data. I ended up defining a few volumes in Podman.

[root@social0 ~]# podman volume list
DRIVER      VOLUME NAME
local       mastodon-vol-db
local       mastodon-vol-pubsys
local       mastodon-vol-redis

These volumes can be defined in yaml as well, and I did that so I could re-create this whole setup later if needed. This is what one of the volumes looks like:

[root@social0 mastodon]# cat vol-db.yaml 
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
# Created with podman-4.1.1
apiVersion: v1
kind: PersistentVolumeClaim
    volume.podman.io/driver: local
    volume.podman.io/type: nfs
    volume.podman.io/o: async
  creationTimestamp: "2022-11-04T03:44:43Z"
  name: mastodon-vol-db
  - ReadWriteOnce
      storage: 1Gi
status: {}

Obviously your definitions will look different. Podman will also allow you to use S3 as a back-end here, so if you’re deploying on a cloud provider you may be able to save some money on storage right off the bat by putting S3 behind Podman. If you’re using local directory paths as your storage, you’ll need to use hostPath; I’ve included commented examples of how those might work in the YAML we’ll work with later.


Now, how can people access your instance? At bare minimum, you’re going to need a domain. I decided to use my existing domain with a new hostname. Since I am self-hosting out of my basement, I also needed some method to get access from the world at large into that VM; I used Cloudflared for that. Your situation may vary.

Other things

A few other things you’ll need to support Mastodon.

  • SSL certificate
    • I used Let’s Encrypt; I highly recommend it.
  • SMTP: you need some way of sending mail from your instance. I ended up with AWS’s Simple Email Service.

Mastodon’s Services

Mastodon isn’t a single service. Like many modern web applications it requires a database, a message queue, and a few other things. The base setup using Mastodon’s example docker-compose deploys the following services.

  • Postgres 14
  • Redis 7
  • Mastodon – web
  • Mastodon – Streaming
  • Mastodon – Sidekiq

You can also deploy Elasticsearch, but I did not; I may add it in later. And there is some config in there for federating with Tor instances; I didn’t bother with that either.

The three Mastodon containers all run the same code base, just with different options. For Postgres, I would recommend locking at the suggested version (14) until the Mastodon folks tell us to upgrade. For Redis, honestly, I do not know how important the version is; I just stuck with the version in the compose file.

For Mastodon itself, I would recommend checking their releases page and choosing a release you are comfortable with. I picked 3.5.3 because it was the latest release when I deployed. 4.0.0 RC1, RC2, and RC3 have been released since I deployed (a week ago!), and several instances have already updated with great success. I am sticking with 3.5.3 until 4.0 comes out of RC.

Now, let’s get to work


On RHEL (or one of its clones) you’ll need Podman and git, and I chose to use nginx as my ingress proxy. So let’s get these installed.

$ sudo dnf install -y nginx container-tools git

Key generation

First we’ll need to generate some keys from Mastodon itself. The container makes this relatively easy. First, the secret key and the OTP secret; remember to replace v3.5.3 with the version of Mastodon you intend to run.

$ podman run -it --rm docker.io/tootsuite/mastodon:v3.5.3 bundle exec rake secret

That random string is the secret key; note it down as SECRET_KEY_BASE.

Next is the OTP secret. We use the exact same command, get another string, and note that down as OTP_SECRET.

$ podman run -it --rm docker.io/tootsuite/mastodon:v3.5.3 bundle exec rake secret

Now, same concept, but we need to generate the VAPID keys.

$ podman run -it --rm docker.io/tootsuite/mastodon:v3.5.3 bundle exec rake mastodon:webpush:generate_vapid_key

And, you guessed it, note those down.
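The VAPID task prints both halves of the key pair, one per line; the output looks roughly like this (the values shown are placeholders):

```
VAPID_PRIVATE_KEY=<base64 private key>
VAPID_PUBLIC_KEY=<base64 public key>
```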


Now is where you need to define those volumes. I did the rest as root, because Podman’s NFS volume support seems to require root at the moment. Otherwise, I don’t see any reason you couldn’t run the entire stack, aside from the nginx config, as a non-root user. This is an example of my database volume; you’ll need to do this (or the equivalent for whatever storage back-end you choose) for mastodon-vol-db, mastodon-vol-redis, and mastodon-vol-pubsys.

podman volume create mastodon-vol-db --opt type=nfs --opt o=async --opt device=<nfs-server>:/<export-path>

Local paths are a bit problematic, as I have not found a way to define a hostPath (that is, a local path in your filesystem) as a volume in Podman. So I’ve provided some blocks at the end of the YAML definition that will let you specify local paths instead of volumes if needed. More on that later.
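For reference, a hostPath block in the pod’s YAML looks something like this sketch (the path is a placeholder; point it at wherever your data actually lives):

```yaml
# In the pod spec, replace the named-volume reference with a hostPath:
  - name: mastodon-vol-db
      path: /srv/mastodon/db   # placeholder local directory
      type: Directory
```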

The names are important. It doesn’t matter what storage back-end you use, as long as the names match up, because later, when we deploy the YAML template with podman play, it will reference those volume names internally.
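To illustrate why the names matter, the pod definition wires the volumes up by name; a (hypothetical) excerpt looks like this, and the claimName must match the volume you created above:

```yaml
# container side: where the volume gets mounted
    - name: mastodon-vol-db
      mountPath: /var/lib/postgresql/data
# pod side: the claim that podman play resolves to your volume
  - name: mastodon-vol-db
      claimName: mastodon-vol-db
```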

Podman Play

So now that we have the pieces we need, let’s tie them all together. This is the piece that took me a bit to work out, and it has been through two revisions at this point. First I defined a pod and all of the containers, then I used podman generate kube to throw all of that into a YAML definition. This gives you a pretty basic all-in-one dump of the config. The piece that I found a little clumsy was the environment variables. Mastodon uses environment variables for a lot of its config, normally defined in a .env.production file. I’ve since moved this into a ConfigMap. I ran into some issues reading the ConfigMap with Podman, though: you’re supposed to be able to read the entire ConfigMap into the environment, but for some reason that didn’t work, and I had to map the env variables to the ConfigMap one by one. If I ever work around this, I’ll update it. I have all of this available in a GitHub repo here.
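The one-by-one mapping I mentioned uses the standard Kubernetes configMapKeyRef form; each container in mastodon-pod.yaml ends up with entries along these lines for every variable it needs:

```yaml
    - name: POSTGRES_USER
          name: mastodon-env   # the ConfigMap defined below
          key: POSTGRES_USER
```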

Go ahead and grab that repo.

$ git clone

Now you’ll have to edit the ConfigMap to match your config: those keys we generated earlier, your SMTP config, the email address you’d like notifications to come from, and the domain name of your instance. It should end up looking something like this:

apiVersion: v1
# This ConfigMap is used to define the env variables that Mastodon uses.
# In podman, I can't seem to just pass all variables to a container, so I had to map them all in the container definitions.
# If you add a new variable here, or uncomment one, you will need to add or uncomment it in the mastodon-pod.yaml as well.
#    Each container has an env definition, with mappings for env to keys. You will need to add those to each container that needs your new variable.
#    This was the cleanest way I could get this done in podman.
kind: ConfigMap
  name: mastodon-env
  # DB Config
  POSTGRES_USER: "mastodon"
  POSTGRES_PASS: "SomePassword"
  POSTGRES_MASTER_PASS: "SomeOtherPassword"
  POSTGRES_DB: "mastodon_production"
  DB_PORT: "5432"
  DB_HOST: "localhost"
  # Site Config
  SECRET_KEY_BASE: "105bd1c7adbb7840ac2752780708d36104a08ff052da5978ab3d6e43ae11a2bdbd44917083404f29d3b9d8a76404799cce2b645812881c0051ef3d5d64ff23de"
  OTP_SECRET: "2cebfeff327bdabd9d08784335ea94cc0b3282fc137247aa18a1a7bb1e93280ea231d29041471104e1323b273a0113b565ea1d305e9907c026061e388889abb7"
  VAPID_PUBLIC_KEY: "BIsVCz2PYfdl6DTBIf4Q_CKnAFpk229V6VH0UL2eW_aF6Ms5JD0--DE3qA_QmxyOOutg7forhyWb-hjjXR56O0U="
  # Mail Config, SMTP settings explanations here:
  SMTP_PORT: "587"
#  SMTP_TLS: "false"
#  SMTP_SSL: "false"
  # Redis Config
  REDIS_HOST: "localhost"
  REDIS_PORT: "6379"

Once that’s configured, you can start up the containers with:

podman play kube --configmap ./mastodon-configmap.yaml ./mastodon-pod.yaml

What that command does is pull down any container images defined in mastodon-pod.yaml and define the configured containers based on that definition and the ConfigMap in mastodon-configmap.yaml. It’s a great way to define your config in a repeatable way. However, the first run doesn’t really get much done. If you ran podman ps right now, you’d probably see that postgres is running, and Redis, maybe sidekiq and streaming, but the important one, web, is not. That’s because on the first run postgres is still creating the database, so the other containers can’t connect to it yet.

# podman logs mastodon-web
=> Booting Puma
=> Rails 6.1.6 application starting in production
=> Run `bin/rails server --help` for more startup options
/opt/mastodon/vendor/bundle/ruby/3.0.0/gems/activerecord-6.1.6/lib/active_record/connection_adapters/postgresql_adapter.rb:83:in `rescue in new_client': could not connect to server: Connection refused (ActiveRecord::ConnectionNotEstablished)

However, if we check on postgres, we should see messages about the database getting created, and maybe even some errors from Mastodon trying to connect and run queries that don’t work yet.

That’s because the actual Mastodon database doesn’t exist yet. We have to tell Mastodon to create it using the included setup script. So, one more time, we need to run Mastodon in a temporary container. We’ll need some postgres info for this step, so have your ConfigMap file handy.

podman run -it --rm --pod mastodon docker.io/tootsuite/mastodon:v3.5.3 bundle exec rake mastodon:setup

Most of the answers you give here mean nothing; they’re just so you can get Mastodon instantiated, so don’t be too concerned. You will, however, have to provide correct database and Redis info for this to work. This is also a fun time to test your mail config (you don’t have to), so if you want, answer the mail questions properly and send the test email. And of course you get a chance to create your admin account, which will get created and locked out; we’ll reset the password in a bit.

# podman run --pod mastodon -it --rm docker.io/tootsuite/mastodon:v3.5.3 bundle exec rake mastodon:setup
Your instance is identified by its domain name. Changing it afterward will break things.
Domain name:

Single user mode disables registrations and redirects the landing page to your public profile.
Do you want to enable single user mode? No

Are you using Docker to run Mastodon? Yes

PostgreSQL host:
PostgreSQL port: 5432
Name of PostgreSQL database: mastodon_production
Name of PostgreSQL user: mastodon
Password of PostgreSQL user:
Database configuration works! 🎆

Redis host:
Redis port: 6379
Redis password:
Redis configuration works! 🎆

Do you want to store uploaded files on the cloud? No

E-mail address to send e-mails "from": Mastodon <>
Send a test e-mail with this configuration right now? no

This configuration will be written to .env.production
Save configuration? Yes
Below is your configuration, save it to an .env.production file outside Docker:

# Generated with mastodon:setup on 2022-11-14 02:18:37 UTC

# Some variables in this file will be interpreted differently whether you are
# using docker-compose or not.

It is also saved within this container so you can proceed with this wizard.

Now that configuration is saved, the database schema must be loaded.
If the database already exists, this will erase its contents.
Prepare the database now? Yes
Running `RAILS_ENV=production rails db:setup` ...

Database 'mastodon_production' already exists

All done! You can now power on the Mastodon server 🐘

Do you want to create an admin user straight away? Yes
Username: myAdminAccount
You can login with the password: 732149811c1de80302436994c1e176d7
You can change your password once you login.

NOW! We should be able to start up the web and sidekiq containers.

# podman pod restart mastodon
[root@social1 defs]# podman ps
CONTAINER ID  IMAGE                                    COMMAND               CREATED         STATUS            PORTS                                                                   NAMES
91f084bdfcc1  localhost/podman-pause:4.1.1-1658931970                        27 minutes ago  Up 9 seconds ago>3000/tcp,>4000/tcp,>9200/tcp  66029b1c3d82-infra
63177673da71     postgres              27 minutes ago  Up 7 seconds ago>3000/tcp,>4000/tcp,>9200/tcp  mastodon-db
7415ca02e4e7         redis-server          27 minutes ago  Up 7 seconds ago>3000/tcp,>4000/tcp,>9200/tcp  mastodon-redis
d7e28ca844fa      ./streaming           27 minutes ago  Up 8 seconds ago>3000/tcp,>4000/tcp,>9200/tcp  mastodon-streaming
911c06a76b03      exec sidekiq          27 minutes ago  Up 8 seconds ago>3000/tcp,>4000/tcp,>9200/tcp  mastodon-sidekiq
0695fe6729d4      -c rm -f /mastodo...  27 minutes ago  Up 8 seconds ago>3000/tcp,>4000/tcp,>9200/tcp  mastodon-web

And now we need to fix that admin user password. It showed you a password during setup, but when I did this, that password for some reason didn’t work. Maybe yours will, but I had to reset it. Luckily, Mastodon has a command-line tool, tootctl, for local administration, and of course we can access it within our container.

# podman exec -it mastodon-web tootctl accounts modify myAdminAccount --reset-password
New password: b7288a2c451d80b272bf30eaa748d96e

Boom! Admin password. Of course “myAdminAccount” gets replaced with whatever admin username you specified during setup.


So, awesome: we theoretically have Mastodon running. Now how do we get to it?! There are lots of ways to do this; you could make an ingress pod running nginx, Traefik, whatever. Personally, I like to put nginx in front of my pods, and I do it on the Podman host. This way, things like SSL and other config can be put in place in a more traditional manner. You also need to serve some static assets that come from the Mastodon git repo, so running nginx on the host makes this super simple.

We’ll need nginx, and the git repo cloned to our system. I placed it in /srv/mastodon/static/ because I keep container data there, but you can put it somewhere more traditional, like /home/mastodon; just remember where you cloned it. Remember to replace the --branch value with the version you’d like.

# pwd
/srv/mastodon/static
# git clone https://github.com/mastodon/mastodon.git --branch v3.5.3
Cloning into 'mastodon'...

Now, in the ./mastodon directory, you should have the v3.5.3 (or whatever release you cloned) code, including a /public directory. You will also find a /dist/ directory, which has example configs for nginx and systemd. Let’s not worry about the systemd units, as we’re using Podman. That nginx config, though… make a directory in /etc/nginx for virtual hosts, and then copy the file into it.

[root@social1 dist]# mkdir -p /etc/nginx/virt.d
[root@social1 dist]# cp ./nginx.conf /etc/nginx/virt.d/mastodon.conf
[root@social1 dist]# restorecon -vFR /etc/nginx/virt.d
Relabeled /etc/nginx/virt.d from unconfined_u:object_r:httpd_config_t:s0 to system_u:object_r:httpd_config_t:s0
Relabeled /etc/nginx/virt.d/mastodon.conf from unconfined_u:object_r:httpd_config_t:s0 to system_u:object_r:httpd_config_t:s0
[root@social1 dist]#

And now we need to configure nginx. In /etc/nginx/nginx.conf, we need to add an include directive for virt.d. Around line 40 of that file, I added:

    include /etc/nginx/virt.d/*.conf;
    server_names_hash_bucket_size 64;

I am not certain what the hash bucket size does for us, but it was recommended, so I added it. This was added within the http block, right under the existing include for conf.d. It’s a long story why, but I like to add my own virt.d for virtual hosts and leave conf.d for config additions. At this point, you’ll also need to figure out SSL. I am not going to go into that here, but I will say that Let’s Encrypt integrates auto-renewal quite nicely with certbot and DNS validation through Cloudflare. Next, we need to edit the config.
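For what it’s worth, issuing the certificate with certbot’s Cloudflare DNS plugin looks roughly like this; the domain and credentials path are placeholders, and this assumes the python3-certbot-dns-cloudflare plugin is installed:

```shell
# DNS-01 validation through Cloudflare; certbot's timer handles renewals
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d social.example.com
```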

In both server blocks (for HTTP and HTTPS), you’ll need to set server_name to the domain name you’re serving Mastodon on. The root directive also needs to be set to wherever you stuck that public folder from the Mastodon git repo; in my case, /srv/mastodon/static/mastodon/public. I also found it very useful to add access and error logs for both HTTP and HTTPS. And, of course, specify your SSL certificates.

My production nginx config ended up looking something like this (example.com stands in for my real domain):

map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;

upstream backend {
    server fail_timeout=0;

upstream streaming {
    server fail_timeout=0;

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=CACHE:10m inactive=7d max_size=1g;

server {
  listen 80;
  listen [::]:80;
  server_name social.example.com;
  root /srv/mastodon/static/mastodon/public;
  location /.well-known/acme-challenge/ { allow all; }
  location / { return 301 https://$host$request_uri; }
  access_log            /var/log/nginx/social-access.log combined;
  error_log             /var/log/nginx/social-error.log;

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name social.example.com;
  access_log            /var/log/nginx/social-tls-access.log combined;
  error_log             /var/log/nginx/social-tls-error.log;

  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;

  ssl_certificate     /etc/letsencrypt/live/social.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/social.example.com/privkey.pem;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 80m;

  root /srv/mastodon/static/mastodon/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml image/x-icon;

  # If Docker is used for deployment and Rails serves static files,
  # then `try_files $uri =404;` needs to be replaced with `try_files $uri @proxy;`.
  location / {
    try_files $uri @proxy;

  location = /sw.js {
    add_header Cache-Control "public, max-age=604800, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;

  location ~ ^/assets/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;

  location ~ ^/avatars/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;

  location ~ ^/emoji/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;

  location ~ ^/headers/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;

  location ~ ^/packs/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;

  location ~ ^/shortcuts/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;

  location ~ ^/sounds/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;

  location ~ ^/system/ {
    add_header Cache-Control "public, max-age=2419200, immutable";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;

  location ^~ /api/v1/streaming/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Proxy "";

    proxy_pass http://streaming;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";

    tcp_nodelay on;

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://backend;
    proxy_buffering on;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    #proxy_cache CACHE;
    #proxy_cache_valid 200 7d;
    #proxy_cache_valid 410 24h;
    #proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    add_header X-Cached $upstream_cache_status;

    tcp_nodelay on;

  error_page 404 500 501 502 503 504 /500.html;

Now, you should be able to test the nginx config, enable it, and start it up.

[root@social1 dist]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@social1 dist]# systemctl enable nginx.service --now
Created symlink /etc/systemd/system/ → /usr/lib/systemd/system/nginx.service.
[root@social1 dist]#

Oh, and don’t forget about the firewall….

[root@social1 dist]# firewall-cmd --add-service http --add-service https --permanent
[root@social1 dist]# firewall-cmd --reload

And that, my friends, should be that!


So, there you have it; that should get you up and running. Next time an update for Mastodon comes out, you should be able to switch the versions in the YAML file, take down the pod, and re-create it with podman play, following whatever upgrade instructions the Mastodon folks recommend, of course.
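Assuming your Podman supports podman play kube --down (4.x does), an upgrade might look roughly like this sketch; the db:migrate step is the usual one from Mastodon’s release notes, and you’d bump the image tags in the YAML first:

```shell
# 1. Edit mastodon-pod.yaml and change the mastodon image tags to the new release
# 2. Tear down the running pod and re-create it from the updated definition
podman play kube --down ./mastodon-pod.yaml
podman play kube --configmap ./mastodon-configmap.yaml ./mastodon-pod.yaml

# 3. Run any pending database migrations, per Mastodon's release notes
podman exec -it mastodon-web bundle exec rake db:migrate
```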

I hope you’ve found this helpful! And Happy Tooting! 😛