How to set up a classic Doom server in Debian 10 using Zandronum

I have very clear memories of my uncle playing Doom II, back when I was around five years old. Those were my first experiences with videogames, and I remember feeling mesmerized thinking about the endless possibilities that virtual environments could bring to our consciousness. As a child, watching my uncle clumsily navigate those demon-infested dark corridors was a thrill like no other. Eventually my family got its own computer, and I progressively dropped my Legos and replaced them with pixels. Much of my later childhood was spent exploring those virtual worlds. I have very fond memories of them, as vivid as my real ones, and they eventually motivated me to learn programming so that I could create my own worlds (more on that in some future post).

Sweet memories they are. Unfortunately I do not play much nowadays, since time becomes increasingly precious and unfinished projects keep piling up behind me. There is one game, though, that I still play from time to time, and that is Doom, the game that started it all! In my estimation this title (Doom II, actually) gets dangerously close to perfection: weapons, monsters, sound, ambience, gameplay, the feeling of loneliness and despair when trapped in those technological mazes, just perfect. But even more perfect is that, thanks to its modding capabilities and the release of the source code, the Doom community has kept releasing custom source ports and thousands of maps and mods to this day, in a never-ending stream of pure epicness. I was once part of that community; unfortunately somebody stole my laptop, and all my unbacked-up personal creations were lost with it. Only this video remains.

In today's tenth episode of these rambling explorations of my inner bedroom I'm bringing back my old Doom server Penumbra and showing you how to set up your own on a machine running Debian 10. We will be using the source port Zandronum, a multiplayer-focused port that hosts most of the game's online activity.

  1. Install the required libraries (~280 MB):

    sudo apt-get install adwaita-icon-theme at-spi2-core dconf-gsettings-backend \
    dconf-service doomseeker doomseeker-zandronum fluid-soundfont-gm glib-networking \
    glib-networking-common glib-networking-services gsettings-desktop-schemas \
    gtk-update-icon-cache libao-common libao4 libatk-bridge2.0-0 libatk1.0-0  \
    libatk1.0-data libatspi2.0-0 libcairo-gobject2 libcolord2 libdconf1 libdouble-conversion1 \
    libegl-mesa0 libegl1 libepoxy0 libevdev2 libevent-2.1-6 libgbm1 libgnutls-dane0 \
    libgtk-3-0 libgtk-3-bin libgtk-3-common libgudev-1.0-0 libinput-bin libinput10 \
    libjson-glib-1.0-0 libjson-glib-1.0-common libmtdev1 libpcre2-16-0 libproxy1v5 \
    libqt5core5a  libqt5dbus5 libqt5gui5 libqt5multimedia5 libqt5network5 libqt5svg5 \
    libqt5widgets5 libqt5xml5 librest-0.7-0 libsdl1.2debian libsoup-gnome2.4-1 libsoup2.4-1 \
    libunbound8 libwacom-bin libwacom-common libwacom2 libwadseeker2 libwayland-server0 \
    libxaw7 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
    
  2. Create a new folder for the program and go there. You can choose a different directory at your convenience.

    mkdir /srv/zandronum
    cd /srv/zandronum
    
  3. Download and uncompress the Linux binaries. The URL for the most recent version can be found on the Zandronum website.

    wget https://zandronum.com/downloads/zandronum3.0-linux-x86_64.tar.bz2
    tar -xvjf zandronum3.0-linux-x86_64.tar.bz2
    rm zandronum3.0-linux-x86_64.tar.bz2
    
  4. Create a symbolic link to the libcrypto.so.1.0.0 file, which is no longer available as a package in Debian.

    sudo ln -s /srv/zandronum/libcrypto.so.1.0.0 /usr/lib/libcrypto.so.1.0.0
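    To double-check that the dynamic linker now resolves the library (a quick sanity check of my own, not part of the official instructions), you can inspect the binary's dependencies:

    ldd /srv/zandronum/zandronum-server | grep libcrypto

    If everything went well, the output should point to /usr/lib/libcrypto.so.1.0.0.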
    
  5. Change the binary file permissions to make it executable.

    chmod +x zandronum-server
    
  6. If you are using a firewall such as ufw (you should!), decide which port you will be using for Zandronum and create a rule to allow it. The default one is 10666, and the game traffic uses UDP.

    sudo ufw allow 10666/udp
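    We can verify that the rule is active by listing the firewall status:

    sudo ufw status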
    
  7. Create another directory to store the game content files, known as WADs.

    mkdir /home/doom
    cd /home/doom
    
  8. I have prepared a tar file with the required game file doom2.wad plus a few mods and custom maps, which we will now unpack, only for learning purposes of course. If you enjoy the game, consider purchasing a copy.

    wget https://s3.eu-central-1.wasabisys.com/nexus/media/doom.tar
    tar -xvf doom.tar
    rm doom.tar
    
  9. The last step also unpacked a config file that contains our server settings, configured for a survival cooperative game limited to five lives. Edit the file to adjust these values to your liking.

    nano /home/doom/doom.cfg
    
    # This is the server name that will be displayed in Doomseeker (https://doomseeker.drdteam.org/)
    sv_hostname "My cool server | Survival 5 lives" 
    #
    # If you later expose your wad folder to the web, add its URL here
    # for other players to download the wad files directly from you
    sv_website "https://mycoolwebsite.com/wads"
    #
    # This text will be displayed on screen every time a player joins
    # It is formatted using colors:
    # https://yoni0505.blogspot.com/2012/10/zandronum-nametext-coloring.html
    sv_motd "\\ccWelcome to \\crmy server!\n\n\\crsurvival \\cc- \\cd5 lives"
    #
    # Remote connection password
    sv_rconpassword "myrconpassword"
    

    Hit Ctrl + X, Y, ENTER to save and close.
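
    As a side note, this file only scratches the surface: Zandronum exposes many more server cvars. Below is a small example with a few that, to the best of my knowledge, control the survival setup shown above; double-check the names on the Zandronum wiki before relying on them.

    # Example cvars (verify on the Zandronum wiki)
    survival true      # enable the survival game mode
    sv_maxlives 5      # lives per player before turning spectator
    sv_maxclients 16   # maximum connections, spectators included
    sv_maxplayers 8    # maximum active players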

  10. Open a virtual terminal session with screen, so that we can later detach and leave our server running.

    screen
    
  11. Go to the program folder and launch zandronum-server on the port we selected earlier, loading the config and WAD files.

    cd /srv/zandronum
    ./zandronum-server -host -port 10666 -iwad /home/doom/doom2.wad -file /home/doom/hellbnd.zip \
    -file /home/doom/brutalv21.pk3 -file /home/doom/HXRTCHUD_BD21t_v7.7e.pk3 +exec /home/doom/doom.cfg
    
  12. Hit Ctrl + A, then D, to detach from the virtual terminal and leave our server running.
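
If you prefer the server to start on boot instead of living inside a screen session, a systemd unit is a common alternative. The following is only a minimal sketch under my own naming (zandronum.service is a name I made up, and you would append the same -file arguments from step 11 to ExecStart):

    # /etc/systemd/system/zandronum.service
    [Unit]
    Description=Zandronum Doom server
    After=network.target

    [Service]
    WorkingDirectory=/srv/zandronum
    ExecStart=/srv/zandronum/zandronum-server -host -port 10666 -iwad /home/doom/doom2.wad +exec /home/doom/doom.cfg
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

After saving it, sudo systemctl enable --now zandronum would start the server and keep it running across reboots.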

If we now open Doomseeker on our gaming rig we should be able to search for our server name and start playing! I hope this article is helpful to somebody. If you are interested in playing together, contact me on Mastodon and let's organize a session!

This was post #10 in the #100DaysToOffload challenge. As always, thank you for reading and see you next time.


How to install Searx in Debian 10 using nginx

This is diary entry number 9 in the captain's log. I have regained consciousness after several days of cryogenic sleep, which were interrupted by the main computer after it detected some foreign code the algorithm was not able to compute, perhaps related to an outdated package version. It seems the error modified part of the stellar alignment parameters of the main cruise control and took the ship to an unexplored region of space which, as far as I can determine, did not even exist before. Such is the nature of time-space, as experience shows.

In today's operation we will learn how to install Searx, a free (as in freedom, you know) privacy-focused internet metasearch engine which aggregates results from more than 70 search services and has the following features:

  • Can be self-hosted
  • No user tracking or profiling
  • Cookies are not used by default
  • Secure, encrypted connections (HTTPS/SSL)
  • Can proxy web pages
  • Can be set as default search engine
  • Customizable (theme, search settings, privacy settings)

Our first step will be getting all the required libraries:

sudo apt-get install git build-essential libxslt-dev python3-dev \
virtualenv python3-babel zlib1g-dev libffi-dev libssl-dev

Now we will clone the main repository at /srv/searx (feel free to choose a different one):

cd /srv
git clone https://github.com/searx/searx.git

We then move to the new location and create a Python 3 virtual environment, a good practice to isolate package versions between projects (Python 3, because the web app is launched with python3 later on):

cd searx
virtualenv -p python3 env

We activate the virtual environment:

source env/bin/activate

And run manage.sh to update Searx packages:

./manage.sh update_packages

We can get out of the virtual environment for now:

deactivate

Our next goal will be to configure nginx to correctly serve Searx on the web. We will first create a new file in sites-available and edit it:

sudo nano /etc/nginx/sites-available/searx

We now add the following configuration text, replacing the server_name parameter with our own domain. Make sure to also update the alias path of the /static location if you chose a different installation directory earlier:

server {
    listen 80;
    server_name yourdomain.com;

    location /static {
        alias /srv/searx/searx/static;
    }

    location / {
        proxy_pass http://127.0.0.1:8888;
        proxy_set_header Host $host;
        proxy_set_header Connection       $http_connection;
        proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme         $scheme;
        proxy_buffering                   off;
    }
}

To save and close the file, hit Ctrl + X, Y, Enter.

We then create a symlink to sites-enabled:

sudo ln -s /etc/nginx/sites-available/searx /etc/nginx/sites-enabled
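
Before restarting, it doesn't hurt to validate the new configuration:

sudo nginx -t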

And restart nginx:

sudo systemctl restart nginx

Almost finished! We will now edit the Searx settings file:

nano /srv/searx/searx/settings.yml

And we'll set the following variables to our instance name, contact mail and an invented secret key:

instance_name : "MyCoolName"
contact_url: mailto:contact@ourdomain.com
secret_key : "r5Ekg75K865eyj8jhm757Lqq" # Change this!
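
Rather than inventing a key by hand, you can generate a random one. Here is a small sketch using openssl, assuming the default placeholder value in settings.yml is still named ultrasecretkey (as it was in the versions I have seen):

sed -i "s|ultrasecretkey|$(openssl rand -hex 16)|g" /srv/searx/searx/settings.yml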

Now we only need to run Certbot, which will provision an HTTPS certificate and update our nginx config files automatically (this assumes Certbot and its nginx plugin are already installed):

sudo certbot --nginx -d ourdomain.com

That is all for our installation! The last step will be to run the web app. In order to leave it running, we will use the screen command to isolate a terminal session:

screen

We finally go to the Searx directory, activate the virtual environment and run the python script:

cd /srv/searx
source env/bin/activate
python3 searx/webapp.py

To close the virtual terminal session, hit Ctrl + A, D. There, now the session will keep running in the background. If we ever want to get back to it, for instance to check the Searx logs in real time, we can run:

screen -r
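
As a quick sanity check that the app is answering locally on port 8888 (matching the proxy_pass directive we configured earlier), we can request the headers from another shell:

curl -I http://127.0.0.1:8888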

If you lack the time or resources to set up Searx yourself, you can also use my personal instance eXplora or any other public one. eXplora does not keep logs, nor does it profile anyone, but of course that is what everyone says these days. Time for me to get back to the cryochamber. In case you come from the future and manage to find this text recording floating in the metaverse after an alien race has wiped out human consciousness, thank you for reading and see you after the next beat.


How to mount a Wasabi S3 bucket in Debian 10

Welcome to this remote place of the cosmos, stranger. Today we will learn how to mount a Wasabi bucket as a virtual drive in Debian 10, allowing us to dramatically increase storage space in our VPS or home server for a small amount of money.

Wasabi is a root vegetable, green in color, from the same family as broccoli, cabbage, cauliflower, and mustard. It is also a cloud storage provider that competes with solutions like Amazon S3 by offering full compatibility, increased speed and no transfer costs. With a price of $5.99 per terabyte per month and free transfers, it surely is a much cheaper option than Amazon, plus you don't get to feed the beast, which is good.

Before continuing, it is important to consider that S3 is not a true file system. It is, for instance, eventually consistent, meaning that if several servers are working on the same bucket, one can be served new content while another is served old content. Being a cloud abstraction, it also has higher latency than a physical file system. Finally, Wasabi applies a 30/90-day minimum retention policy, so you still pay for that storage time after changing or deleting an object. This makes it a no-go if you want to store large amounts of frequently modified files.

Having said that, if you are looking for cheap cloud storage to keep static files such as media content, it is a reasonable solution! I personally use it to store static files from my self-hosted web services and also to back up my music and ebook collection. Ready to expose your precious data to the Cloud? Let's see how we can do it.

The first step is to create a new bucket and an access key in Wasabi's control panel. Remember to write down your secret key! As far as I know, it cannot be retrieved later.

Once our bucket is ready, we can proceed to install s3fs, the tool we will be using to mount the bucket. Connect to your terminal and install it through apt:

sudo apt install s3fs awscli -y

In order to check if the installation was successful, run:

which s3fs

If it returns /usr/bin/s3fs, then it's all good. Our next step is to create a file containing our Wasabi bucket credentials. We will save them at /srv/.passwd-s3fs (you can store them somewhere else if you want!):

echo YOURBUCKETACCESSKEY:YOURBUCKETSECRETKEY > /srv/.passwd-s3fs

Now we need to change the file permissions:

chmod 600 /srv/.passwd-s3fs

Almost there! Next step is to create the directory we want to use to mount our bucket. I like placing it at /home, but of course you can set up a different one:

mkdir /home/wasabi

Those are all the preparations we need. Now we will just run s3fs to mount our bucket, and that's it! The first parameter is our bucket name (I named mine "home-wasabi"). The second is the directory we created at /home. The third is the path to our password file. The last parameter is Wasabi's service URL, which depends on the region you created the bucket in. I live in Spain, so EU Central for me:

s3fs home-wasabi /home/wasabi -o passwd_file=/srv/.passwd-s3fs -o url=https://s3.eu-central-1.wasabisys.com
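
To confirm that the mount succeeded, we can check that it shows up among the mounted file systems:

df -h /home/wasabi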

And voilà! Our bucket should now be mounted. You can test it by dropping a file into the bucket from Wasabi's control panel; it should then also be accessible from the terminal. There is just one final detail! The moment our machine gets rebooted, the mount will be gone, sad face. There are multiple ways to make the bucket mount automatically after every reboot. I'm a big fan of cron for its simplicity, and it provides a @reboot directive precisely for that! In order to add a cron job, run:

crontab -e

And add the s3fs command we ran previously:

@reboot s3fs home-wasabi /home/wasabi -o passwd_file=/srv/.passwd-s3fs -o url=https://s3.eu-central-1.wasabisys.com
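
As an alternative to cron, s3fs also supports mounting through /etc/fstab using the fuse.s3fs type. This is just a sketch reusing the names from above; check the s3fs documentation for the options that fit your setup:

home-wasabi /home/wasabi fuse.s3fs _netdev,passwd_file=/srv/.passwd-s3fs,url=https://s3.eu-central-1.wasabisys.com 0 0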

Hit Ctrl + X, then Y, then Enter to save and close, and there you go! Now the bucket will be automatically mounted every time our machine reboots. That is all! I hope this entry was useful to you; feel free to ask any questions you may have! I am by no means an expert, but I'll try to help as much as I can.

This has been post #8 in the #100DaysToOffload challenge. As always, thank you for reading and see you next time.
