UnconsciousBot likes generating procedural text based on Carl Jung's writings

It turns out this blog is not dead, it was only sleeping! You see, my mind works in a somewhat obsessive way. Whenever I find something I enjoy, I tend to pour all my being into it, and it becomes my sole focus of attention. For a period of time, I live exclusively for that goal, leaving everything else aside. Eventually I burn out, say "fuck it" and leave it for something else. I am aware that this behavior is somewhat pathological, even though I get extremely productive when I stay in the zone. Nevertheless, I try to make some effort to create balance in my life, with greater or lesser success. In fact, this blog was created with the intention of subjecting myself to some sort of posting discipline, which I managed to maintain... for a few days. But here we are again!

Leaving my self-diagnosed psychological evaluation aside, the actual point of this post is to answer a request from a fellow Mastodoner asking how UnconsciousBot, one of my latest digital minions, works. This Mastodon bot procedurally generates text based on Jung's writings, works that are by themselves full of symbolic synchronicity potential, but even more so when we apply some randomness! As with everything we have previously seen here, the results look more complex than the implementation. The human race is so prolific that it is increasingly difficult to create something that does not already exist, and in this case too it was just a matter of finding the proper library. Thanks to ddycai, we can use his random sentence generator to create random sentences from an input file using Markov chains. This library acts as a handy wrapper for the Natural Language Toolkit, an interesting framework designed to work with human language data.

So how does it work? To generate text, we feed the algorithm a text file whose sentences are tokenized. A random sentence is then selected, and its first group of words is searched across the text file for matches. The sentence is joined with another random sentence that matches those words, and the process repeats until a period is found. This way, the bigger the group of words, the stricter the results will be. For instance, many matches can be found if the group consists of two words, while a group of five words will mostly return literal sentences from the text. For UnconsciousBot I only used Aion as the text source, but the bigger the input, the more possibilities. At some point I would like to improve it by adding more of Jung's books, but even as it is now, it tends to give interesting results from time to time. There are also some spacing issues in the original text file that I should fix at some point. Maybe tomorrow.
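To make the mechanism concrete, here is a toy Markov-chain sketch of the matching-and-joining process described above (a hypothetical illustration, not ddycai's actual library; build_chain and generate are names I made up):

```python
import random

def build_chain(tokens, order=2):
    # Map each tuple of `order` consecutive words to the words observed after it
    chain = {}
    for i in range(len(tokens) - order):
        state = tuple(tokens[i:i + order])
        chain.setdefault(state, []).append(tokens[i + order])
    return chain

def generate(chain, order=2, max_words=30):
    # Start from a random state and walk the chain until a period (or the cap)
    state = random.choice(list(chain.keys()))
    words = list(state)
    while words[-1] != "." and len(words) < max_words:
        followers = chain.get(tuple(words[-order:]))
        if not followers:  # dead end: no continuation observed for this state
            break
        words.append(random.choice(followers))
    return " ".join(words)

tokens = "the self is a symbol of wholeness . the self is an archetype .".split()
print(generate(build_chain(tokens)))
```

With only two sentences of input the output is nearly literal; feed it a whole book and the chains start wandering between sentences, which is where the fun begins.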

Here is the code, which is an adapted version of ddycai's script; it can also be found on my GitHub:

import requests, json, os
import nltk
from mastodon import Mastodon
from nltk import word_tokenize
from sentence_generator import Generator

# Mastodon token and domain
mastodon = Mastodon(
    access_token = 'asdf',
    api_base_url = ''  # your instance URL
)

# How many words we are taking (the bigger the variable, the stricter it will be)
words = 2
text = ""
found = False

# We open the file and apply some tokenizing magic
while(found == False):
    with open("text.txt", 'r', encoding='utf-8') as f:
        sent_detector ='tokenizers/punkt/english.pickle')
        sents = sent_detector.tokenize(
        sent_tokens = [word_tokenize(sent.replace('\n', ' ').lower()) for sent in sents]
        generator = Generator(sent_tokens, words)

        # We capitalize the first word to make it pretty
        text = generator.generate().capitalize()

        # We only accept the result if it is smaller than Mastodon's character limit
        if(len(text) < 500):
            found = True

# Send result to Mastodon
mastodon.toot(text)

I hope this entry was somewhat useful! If not then I guess I failed miserably. This was post #11 in the #100DaysToOffload challenge. At this point, my personal goal with the challenge is to reach a hundred posts, no matter how long it takes, since I am more fond of quality over quantity. As always, thank you for reading and see you next time.

How to set up a classic Doom server in Debian 10 using Zandronum

I have very clear memories of my uncle playing Doom II, back when I was around five years old. Those were my first experiences with videogames, and I remember feeling mesmerized thinking about the endless possibilities that virtual environments could bring to our consciousness. As a child, watching my uncle clumsily navigate those demon-infested dark corridors was a thrill like no other. Eventually my family got its own computer, and I progressively dropped my Legos and replaced them with pixels. Much of my later childhood was spent exploring those virtual worlds. I have very fond memories of them, as vivid as my real ones, and they eventually motivated me to learn programming so that I could create my own worlds (more on that in some future post).

Sweet memories they are. Unfortunately I do not play much nowadays, since time becomes increasingly precious, and unfinished projects keep accumulating on my back. There is one game though that I still play from time to time, and that is Doom, the game that started it all! In my estimation this title (Doom II actually) gets dangerously close to perfection: weapons, monsters, sound, ambience, gameplay, the feeling of loneliness and despair when trapped in those technological mazes, just perfect. But even more perfect is that, thanks to the modding capabilities and the release of the source code, the Doom community has to this day kept releasing custom source ports and thousands of maps and mods in a neverending stream of pure epicness. I was once part of that community; unfortunately somebody stole my laptop and, having no backups, I lost all my personal creations with it. Only this video remains.

In today's tenth episode of these rambling explorations of my inner bedroom, I'm bringing back my old Doom server Penumbra and showing you how to set up your own on a machine running Debian 10. We will be using the source port Zandronum, a multiplayer-focused port that concentrates most online game activity.

  1. Install the required libraries (~280 MB):

    sudo apt-get install adwaita-icon-theme at-spi2-core dconf-gsettings-backend \
    dconf-service doomseeker doomseeker-zandronum fluid-soundfont-gm glib-networking \
    glib-networking-common glib-networking-services gsettings-desktop-schemas \
    gtk-update-icon-cache libao-common libao4 libatk-bridge2.0-0 libatk1.0-0  \
    libatk1.0-data libatspi2.0-0 libcairo-gobject2 libcolord2 libdconf1 libdouble-conversion1 \
    libegl-mesa0 libegl1 libepoxy0 libevdev2 libevent-2.1-6 libgbm1 libgnutls-dane0 \
    libgtk-3-0 libgtk-3-bin libgtk-3-common libgudev-1.0-0 libinput-bin libinput10 \
    libjson-glib-1.0-0 libjson-glib-1.0-common libmtdev1 libpcre2-16-0 libproxy1v5 \
    libqt5core5a  libqt5dbus5 libqt5gui5 libqt5multimedia5 libqt5network5 libqt5svg5 \
    libqt5widgets5 libqt5xml5 librest-0.7-0 libsdl1.2debian libsoup-gnome2.4-1 libsoup2.4-1 \
    libunbound8 libwacom-bin libwacom-common libwacom2 libwadseeker2 libwayland-server0 \
    libxaw7 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
  2. Create a new folder for the program and go there. You can choose a different directory at your convenience.

    mkdir /srv/zandronum
    cd /srv/zandronum
  3. Download and uncompress the Linux binaries. The URL to the most recent version can be obtained from Zandronum's download page.

    tar -xvjf zandronum3.0-linux-x86_64.tar.bz2
    rm zandronum3.0-linux-x86_64.tar.bz2
  4. Create a symbolic link to the file, which is no longer available as a package in Debian.

    ln -s /srv/zandronum/ /usr/lib/
  5. Change the binary file permissions to make it executable.

    chmod o+x zandronum-server
  6. If you are using a firewall such as ufw (you should!), decide which port you will be using for Zandronum and create a rule to allow it. The default one is 10666.

    sudo ufw allow 10666
  7. Create another directory to store the game content files called wads.

    mkdir /home/doom
    cd /home/doom
  8. I have prepared a tar file with the required game file doom2.wad plus a few mods and custom maps that we will now unpack, only for learning purposes of course. If you enjoy the game, consider purchasing one copy.

    tar -xvf doom.tar
    rm doom.tar
  9. The last step also unpacked a config file that contains our server settings, configured for a survival cooperative game limited to five lives. Make sure to edit the file to change these values.

    nano /home/doom/doom.cfg
    # This is the server name that will be displayed in Doomseeker
    sv_hostname "My cool server | Survival 5 lives" 
    # If you later expose your wad folder to the web, add its URL here
    # for other players to download the wad files directly from you
    sv_website ""
    # This text will be displayed on screen every time a player joins
    # It is formatted using colors:
    sv_motd "\\ccWelcome to \\crmy server!\n\n\\crsurvival \\cc- \\cd5 lives"
    # Remote connection password
    sv_rconpassword "myrconpassword"

    Hit Ctrl + X, Y, ENTER to save and close.

  10. Open a virtual terminal session so that we can later leave our server running.

    screen

  11. Go to the program folder and launch zandronum-server on the port we previously selected linking the config and wad files.

    cd /srv/zandronum
    ./zandronum-server -host -port 10666 -iwad /home/doom/doom2.wad -file /home/doom/ \
    -file /home/doom/brutalv21.pk3 -file /home/doom/HXRTCHUD_BD21t_v7.7e.pk3 +exec /home/doom/doom.cfg
  12. Hit Ctrl + A and D to close the virtual terminal and leave our server running.

If we now open Doomseeker on our gaming rig, we should be able to search for our server name and start playing! I hope this article is helpful to somebody. If you are interested in playing together, contact me on Mastodon and let's organize a session!

This was post #10 in the #100DaysToOffload challenge. As always, thank you for reading and see you next time.

How to install Searx in Debian 10 using nginx

This is diary entry number 9 in the captain's log. I have regained consciousness after several days of cryogenic sleep, which were interrupted by the main computer after detecting some foreign code the algorithm was not able to compute, perhaps related to some out-of-date package version. It seems the error modified part of the stellar alignment parameters of the main cruise control and took the ship to an unexplored region of space which, as far as I can determine, did not even exist before. Such is the nature of time-space, as experience shows.

In today's operation we will learn how to install Searx, a free (as in freedom, you know) privacy-focused internet metasearch engine which aggregates results from more than 70 search services and has the following features:

  • Can be self-hosted
  • No user tracking or profiling
  • Cookies are not used by default
  • Secure, encrypted connections (HTTPS/SSL)
  • Can proxy web pages
  • Can be set as default search engine
  • Customizable (theme, search settings, privacy settings)
Our first step will be getting all the required libraries:

sudo apt-get install git build-essential libxslt-dev python-dev \
python-virtualenv python-babel zlib1g-dev libffi-dev libssl-dev

Now we will clone the main repository at /srv/searx (feel free to choose a different one):

cd /srv
git clone

We then move to the new location and create a virtual environment as a good practice to isolate package versions between projects:

cd searx
virtualenv env

We activate the virtual environment:

source env/bin/activate

And run the manage script to update Searx packages:

./ update_packages

We can get out of the virtual environment for now:

deactivate

Our next goal will be to configure nginx to correctly serve Searx on the web. We will first create a new file in sites-available and edit it:

nano /etc/nginx/sites-available/searx

We now add the following configuration text, changing the server_name parameter with our own domain. Make sure to also update the /static/ URL if you chose a different installation location earlier:

server {
    listen 80;
    server_name;

    location /static {
        alias /srv/searx/searx/static;
    }

    location / {
        proxy_pass http://localhost:8888;
        proxy_set_header Host $host;
        proxy_set_header Connection       $http_connection;
        proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme         $scheme;
        proxy_buffering                   off;
    }
}

To save and close the file, hit ctrl + x, y, enter.

We then create a symlink to sites-enabled:

sudo ln -s /etc/nginx/sites-available/searx /etc/nginx/sites-enabled

And restart nginx:

sudo systemctl restart nginx

Almost finished! We will now edit the Searx settings file:

nano /srv/searx/searx/settings.yml

And we'll change the following variables with our instance name, contact mail and an invented secret key:

instance_name : "MyCoolName"
secret_key : "r5Ekg75K865eyj8jhm757Lqq" # Change this!

Now we only need to run Certbot, which will provision an HTTPS certificate and update our nginx config files automatically (replace with your own domain):

sudo certbot --nginx -d

That is all for our installation! The last step is to run the web app. In order to leave it running, we will use the screen command to isolate a terminal session:

screen

We finally go to the Searx directory, activate the virtual environment and run the python script:

cd /srv/searx
source env/bin/activate
python3 searx/

To close the virtual terminal session, hit ctrl + a, d. There, now the session will be running in the background. If we ever want to get it back, for instance to check Searx logs in real time, we can run:

screen -r

If you lack time or resources to set up Searx by yourself, you can also use my personal instance eXplora or any other public one. eXplora keeps no logs and profiles nobody, but of course that is what everyone says these days. Time for me to get back to the cryochamber. In case you come from the future and manage to find this text recording floating in the metaverse after an alien race has wiped out human consciousness, thank you for reading and see you after the next beat.

How to mount a Wasabi S3 bucket in Debian 10

Welcome to this remote place of the cosmos, stranger. Today we will learn how to mount a Wasabi bucket as a virtual drive in Debian 10, allowing us to dramatically increase storage space in our VPS or home server for a small amount of money.

Wasabi is a root vegetable, green in color, from the same family as broccoli, cabbage, cauliflower, and mustard. It is also a cloud storage provider that competes with solutions like Amazon S3 by offering full compatibility, increased speed and no transfer costs. With a price of $5.99 per terabyte per month and free transfers, it surely is a much cheaper option than Amazon, plus you don't feed the beast, which is good.
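As a back-of-the-envelope check on that pricing (a toy calculation; real Wasabi invoices include details such as minimum storage charges, which are ignored here):

```python
# Flat Wasabi rate with free egress; a toy monthly-cost estimate
PRICE_PER_TB_MONTH = 5.99  # USD per terabyte per month

def monthly_cost(terabytes):
    # Storage is the only billed dimension in this simplified model
    return round(terabytes * PRICE_PER_TB_MONTH, 2)

print(monthly_cost(1))  # one terabyte of backups
print(monthly_cost(4))  # a media collection
```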

Before continuing, it is important to consider that S3 is not a true file system. It is, for instance, eventually consistent, meaning that if several servers are working on the same bucket, one can be served new content while the other is served old. Being cloud storage accessed over the network, it also has higher latency than a physical file system. Finally, Wasabi applies a 30/90-day file retention policy, charging you for the remaining retention period when you change or delete an object early. This makes it a no-go if you want to store large amounts of files that will be modified frequently.

Having said that, if you are looking for cheap cloud storage to keep static files such as media content, it is a reasonable solution! I personally use it to store static files from my self-hosted web services and also to backup my music and ebook collection. Ready to expose your precious data to the Cloud? Let's see how we can do it.

First step is to create a new bucket and access key in Wasabi's control panel. Remember to write down your secret key! As far as I know, it cannot be retrieved later.

Once our bucket is ready, we can proceed to install s3fs, the tool we will be using to mount the bucket. Connect to your terminal and install it through apt:

sudo apt install s3fs awscli -y

In order to check if the installation was successful, run:

which s3fs

If it returned /usr/bin/s3fs then it's all good. Our next step is to create a file containing our Wasabi bucket credentials, in the format ACCESS_KEY_ID:SECRET_ACCESS_KEY. We will save them at /srv/.passwd-s3fs (you can store them somewhere else if you want!):

echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > /srv/.passwd-s3fs

Now we need to change the file permissions:

chmod 600 /srv/.passwd-s3fs

Almost there! Next step is to create the directory we want to use to mount our bucket. I like placing it at /home, but of course you can set up a different one:

mkdir /home/wasabi

Those are all the preparations we need. Now we will just run s3fs to mount our bucket and that's it! First parameter is our bucket name (I named mine "home-wasabi"). Second, the directory we created at home. Third, the path to our password file. Last parameter is Wasabi's service URL, which will depend on the region you created the bucket in. I live in Spain, so EU Central for me:

s3fs home-wasabi /home/wasabi -o passwd_file=/srv/.passwd-s3fs -o url=

And voila! Our bucket should be mounted now. You can test it by dropping a file to the bucket from Wasabi's control panel, which should be also accessible from the terminal. There is just one final detail! The moment our machine gets rebooted, the mount will be gone, sad face. There are multiple ways to set it up so that the bucket gets mounted automatically after every reboot. I'm a big fan of CRON for its simplicity, which provides a @reboot directive precisely for that! In order to add a CRON job, run:

crontab -e

And add the s3fs command we ran previously:

@reboot s3fs home-wasabi /home/wasabi -o passwd_file=/srv/.passwd-s3fs -o url=

Hit Ctrl+x to close, then Y to save, and there you go! Now the bucket will be automatically mounted every time our machine reboots. That is all! I hope this entry was useful to you, feel free to ask any questions you may have! I am by no means an expert, but I'll try to help as much as I can.

This has been post #8 in the #100DaysToOffload challenge. As always, thank you for reading and see you next time.

LibraBot, a very simple bot for Matrix

Dear Brothers and Sisters of the Eternal FOSS, today we will take a look at how to write a simple bot for Matrix. But what is Matrix? Are we talking about the modern term for maya, the Hindu word for "that which is not", i.e. the illusion, the entanglement suffered by the atman or soul when identified with the physical body and its consequent materialistic ramifications? We are not, since we are actually referring to Matrix, an open-source network for secure, decentralized communication.

Our current project, called LibraBot, is built on top of python-matrix-bot-api, a handy wrapper for the matrix-python-sdk, and includes a few REST functions imported from our previous Mastodon bots plus a new method to retrieve gifs from Tenor. These functions are then linked to command handlers, so that whenever a user in the room invokes a command such as !command, our bot will run the specified callback function. Mind that our bot needs to be invited to the room beforehand! There is just one detail: the room.send_image() function does not accept an image but expects an MXC URL, which is returned after uploading an image to the Matrix server; therefore we need to upload the image before sending it to the room. The code is as follows:

import random, requests, json, os, magic, pathlib
from matrix_bot_api.matrix_bot_api import MatrixBotAPI
from matrix_bot_api.mregex_handler import MRegexHandler
from matrix_bot_api.mcommand_handler import MCommandHandler
from matrix_client.client import MatrixClient

# Load credentials from JSON file
cred = json.load(open("credentials.json"))
client = MatrixClient(cred["server"])
token = client.login(username=cred["username"], password=cred["password"])

# Save PNG from URL, upload it to the server and send the image MXC URL to the room
def send_image_from_url(room, url):
    image = requests.get(url).content
    with open("temp.png", "wb") as png:
    mime_type = magic.from_file("temp.png", mime=True)
    mxc = client.upload(image, mime_type)
    room.send_image(mxc, "temp.png")

# Send a random cat picture
def cat_callback(room, event):
    room.send_text("Serving a cat, please wait...")
    send_image_from_url(room, json.loads(requests.get('').content)["file"])

# Send a random dog picture
def dog_callback(room, event):
    room.send_text("Serving a dog, please wait...")
    send_image_from_url(room, json.loads(requests.get('').content)["url"])

# Send a random fox picture
def fox_callback(room, event):
    room.send_text("Serving a fox, please wait...")
    send_image_from_url(room, json.loads(requests.get('').content)["image"])

# Send a random gif
def gif_callback(room, event):
    room.send_text("Serving a gif, please wait...")
    args = event['content']['body']
    arg = args[5:]
    if(arg == ""):
        room.send_text("You need to provide an argument! Example: !gif happy")
        return
    send_image_from_url(room, json.loads(requests.get(""
    + cred["tenorAPI"] + "&limit=1&q=" + arg).content)["results"][0]["media"][0]["gif"]["url"])

def main():

    bot = MatrixBotAPI(cred["username"], cred["password"], cred["server"])

    # Set command handlers
    bot.add_handler(MCommandHandler("cat", cat_callback))
    bot.add_handler(MCommandHandler("dog", dog_callback))
    bot.add_handler(MCommandHandler("fox", fox_callback))
    bot.add_handler(MCommandHandler("gif", gif_callback))

    # Start listening for room events and keep the process alive
    bot.start_polling()
    while True:
        input()

if __name__ == "__main__":
    main()

This has been post #7 in the #100DaysToOffload challenge. As always, thank you for reading and see you next time.

Bot galore!

Tonight, right before your screen, transmitting from this abstract location of the wild lands of the Internet, we leave aside our previous psychological explorations and come back to what's really important, which is, as you know, to populate the fediverse with artificial intelligence nobody asked for, in order to create a rich environment for the emergence of synchronicities, among other interesting phenomena that sweet randomness entails.

Thanks to my increasing interest in this field I quickly realized there is a plethora of REST APIs available on the Internet for everyone to use. As we saw with Meowbot, the process is really simple: we just send a GET request to the URL the API provides, and in return we get a JSON payload that we can pick apart at will.

Let's meet our first contestant of the night. His name is Poeticus and his honor/duty is to share random fragments of poetry with the fediverse thanks to the API that Poemist provides. Now the only problem here was that Poemist would only return a single verse plus a URL to the Poemist poem page, with clear intent to generate traffic to its own website. This situation was certainly unsatisfying, since the poetic fedivexperience we wanted to achieve had to be self-contained and not require jumping to an external site. So this time we have been a little sneaky and deployed a tiny spider which scrapes the delicious content that we want, only for educational purposes of course. To do that, the Scrapy library includes a spider class that we will extend with the rest of our code. Let's take a look:

import scrapy, json, requests
from mastodon import Mastodon

class spider(scrapy.Spider):
    name = "Poeticus"
    title = ""
    poet = ""

    def __init__(self):
        # Get JSON
        poem = json.loads(requests.get("").content)[0]

        # Store title, author and the URL to Poemist page
        self.title = poem["title"]
        self.poet = poem["poet"]["name"]
        self.start_urls = [poem["url"]]

    def parse(self, response):

        # Mastodon token and domain
        mastodon = Mastodon(
            access_token = "asdf",
            api_base_url = ""  # your instance URL
        )

        # Create the string that will be sent to Mastodon and add title and author
        string = self.title.strip() + " by " + self.poet.strip() + "\n\n"

        # Get the poem which is contained in the <div class="poem-content"> tag at the Poemist page
        imported = response.css('.poem-content::text')
        text = imported.getall()

        # Iterate through all lines and add them to our string
        # till we get close to Mastodon's 500 character limit
        # We get rid of empty and weird lines and strip them to tidy the result
        for i in range(0, len(text)):
            if(len(string) < 450 and text[i] != "\n" and text[i] != "\n\n"
            and text[i] != "\n\n\n" and text[i] != "I\n"
            and text[i] != "II\n" and text[i] != "*\n"):
                string += text[i].strip() + "\n"

        # Send string to Mastodon
        mastodon.toot(string)
That is all. To run it we will execute in the terminal (assuming the spider was saved as

scrapy runspider

...and a quite expressive log will greet us, hopefully showing that our spider did fine!

Before we continue this exciting journey to nowhere, we will now experience a regression to an older and simpler time, back when I was still kept captive in a so-called "high school", receiving the mandatory mental programming as any person of my age would back then. Lessons on different subjects were held by unstable teachers in buildings that resembled prisons. At the center of it all there was the high-school bell, an omnipresent, all-pervasive entity that ruled our lives and determined our thought schedule.

During mid-morning, this almighty bell was pious enough to grant us half an hour of break from the brainwashing, time that I would use to absorb some of that sunlight otherwise denied during those classes about nothing. It was during one of those breaks that my friend approached me holding a deck of mysterious-looking cards. It was the moment Magic The Gathering entered my life. Was that a significant moment, you may be asking yourself? Well no, it was not. I could not care less about a card game that had to be purchased with money I did not have. There was only one aspect that captivated my mind, and that was the killer artwork on each card. This was the inspiration for our next contestant, MagicBot, a bot that shares Magic The Gathering artwork thanks to Scryfall's API. Since the source code is basically identical to Meowbot's, it is somewhat redundant to show it here, so a GitHub link is provided instead.

Leaving magic and cats aside, we reach the end of our session with my personal attempt at creating something original: meet Doggobot, a bot that shares dog pictures, courtesy of Now the universal equilibrium is cosmically restored once again. The code can also be found on GitHub.

This is post #6 in the #100DaysToOffload challenge. As always, thank you for reading and see you next time.

What is wetiko?

Wetiko is an Algonquin word for a cannibalistic spirit that is driven by greed, excess, and selfish consumption (in Ojibwa it is windigo, wintiko in Powhatan). It deludes its host into believing that cannibalizing the life-force of others (others in the broad sense, including animals and other forms of Gaian life) is a logical and morally upright way to live.

Wetiko short-circuits the individual's ability to see itself as an enmeshed and interdependent part of a balanced environment and raises the self-serving ego to supremacy. It is this false separation of self from nature that makes this cannibalism, rather than simple murder. It allows —indeed commands— the infected entity to consume far more than it needs in a blind, murderous daze of self-aggrandizement. Author Paul Levy, in an attempt to find language accessible for Western audiences, describes it as "malignant egophrenia"—the ego unchained from reason and limits, acting with the malevolent logic of the cancer cell.

A wetiko-free psyche has woken up to the existence of the wetiko pathogen. Attuned to wetiko's nonlocal and shape-shifting nature, both as it plays out in the world and within ourselves, we become aware of the very real tendency within ourselves toward self-deception, of how we all have the potential to fool ourselves via the creative power of our own mind. This realization of our potential susceptibility to self-deception, which could lead to unwittingly becoming instruments for the devil of wetiko to act itself out through us, serves as a psychic immunization, inculcating a true humility that safeguards against evil. Everyone, including ourselves, has the potential for falling into —and acting out— the unconscious. Because of our awareness of the possibility of pulling the wool over our own eyes, a relatively wetiko-free person cultivates on a daily basis the practice of mindfulness, which serves as a guardian of the gates of our psyche. In addition, to use religious terminology, because we are aware of our potential weakness and yetzer ha-ra (Hebrew for the "evil inclination" within us), we develop a relationship with and rely upon a "higher power" beyond our own limited ego, whether we call it God, the Self, our daemon, our true nature, or whichever of the thousands of names by which it is called. This is very different from when we are afflicted with wetiko, as we are then unconsciously identified with this higher power, which is the very stance that allows us to get away with murder.

For more information on wetiko, you can grab a copy of Dispelling Wetiko by Paul Levy, a highly recommended read in these dark times of noise and confusion. This is post #5 in the #100DaysToOffload challenge. As always, thanks for reading and see you next time.

MeowBot likes sharing cat pictures with the fediverse

Something I have realized during all these years of daily internet consumption is that there are not enough cat pictures online. Today's project will try to solve this problem. Meet MeowBot, a bot that posts cat pictures. Because that is what the internet needs the most. As usual, let's see how it was done.

This time our content will not be self-stored or auto-generated; instead we will use the simple yet wonderful API that provides. We will first request a picture URL and save the image as PNG. We will then upload it to Mastodon, as we have seen in previous posts. Here comes the code:

import requests, json, os
from mastodon import Mastodon

# Mastodon token and domain
mastodon = Mastodon(
    access_token = 'abcdefg',
    api_base_url = ''  # your instance URL
)

# Get the image URL
URL = json.loads(requests.get('').content)["file"]

# Save image from URL
img = requests.get(URL).content
with open("cat.png", "wb") as png:

# Upload PNG file to Mastodon
media = mastodon.media_post("cat.png")
mastodon.status_post("#cats #catsofmastodon #mastocats", media_ids=media)

# Delete the image, since it is no longer needed
os.remove("cat.png")

This is post #4 in the #100DaysToOffload challenge. Almost there! As always, thank you for reading and see you next time.

GardenBot likes generating random voxel gardens in space

Greetings denizens of the fediverse! Today I am happy to release my latest little project, inspired by the concept of a Japanese zen garden. Do you miss Nature but the NWO does not allow you to leave home anymore? Do you wish to reconnect with the vibrations of the forest but all you got is 5G radiation? Well too bad! In any case, meet GardenBot, your digital gardener! Its mission is to flood the fediverse with procedurally generated voxel gardens floating in space. As usual, let's see how it was made.

I created the 3D models using MagicaVoxel, a free voxel editor. For the sake of simplicity, I went for 8x8x8 voxels per asset. The models were then imported into Blender, which is also free, and rendered as isometric sprites following this fantastic tutorial. Once all the assets were ready, I just needed a Python library that would render several of them into a single image. My first thought was Pygame, a popular general-purpose videogame library. Considering that the bot would be running on a VPS, my question as a noob was: would Pygame work without a display device? And the answer is yes! It turns out this mode is called headless, and is based on creating a dummy display. Let's take a look at the code:

import pygame, sys, os, random, pathlib
from pygame.locals import *
from mastodon import Mastodon

# Mastodon token and domain
mastodon = Mastodon(
    access_token = 'abcdefg',
    api_base_url = ''
)

# Set a dummy display to run headless mode
os.environ["SDL_VIDEODRIVER"] = "dummy" 

# Init pygame and set final image resolution
pygame.init()
screen = pygame.display.set_mode((1064, 600), DOUBLEBUF)

# Set sprite dimensions
tileWidth = 128
tileHeight = 128

# Init a 8x8 map
gardenMap = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0]
]

# Load images and save them in a list
background = pygame.image.load('tx_bg.png')
graphics = [
    # One image per sprite type; the filenames here are placeholders,
    # ten entries in total to match the weights used below
    pygame.image.load('tx_tile_01.png'),
    pygame.image.load('tx_tile_02.png'),
    pygame.image.load('tx_tile_03.png'),
    pygame.image.load('tx_tile_04.png'),
    pygame.image.load('tx_tile_05.png'),
    pygame.image.load('tx_tile_06.png'),
    pygame.image.load('tx_tile_07.png'),
    pygame.image.load('tx_tile_08.png'),
    pygame.image.load('tx_tile_09.png'),
    pygame.image.load('tx_tile_10.png')
]

# Print background
screen.blit(background, (0,0))

# Iterate each map tile
for row_nb, row in enumerate(gardenMap):
    for col_nb, tile in enumerate(row):

        # Select a random sprite using weight values
        tileImage = random.choices(graphics, weights=(18, 12, 5, 10, 10, 10, 1, 3, 3, 0.5), k=1)

        # Maths for isometric positioning
        # Thanks to
        cart_x = row_nb * (tileWidth / 2)
        cart_y = col_nb * (tileHeight / 2) 
        iso_x = cart_x - cart_y
        iso_y = (cart_x + cart_y) / 2
        centered_x = screen.get_rect().centerx + iso_x
        centered_y = screen.get_rect().centery / 2 + iso_y

        # Print the tile sprite at its position
        screen.blit(tileImage[0], (centered_x - (tileWidth / 2), centered_y - (tileHeight + 8)))

# Save the image into a PNG file, "garden.png")

# Upload PNG file to Mastodon
media = mastodon.media_post("garden.png")
mastodon.status_post("", media_ids=media)

# Delete the image, since it is no longer needed
os.remove("garden.png")
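By the way, the weighted sprite pick above relies on Python's random.choices, where each weight sets a sprite's relative frequency. A tiny self-contained illustration (the tile names are made up):

```python
import random

# Fixed seed so the example is reproducible
random.seed(42)

# "grass" is nine times more likely to be picked than "rock";
# k=1 returns a one-element list, hence the [0] indexing above
tile = random.choices(["grass", "rock"], weights=(9, 1), k=1)
print(tile)  # → ['grass']
```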

And that was it! Now no matter how authoritarian and abominable this bioterrorist world government can get, you will always be able to enjoy some low-resolution digital nature. That is, if your future social score grants you internet access! This is post #3 in the #100DaysToOffload challenge. As always, thanks for reading and see you next time.

Adding RSS to Django

Hello my beloved fediverse! Today I spent most of my day fighting addiction, which has not been especially useful for my productivity. Nevertheless I managed to focus enough energy to bring you post #2 in the #100DaysToOffload challenge, in which we will learn how to implement RSS in Django. Don't know what RSS is? Check out this article by a fellow Fosstodoner who shared a comprehensive and insightful description of it.

As you will soon realize, adding RSS to Django is a piece of cake, since a syndication framework supporting both RSS and Atom is already included, which will make our lives easier (yes please). In fact, we will only need to specify a couple of classes and link the URL. Let's go!
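To get a feel for what we are about to generate, the syndication framework ultimately serves a plain RSS 2.0 XML document. Here is a minimal sketch of that structure, parsed with the standard library (all channel and item values below are invented for illustration, not taken from this blog):

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document shaped like the syndication framework's output
rss = """<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
  <channel>
    <title>Our RSS title</title>
    <link>/feed/</link>
    <description>Our RSS description</description>
    <item>
      <title>Hello world</title>
      <link>/post/hello-world/</link>
      <description>First post</description>
    </item>
  </channel>
</rss>"""

# A feed reader essentially does this: parse the XML and list the items
root = ET.fromstring(rss)
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # → ['Hello world']
```

Each post we expose through the feed becomes one of those item elements.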

We will first create a file called inside our project folder and define two classes:

from django.contrib.syndication.views import Feed 
from django.template.defaultfilters import truncatewords 
from django.urls import reverse 
from django.utils.feedgenerator import Atom1Feed
from .models import Post # This is your post model, mind it may have a different name!

class blogFeed(Feed): 
    title = "Our RSS title"
    link = "/feed/"
    description = "Our RSS description"

    # We import all our 'Post' objects filtering by a status property
    def items(self): 
        return Post.objects.filter(status = 1)

    def item_title(self, item): 
        return item.title 

    def item_description(self, item): 
        return item.content

    def item_link(self, item): 
        return reverse('post_view', args = [item.slug])

class atomFeed(blogFeed):
    feed_type = Atom1Feed
    subtitle = blogFeed.description

...And that's pretty much it! Now we just need to link a new URL to the blogFeed class in our file inside our project folder:

from . import views
from django.conf.urls import url, include
from django.urls import path
from .feeds import blogFeed 

urlpatterns = [
    path("feed", blogFeed(), name="feed"),
]

Now we can visit our new URL by adding /feed to the end of our domain and... it works! The only problem is... my previous syntax highlighting implementation does not play well with the RSS output. I need to find a way to fix it. But it will have to be tomorrow. As always, thank you for reading and see you next time.

RandomBot likes sharing random facts with the fediverse

Last weekend I submerged myself into the fediverse for the first time and so far it has been a real blast. After browsing this list of public Mastodon instances I decided to go with Fosstodon since FOSS is love. As soon as I joined, I was welcomed by a group of kind users who from the very beginning made me feel at home. Then I learned about the two available timelines, local and federated. Local is composed of people from your instance and feels, as Someone said, like coming to town for the first time in Animal Crossing or Stardew Valley: people sharing their projects, personal websites —I've collected so many of them already!—, it truly feels like the old days of the internet. Federated, on the other hand, is more like the wild west if you somehow replace guns with gender inclusion and some sporadic softcore hentai, overall a great way to connect with denizens from other towns.

Another aspect that I quickly found myself in love with is the myriad of interesting bots that dwell in the federated timeline. Meet RCT Guest, a smiling young boy who continuously shares honest thoughts about the park he is eternally trapped in. Sentient Dwarf Fortress also shares his thoughts, though this time from the perspective of a doomed dwarf. Then there is cubeglobe, which procedurally generates voxel maps just because; The Tiny Gallery, which posts, well, tiny galleries; and Noisemaker Bot, perhaps the most impressive one and a proper noise artist. These are just a few!

A bot was therefore an exciting concept for my next project, and so today, my anonymous reader, I am happy to introduce you to RandomBot, a digital fediverse entity whose main and only purpose is to share facts that probably nobody cares about. As you can see, a noble cause indeed, since if there is something the internet needs, it is more useless information. I would love to say that the facts posted by RandomBot are generated by some sort of futuristic deep mind algorithm that somehow taps into our collective human consciousness and synthesizes knowledge into tangible English sentences, but as a matter of fact they were simply extracted from this Python library, whose author in turn scraped them from this website. I was initially going to include that library, however I later decided to import the text file myself, which would grant me complete freedom to edit facts at will. So if at some point my life reaches such levels of ultimate boredom, I can spend the time I have left adding new useless facts to the bot. How neat is that!

For anyone interested, here is the code, which is almost as small as my bank balance:

from mastodon import Mastodon
import random, os, pathlib

# Establish Mastodon's token and instance domain
mastodon = Mastodon(
    access_token = 'abcdefg',
    api_base_url = ''
)

# Load lines from txt in local directory
with open(os.path.join(pathlib.Path(__file__).parent.absolute(), 'facts.txt')) as f:
    facts =

# Select a random line and send it to Mastodon
fact = random.choice(facts)
mastodon.status_post(fact)

And so the bot was born, and a cron job was imposed on him as a constant reminder of his meaningless yet enforced purpose in existence. A fine bot it is. But what about the not-so-good ones? At some point I realized the comment section of this website was exposed to potential automated spam scripts, and some sort of higher consciousness verification would be required. Of course I was not going to use recaptcha because it sucks and also fuck Google, instead I used this cool little library. As a final step before closing this episode in Who The Hell Cares?, syntax highlighting was also added to the website thanks to this rather old yet fully functional tutorial.
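About that cron job: scheduling a bot like this boils down to a single crontab line. The timing and paths below are hypothetical, just to show the shape of it:

```cron
# Toot once a day at 09:00 (schedule and paths are made up for illustration)
0 9 * * * /usr/bin/python3 /home/user/bots/
```

The bot script runs once and exits; cron handles the repetition, so no loop or scheduler is needed inside the Python code itself.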

So what's next? Well since I have now become a complete expert in artificial intelligence, I might as well create more bots applying perhaps some sort of procedural generation, an area that I love and have some previous experience with. It is yet to be seen.

This is post #1 of the #100DaysToOffload challenge (more info here). As always, thanks for reading and see you next time.

Hello world

This website was created out of a personal challenge to learn the basics of Django. Now that it is completed, I might as well keep using it! I have some additional sections in mind, which will be added in future versions. I am also thinking of using it as a diary for all my development adventures, hoping to share some of the stuff that I learn on the way.

You can read more about me at the about section. I guess that is all for now, thank you for reading!