
Self-hosted GPS logging with PhoneTrack, Owntracks Recorder & Owntracks Frontend


I've always liked tracking my location and seeing the results on a map. So much so that, back in the day, I even got a dedicated GPS logger (a Transystems iBlue 747 Pro S). A device I had to carry around, turn on and then wait up to a minute for a first fix, let it do all the logging, plug into a computer, import the results, look at and edit them using myTracks and then export them for analysis in QGIS. That was way too much time and such a big hassle that I abandoned the project.

Recently however, I got into location logging again when I read about dual GPS phones and their improved accuracy which seemed to mean less worry and less manual editing. Having a phone do the tracking would also simplify the direct upload to some kind of database and just generally be more convenient.

The Setup

Due to the likely increase in battery consumption from having GPS on all the time, I did not want to use my main phone for this. So, the first thing I got for the project was a rather inexpensive Xiaomi Mi 11 5G Lite with a broken mic, which I got used for about 100€. I paired it with a free SIM card that gives me 200MB per month (the provider sends marketing SMS, but I signed up with Apple's Hide My Email and don't mind, as I won't be using the phone for communication), and I was set up well enough for increased-accuracy location logging. The app I decided on was PhoneTrack, which offers extensive logging setups, can manage multiple tracking jobs ("devices") in parallel, can post HTTP messages and is available on F-Droid, meaning it is fully open source software.

As for the system to do the saving and visualizing, I opted for OwnTracks, as it is documented quite extensively, has a recording API and with OwnTracks Frontend offers a visually pleasing frontend. With all the hardware and software items above, this was the setup I decided upon:

graph LR
    A[Phone] --"has installed"--> B[PhoneTrack]
    B --"sends data to"--> C[OwnTracks Recorder]
    C --"gets displayed by"--> D[OwnTracks Frontend]

Building It

As setting up my client just meant downloading PhoneTrack to the Xiaomi, the next thing I needed was a web server. For about 4€ per month I could get the least-powered option from Hetzner Cloud and pair it with a domain I bought some time ago (you can use any domain here as long as you are able to change the DNS settings). The 4€ version also meant that I only had an IPv6 address, which would get interesting at some points along the way. The following are all the steps I took, retook and then abandoned, until I finally got the system running. Bits of it are taken from the only extensive article I could find. Mind you, I am by no means an experienced Linux SysAdmin, so I did the following points to the best of my knowledge. If you'd like to skip that part and already have your server running and secured, you can jump straight to Setting up OwnTracks.

Setting up the Server

Point your domain to the IPv6 address of your server. This means logging in to your DNS provider where you bought your domain and creating an AAAA record for an IPv6 address, or an A record for an IPv4 address, entering the subdomain (either www or something more explanatory like owntracks so you get owntracks.example.com), and adjusting the Time To Live (TTL) to the lowest amount (1h in my case).
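In zone-file notation, the resulting records might look like this (hostname, TTL and addresses are placeholders; most providers offer a web form for this instead):

```
owntracks.example.com.    3600    IN    AAAA    2001:db8::1
owntracks.example.com.    3600    IN    A       203.0.113.10
```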

The next thing you want to do is generate and save an SSH key to safely log in to your server. I followed this guide, but basically you want to generate the key first

ssh-keygen -t ed25519

and press enter to save it at the default location. When asked, be sure to set a passphrase to make SSH login even more secure. Depending on your cloud provider, the way of getting the key into your cloud dashboard may differ from the one in the link above. Be sure to check whether your provider also offers a dedicated tutorial.

To log in, enter root (or the user your provider created for your SSH key) followed by @ and your server's IP address, e.g. ssh root@your:ipv6:is:here::1. This should then prompt you for the passphrase you set above.

Once logged in, you should update and upgrade your system

sudo apt-get update
sudo apt-get upgrade

Then, create a new user and give it sudo rights, so you are not logged in as the root user all the time. In fact, you won't ever have to be again.

adduser new_user
usermod -aG sudo new_user

Now, copy the key from your local device (i.e. you should have a terminal that is not SSH'd into the server) to the server. This will prompt you for the sudo user's (new_user) password.

ssh-copy-id -i ~/.ssh/id_ed25519.pub new_user@your:ipv6:is:here::1

You can now type exit and start a new SSH session as the new user (still on the default port 22 for now):

ssh new_user@your:ipv6:is:here::1

Next, adapt your SSH configuration. You can change the port you use to SSH into the server to a non-standard port. Any unused port will do; check which ports are already taken with something like sudo lsof -i -P -n | grep LISTEN. Open the configuration file and make the changes. These here are just examples:

sudo nano /etc/ssh/sshd_config

# find these in sshd_config and adapt
Port    your_custom_port
PermitRootLogin    no
MaxAuthTries    2
AllowAgentForwarding    no
AllowTcpForwarding    no
X11Forwarding    no
AllowUsers    new_user

Validate the configuration and restart the SSH daemon to apply the changes.

sudo sshd -t
sudo systemctl restart sshd

If you have any kind of firewall running, now would be the time to change it to allow the non-standard port for SSH. For me this meant opening the non-standard port in the Hetzner Cloud console. After that, install fail2ban with the standard configuration. The only thing we'll be changing is the port because we are using a non-standard port.

sudo apt install fail2ban
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Open the configuration file and change the port to the one you picked above and restart fail2ban.

sudo nano /etc/fail2ban/jail.local

port = your_custom_port

sudo systemctl restart fail2ban

At this point, the server should be secure enough to be put on the internet. Of course, you could implement features such as port knocking or do a full system audit, but for our use case we should be good to go. The TTL of your DNS record should also have elapsed by now, so it's a nice moment to set up a webserver. I will be using nginx (I tried Caddy first but couldn't find enough information, whereas nginx is well documented).

In my case nginx was pre-installed, so I just had to check whether there was anything listening on ports 80 (used for HTTP traffic) and 443 (used for HTTPS traffic).

sudo lsof -i :80
sudo lsof -i :443

# if there is a service listening on one of the ports
sudo systemctl stop SERVICE

To get our website certificate for HTTPS, we are using certbot.

sudo apt install certbot python3-certbot-nginx

For Certbot to work you need both ports 80 and 443 open, so check that your firewall allows them. Then you can create the certificate for your domain, using whatever (sub)domain you chose in the beginning when assigning the A/AAAA record.

sudo certbot --nginx -d owntracks.example.com

Certbot will then proceed to ask you for some details and an email address. When the certificate is created, Certbot will tell you the location of the certificate on your server. Note this down somewhere, as you are going to need it later.

Let’s Encrypt certificates expire after 90 days. You should set up automatic renewal using cron (sudo crontab -e) that checks every day at noon whether the certificate will expire within the next 30 days.

0 12 * * * /usr/bin/certbot renew --quiet

Setting up OwnTracks

The good thing about OwnTracks Recorder and the Frontend is that both are available as Docker images. So, now's a good time to install Docker.

sudo apt install docker.io

# add your user to the Docker group
sudo usermod -aG docker new_user

# create a Docker daemon file
sudo nano /etc/docker/daemon.json

# add the IPv6 registry if you have an IPv6 only server
{"registry-mirrors": ["https://registry.ipv6.docker.com"]}

# reload Docker to apply the change
sudo systemctl reload docker

To have multiple containers work together nicely, you should install Docker Compose. Add Docker's signing key and apt repository, then install the Compose plugin:

sudo install -m 0755 -d /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/demodocker.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/demodocker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list

sudo apt update
sudo apt install docker-compose-plugin

Then, create a docker-compose file in a folder of your choosing. I went with a docker folder in my home directory and called the file docker-compose.yml (mkdir -p ~/docker/owntracks, then cd into it and nano docker-compose.yml). This will start all of your containers with the right settings applied.

version: '3.3'

services:
  owntracks-recorder:
    image: owntracks/recorder        # https://github.com/owntracks/docker-recorder/#docker-compose-files
    environment:
      - OTR_PORT=0
    ports:
      - "8083:8083"
    volumes:
      - config:/config
      - store:/store
    restart: unless-stopped

  owntracks-frontend:
    image: owntracks/frontend        # https://github.com/owntracks/frontend#docker
    ports:
      - "6083:80"                    # the documentation has port 80 here, but I had something else listening there already
    volumes:
      - ./frontend-config.js:/usr/share/nginx/html/config/config.js
    environment:
      - SERVER_HOST=owntracks-recorder
      - SERVER_PORT=8083
    restart: unless-stopped

volumes:
  config:
  store:


As you can see, there is one file we need to create: the OwnTracks Frontend configuration. As the compose file lives in ~/docker/owntracks, I've just put the frontend config there as well: nano ~/docker/owntracks/frontend-config.js. The maxPointDistance setting ensures that points more than 1000 meters apart will not get connected, which is useful in areas with poor GPS reception, such as underground. Adjust this to your liking.

window.owntracks = window.owntracks || {};
window.owntracks.config = {
    api: {
        baseUrl: "https://owntracks.example.com/owntracks/",
    },
    map: {
        maxPointDistance: 1000,
    },
};

If you now run docker compose up -d, both containers should start and be available via docker ps. As it is all running locally, we now need to expose the containers to the internet so we can point our phone to the server and record data.

Before activating nginx, you should set up basic auth. I needed to install apache2-utils via sudo apt install apache2-utils, following this guide. Then, create a new password file for a user. The command will prompt you for a password, which you will then need to enter every time you visit your website.

sudo htpasswd -c /path/to/owntracks.htpasswd user1

To set up the nginx configuration, create a .conf  file in the sites-available directory.

sudo nano /etc/nginx/sites-available/owntracks.example.com.conf

Put this into the configuration file (you can copy the # managed by Certbot bits from the default configuration where Certbot put them when you ran sudo certbot --nginx above; the auth_basic_user_file needs to match the path of your htpasswd file, and remember to enter your actual (sub)domain for the domain parts in this config):

server {
    if ($host = owntracks.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;
    server_name owntracks.example.com;
    return 404; # managed by Certbot
}

server {
    listen [::]:443 ssl ipv6only=on http2;      # managed by Certbot
    listen 443 ssl http2;                       # managed by Certbot

    server_name owntracks.example.com;
    server_tokens off;

    client_max_body_size 40m;

    # SSL config
    ssl_certificate /path/to/fullchain.pem;     # managed by Certbot
    ssl_certificate_key /path/to/privkey.pem;   # managed by Certbot
    include /path/to/options-ssl-nginx.conf;    # managed by Certbot
    ssl_dhparam /path/to/ssl-dhparams.pem;      # managed by Certbot

    auth_basic              "custom-name";
    auth_basic_user_file    /path/to/owntracks.htpasswd;

    # OwnTracks Frontend (host port 6083, see docker-compose.yml)
    location / {
        proxy_pass              http://127.0.0.1:6083;
        proxy_http_version      1.1;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Real-IP $remote_addr;
        auth_request_set        $auth_cookie $upstream_http_set_cookie;
        add_header              Set-Cookie $auth_cookie;
    }

    # OwnTracks Recorder (host port 8083)

    # Proxy and upgrade WebSocket connection
    location /owntracks/ws {
        rewrite ^/owntracks/(.*)    /$1 break;
        proxy_pass          http://127.0.0.1:8083;
        proxy_http_version  1.1;
        proxy_set_header    Upgrade $http_upgrade;
        proxy_set_header    Connection "upgrade";
        proxy_set_header    Host $host;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /owntracks/ {
        proxy_pass          http://127.0.0.1:8083/;
        proxy_http_version  1.1;
        proxy_set_header    Host $host;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header    X-Real-IP $remote_addr;
    }

    # OwnTracks Recorder views
    location /owntracks/view/ {
        # auth_basic            off;
        proxy_buffering         off;            # Chrome
        proxy_pass              http://127.0.0.1:8083/view/;
        proxy_http_version      1.1;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Real-IP $remote_addr;
    }

    location /owntracks/static/ {
        proxy_pass              http://127.0.0.1:8083/static/;
        proxy_http_version      1.1;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Real-IP $remote_addr;
    }

    location /owntracks/utils/ {
        proxy_pass              http://127.0.0.1:8083/utils/;
        proxy_http_version      1.1;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Real-IP $remote_addr;
    }

    # HTTP Mode
    location /owntracks/pub {
        proxy_pass              http://127.0.0.1:8083/pub;
        proxy_http_version      1.1;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Real-IP $remote_addr;

        # Optionally force Recorder to use username from Basic
        # authentication user. Whether or not client sets
        # X-Limit-U and/or uses ?u= parameter, the user will
        # be set to $remote_user.
        # proxy_set_header        X-Limit-U $remote_user;
    }
}

The way the config is set up, every page will need the basic auth username and password. If you want to display certain pages without requesting authentication, put auth_basic off; under that location.
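For example, to make the Recorder's views (the /owntracks/view/ location above) publicly viewable without a password prompt, the override might look like this (a sketch; the rest of that location block stays as it is):

```nginx
location /owntracks/view/ {
    auth_basic off;    # skip basic auth for this path only
    # ...existing proxy settings for this location...
}
```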

Next, add a symlink from the config in sites-available to sites-enabled.

sudo ln -s /etc/nginx/sites-available/owntracks.example.com.conf /etc/nginx/sites-enabled/

Test the configuration with sudo nginx -t, then start nginx: sudo systemctl start nginx

When you now navigate to owntracks.example.com, you should see the frontend without any data available. Let's change that.

Recording Location

If you don't want to set up your phone right away, you can do a quick test using curl. Please note that the path to pub will vary depending on the location blocks you put into your nginx config.

curl -u your_username:your_password \
-H "Content-Type: application/json" \
-d '{"_type":"location","t":"u","batt":"75","lat":32.540187,"lon":23.354742,"tid":"mb","tst":1683109800}' \
https://owntracks.example.com/owntracks/pub

This should give you a basic entry with which you can verify that the setup works.
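To make the payload easier to read than the one-line curl body, here is a small Python sketch that builds the same JSON message; the helper make_location_payload is purely illustrative and not part of any OwnTracks API:

```python
import json

def make_location_payload(lat, lon, tst, tid="mb", batt="75"):
    """Build an OwnTracks 'location' message like the curl example above."""
    return json.dumps({
        "_type": "location",  # message type the Recorder expects
        "t": "u",             # trigger: user-initiated
        "batt": batt,         # battery level in percent
        "lat": lat,           # latitude in decimal degrees
        "lon": lon,           # longitude in decimal degrees
        "tid": tid,           # two-character tracker ID shown in the UI
        "tst": tst,           # POSIX timestamp of the fix
    })

payload = make_location_payload(32.540187, 23.354742, 1683109800)
print(payload)
```

You would POST this string with Content-Type: application/json to the pub endpoint, exactly as the curl command does.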

Now, to have your phone doing the logging, open PhoneTrack, click on the + symbol and add a new custom log job. Give it a descriptive title and add the target address like in the curl request, swapping the u(ser) and d(evice) for whatever you like. Select "Use POST method" and "send JSON payload" and enter your basic auth credentials. I selected the "walking" profile and only switched on "Keep GPS on between fixes" in addition to the default settings.
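Assuming the nginx config above, the target address you enter in PhoneTrack would look something like this, where the u and d query parameters set the OwnTracks user and device names (both placeholders, pick your own):

```
https://owntracks.example.com/owntracks/pub?u=your_user&d=your_phone
```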

After turning on the log job, it should send at least one initial location without error. After that, whenever the conditions of your job are met (i.e., accuracy, time, etc.), a POST request gets sent to your server. This is indicated by a POI symbol which is either green or red depending on its state (sent to server, not sent to server). With the Xiaomi I am perfectly fine not having mobile data on and logging my position without noticeable losses in accuracy. As soon as I am connected to Wi-Fi again, all points get uploaded to the server.

Remember to select "Now" in the date picker to see your recently uploaded points. Also, make sure that the correct device is selected, otherwise there won't be any tracks shown.

Importing GPX files

Like I mentioned in the beginning, I have lots of old .gpx tracks on my hard drive. Especially with OwnTracks Frontend's heatmap feature, I thought it would be nice to also have all of those tracks in view.

OwnTracks doesn't have an import function, but there is a Python script on the OwnTracks GitHub that converts a .gpx file to OwnTracks' custom .rec format. I adapted it in the following way to work with folders of multiple .gpx files:

import gpxpy
import json
import sys
import os
# from unidecode import unidecode  # only needed for the commented 'topic' line below

if len(sys.argv) < 2:
    print("Please pass a directory containing .gpx files to parse")
    sys.exit(1)

gpx_dir = sys.argv[1]
if not os.path.isdir(gpx_dir):
    print(gpx_dir, "is not a directory")
    sys.exit(1)

# Create a dictionary to store the points for each month
points_by_month = {}

# Iterate over each GPX file in the directory
for filename in os.listdir(gpx_dir):
    if not filename.endswith(".gpx"):
        continue

    print("Opening file ", filename, sep="")
    with open(os.path.join(gpx_dir, filename), "r", encoding="utf-8") as gpx_file:
        gpx = gpxpy.parse(gpx_file)

    track_counter = 1
    all_tracks = len(gpx.tracks)
    for track in gpx.tracks:
        print("Parsing track ", track_counter, " of ", all_tracks,
              " (", round((track_counter / all_tracks) * 100), "%)", sep="")
        track_counter += 1
        for segment in track.segments:
            # Create a set of every month (YYYY-MM) the points were recorded in
            months = set()
            for point in segment.points:
                months.add(point.time.strftime("%Y-%m"))

            # Add the points to the dictionary for each month
            for month in months:
                if month not in points_by_month:
                    points_by_month[month] = []
                for point in segment.points:
                    if point.time.strftime("%Y-%m") == month:
                        point_json = {'_type': 'location',
                                      't': 'u',
                                      'lat': point.latitude,
                                      'lon': point.longitude,
                                      'tst': int(point.time.timestamp()),
                                      '_http': True,
                                      'alt': point.elevation
                                      # 'topic': 'owntracks/hist/' + unidecode(track.description)
                                      }
                        points_by_month[month].append((point.time.strftime("%Y-%m-%dT%XZ"), point_json))

# Generate the contents for every month and save them to a .rec file (e.g. 2022-03.rec)
for month, points in points_by_month.items():
    output = ""
    for point in points:
        output += point[0] + "\t*" + (18 * " ") + "\t" + json.dumps(point[1]) + "\n"
    print("Saving file ", month, ".rec...", sep="")
    with open(month + ".rec", "a") as rec_file:
        rec_file.write(output)

The script takes the .gpx folder as an argument and creates .rec files in the same directory as the script.

python3 gpx-to-rec.py your_directory
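For reference, each line the script writes pairs an ISO timestamp with the JSON payload, separated by tabs. A minimal Python sketch of that line format (rec_line is a hypothetical helper mirroring what the script above does):

```python
import json

def rec_line(iso_time, payload):
    """Format one OwnTracks .rec line: ISO timestamp, tab, '*' plus
    padding, tab, then the JSON payload, as emitted by the script above."""
    return iso_time + "\t*" + (18 * " ") + "\t" + json.dumps(payload) + "\n"

line = rec_line("2015-06-01T12:00:00Z",
                {"_type": "location", "lat": 48.137, "lon": 11.575, "tst": 1433160000})
print(line, end="")
```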

To upload the .rec files to your server, first copy them to a folder on it. I created a folder owntracks/recs in the home directory. Copy all files over and ssh into your server.

scp -6 -P your_custom_port *.rec new_user@\[your:ipv6:is:here::1\]:owntracks/recs/

ssh -p your_custom_port new_user@your:ipv6:is:here::1

Then copy the files from owntracks/recs to the docker container's /tmp/ directory. Here, I also added a dedicated folder but you can also just copy the files to the parent directory /tmp/. Use docker ps to get the name of the recorder's docker container (not the frontend's).

sudo docker cp ./owntracks/recs/ owntracks_recorder_1:/tmp/owntracks/recs

Then access the container's shell, cd into the directory where your .rec files are and move them to the container's store so they can be accessed by the recorder and the frontend. As you can see, I have named my folders after the years of the tracks, but you can use any naming convention you like. Just be aware that the frontend will use the first level (hist) as the user and the second (2015) as the device.

sudo docker exec -it owntracks_recorder_1 sh

cd /tmp/owntracks/recs/2015

mv 2015* /store/rec/hist/2015/
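With that naming convention, a hypothetical store layout would look like this (folder names are examples):

```
/store/rec/
└── hist/            # first level: shown as the "user" in the frontend
    └── 2015/        # second level: shown as the "device"
        ├── 2015-01.rec
        └── 2015-02.rec
```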

Now navigate to owntracks.example.com and adjust the date range to include the range of the newly uploaded files, and you should see them appear on the map. Please note that it may take some time depending on your setup, connection speed, etc.

More Privacy: A Local Setup

If you don't want all the tracks you've ever recorded to be saved on a server that is accessible from the internet, you may consider letting your client do the recording but keeping the historic view on your local device only.

To do that, you can almost mirror the setup on your local machine using Docker. I installed Docker Desktop on my Mac and created a directory with two files: docker-compose.yml (exactly the same as the one we use for the server above) and frontend-config.js, which has "http://localhost:8083/" as the baseUrl.

In my case, I also needed to select "Enable default Docker socket" in the Advanced settings of Docker Desktop.

After that, start your containers using the UI or CLI and verify that they're running. In my case, I had to initialize the volume mount by sending a mock location using curl; only then were the uploaded historic files available. You can do this using

curl -X POST \
-H "Content-Type: application/json" \
-d '{"_type":"location","t":"u","batt":"75","lat":32.540187,"lon":23.354742,"tid":"abc","tst":1683109800}' \
http://localhost:8083/pub

After that, you can upload your converted files either directly via the UI (by navigating to Volumes > docker_store > recorder > Files and uploading them to the files directory) or by following the steps described above, copying to a named Docker volume.

When you navigate to http://localhost:8083, your imported tracks should all be on the map after adjusting the date range accordingly.