Hello Navi

Tech, Security & Personal Notes

qBittorrent-nox Service Setup

This note records a qbittorrent-nox setup for a VPS.

Example values:

  • service user: qbittorrent
  • config home: /var/lib/qbittorrent
  • download directory: /data/downloads
  • Web UI port: 18080
  • BT listen port: 45000

Caddy reverse proxy is covered in the separate Caddy note. This article only covers qBittorrent itself.

Install

On Debian/Ubuntu:

sudo apt install qbittorrent-nox

On Arch Linux:

sudo pacman -S qbittorrent-nox

Do Not Run It As Root

Running a network-facing P2P daemon as root is asking for pain. If qBittorrent ever has an RCE or a bad plugin/script interaction, root turns that bug into full machine compromise.

Create a dedicated user:

sudo useradd -r -m -d /var/lib/qbittorrent -s /usr/sbin/nologin qbittorrent
sudo mkdir -p /data/downloads
sudo chown -R qbittorrent:qbittorrent /data/downloads

systemd Unit

Create /etc/systemd/system/qbittorrent-nox.service:

[Unit]
Description=qBittorrent-nox service
Documentation=man:qbittorrent-nox(1)
Wants=network-online.target
After=network-online.target nss-lookup.target

[Service]
Type=simple
User=qbittorrent
Group=qbittorrent
ExecStart=/usr/bin/qbittorrent-nox --profile=/var/lib/qbittorrent --webui-port=18080
TimeoutStopSec=1800
Restart=on-failure

ProtectSystem=full
ProtectHome=true
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target

Enable it:

sudo systemctl daemon-reload
sudo systemctl enable --now qbittorrent-nox
sudo systemctl status qbittorrent-nox

Web UI Binding

Stop the service before editing the config, otherwise qBittorrent may overwrite your changes:

sudo systemctl stop qbittorrent-nox
sudo find /var/lib/qbittorrent -name "qBittorrent.conf"

Edit the config file, usually under:

/var/lib/qbittorrent/qBittorrent/config/qBittorrent.conf

Force the Web UI onto localhost. Host header validation is disabled because requests will arrive through the reverse proxy carrying the public hostname:

[Preferences]
WebUI\Address=127.0.0.1
WebUI\Port=18080
WebUI\HostHeaderValidation=false

Start it again:

sudo systemctl start qbittorrent-nox

On recent versions (4.6.1 and later), qBittorrent generates a temporary random Web UI password at startup and prints it to the log:

journalctl -u qbittorrent-nox -e | grep -i password

Change it immediately after logging in.
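
If you would rather script the change than click through the UI, the v2 Web API can set the password too. A minimal sketch, assuming the default admin username and the Referer header the API's CSRF check expects:

# log in with the temporary password from the log
curl -s -c /tmp/qb.cookie \
  --header 'Referer: http://127.0.0.1:18080' \
  --data-urlencode 'username=admin' \
  --data-urlencode 'password=TEMP_PASSWORD_FROM_LOG' \
  http://127.0.0.1:18080/api/v2/auth/login

# set a new Web UI password (sent in plaintext; qBittorrent hashes it server-side)
curl -s -b /tmp/qb.cookie \
  --header 'Referer: http://127.0.0.1:18080' \
  --data-urlencode 'json={"web_ui_password":"a-long-unique-password"}' \
  http://127.0.0.1:18080/api/v2/app/setPreferences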

Caddy Reverse Proxy Notes

This note records the Caddy setup I keep reusing for small VPS services.

Caddy is good for this use case because the config is short and ACME certificate management is automatic. Most app services can stay on 127.0.0.1, while Caddy is the only public HTTP/TLS entry.

Example domains and ports:

  • newapi.example.com:8445 -> 127.0.0.1:3000
  • ds2api.example.com:8446 -> 127.0.0.1:6011
  • pt.example.com:8443 -> 127.0.0.1:18080
  • bot.example.com:8444 -> 127.0.0.1:6185

Install On Debian

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl

curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' \
| sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg

curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' \
| sudo tee /etc/apt/sources.list.d/caddy-stable.list

sudo apt update
sudo apt install caddy

On Arch Linux:

sudo pacman -S caddy

Basic Caddyfile

Edit:

sudo vim /etc/caddy/Caddyfile

Example:

newapi.example.com:8445 {
    reverse_proxy 127.0.0.1:3000
}

ds2api.example.com:8446 {
    reverse_proxy 127.0.0.1:6011
}

pt.example.com:8443 {
    reverse_proxy 127.0.0.1:18080
}

bot.example.com:8444 {
    reverse_proxy 127.0.0.1:6185
}

Validate before reload:

sudo caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy

If reload is not enough:

sudo systemctl restart caddy
sudo journalctl -u caddy -e --no-pager

Ports And ACME

For normal automatic certificates, Caddy needs to complete ACME challenges. Keep these in mind:

  • 80 should be reachable for HTTP-01 challenges.
  • 443 should be reachable for normal HTTPS sites.
  • If another service owns 443, Caddy can still serve HTTPS on another port, but certificate issuance may need port 80 or the DNS challenge (a sketch follows below).
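
For that last case, the DNS challenge avoids ports 80/443 entirely. A sketch, assuming a custom Caddy build that includes the caddy-dns/cloudflare module (not in the stock binary) and a CF_API_TOKEN environment variable visible to the caddy service:

pt.example.com:8443 {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy 127.0.0.1:18080
}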

Example firewall rules:

tcp dport { 80, 443, 8443, 8444, 8445, 8446 } accept

Bind Apps To Localhost

The app should usually listen on localhost:

127.0.0.1:3000
127.0.0.1:6011
127.0.0.1:18080
127.0.0.1:6185

This prevents users from bypassing Caddy and hitting the raw app port directly.
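
A quick way to verify: list listening TCP sockets and make sure only Caddy's ports are bound to public addresses:

# anything on 0.0.0.0 or [::] here is reachable from outside (modulo the firewall)
sudo ss -tlnp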

Notes

  • Put one service block per tool. Do not mix service-specific config into a generic Caddy note.
  • Caddyfile changes should be validated before reload.
  • If the app rejects proxied requests because of host header validation, fix that in the app config deliberately instead of exposing the app port.
  • Do not publish real internal domains or private ports if they reveal your infrastructure layout.

CLI Proxy API Setup

This note records a CLI Proxy API setup.

Example values:

  • config directory: /root/.cli-proxy-api
  • install directory: /root/cliproxyapi
  • public domain: cpa.example.com
  • HTTPS port: 8317
  • management URL: https://cpa.example.com:8317/management.html
  • API endpoint: https://cpa.example.com:8317/v1
  • API key: sk-example-cpa-key-please-change
  • remote management secret: example_remote_management_secret_change_me

All secrets above are fake.

Sync Config

If you prepare config locally and sync it to the server:

rsync -avzP --exclude='logs/*' ~/.cli-proxy-api/ root@198.51.100.20:~/.cli-proxy-api/

Lock down the config directory:

chmod 700 ~/.cli-proxy-api
chmod 600 ~/.cli-proxy-api/config/config.yaml
chmod 600 ~/.cli-proxy-api/*.json

Install

curl -fsSL https://raw.githubusercontent.com/brokechubb/cliproxyapi-installer/refs/heads/master/cliproxyapi-installer | bash

Then enter the install directory:

cd /root/cliproxyapi

Login Providers

Run only the login flows you need:

./cli-proxy-api --login          # Gemini
./cli-proxy-api --codex-login    # OpenAI
./cli-proxy-api --claude-login   # Claude
./cli-proxy-api --qwen-login     # Qwen
./cli-proxy-api --iflow-login    # iFlow

These login artifacts are credentials. Keep the auth directory private.

config.yaml

Example config:

host: "0.0.0.0"
port: 8317

tls:
  enable: true
  cert: "/root/.cli-proxy-api/config/certs/fullchain.pem"
  key: "/root/.cli-proxy-api/config/certs/privkey.pem"

remote-management:
  allow-remote: true
  secret-key: "example_remote_management_secret_change_me"
  disable-control-panel: false
  disable-auto-update-panel: true
  panel-github-repository: "https://github.com/router-for-me/Cli-Proxy-API-Management-Center"

auth-dir: "/root/.cli-proxy-api"

api-keys:
  - "sk-example-cpa-key-please-change"

debug: false
pprof:
  enable: false
  addr: "127.0.0.1:8316"

commercial-mode: true
logging-to-file: true
logs-max-total-size-mb: 0
error-logs-max-files: 10
usage-statistics-enabled: true
proxy-url: ""
force-model-prefix: false
passthrough-headers: false
request-retry: 3
max-retry-credentials: 2
max-retry-interval: 30
disable-cooling: false
auth-auto-refresh-workers: 2

routing:
  strategy: "round-robin"
  session-affinity: false
  session-affinity-ttl: "1h"

ws-auth: false
enable-gemini-cli-endpoint: false
nonstream-keepalive-interval: 0

Service Management

Console mode:

./cli-proxy-api

User systemd service:

systemctl --user enable cliproxyapi.service
systemctl --user start cliproxyapi.service
systemctl --user status cliproxyapi.service

Restart after config changes:

systemctl --user restart cliproxyapi.service

Access

Management Center: https://cpa.example.com:8317/management.html
API Endpoint: https://cpa.example.com:8317/v1

Test:

curl https://cpa.example.com:8317/v1/models \
  -H "Authorization: Bearer sk-example-cpa-key-please-change"

Notes

  • api-keys, login files, and provider refresh tokens are secrets.
  • If TLS is handled inside CLI Proxy API, renew and deploy certificate files consistently.
  • If Caddy handles TLS instead, bind CLI Proxy API to localhost and disable internal TLS (see the sketch after this list).
  • Keep pprof on 127.0.0.1 only.
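
For the Caddy-terminates-TLS variant, a minimal sketch of the config.yaml changes, using the key names from the example config above:

# listen on loopback only and let Caddy own TLS
host: "127.0.0.1"
tls:
  enable: false

A normal reverse_proxy block (cpa.example.com -> 127.0.0.1:8317) then publishes it, as in the Caddy note.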

DS2API Docker Compose Setup

This note records a DS2API deployment with Docker Compose.

Example values:

  • public domain: ds2api.example.com
  • public HTTPS port: 8446
  • app port inside container: 5001
  • host port bound to localhost: 6011
  • admin key: example_ds2api_admin_key_change_me
  • API key: sk-example-ds2api-key-please-change

All keys above are fake.

Clone

git clone https://github.com/CJackHwang/ds2api.git
cd ds2api
cp .env.example .env
cp config.example.json config.json

Environment

Edit .env:

PORT=5001
DS2API_HOST_PORT=6011
LOG_LEVEL=INFO
DS2API_ADMIN_KEY=example_ds2api_admin_key_change_me
DS2API_CONFIG_PATH=/app/config.json

The app listens on PORT inside the container. DS2API_HOST_PORT is the host-side port used by Docker Compose.

Prefer binding the host port to localhost:

ports:
  - "127.0.0.1:6011:5001"

config.json

Minimal example:

{
  "keys": [
    "sk-example-ds2api-key-please-change"
  ],
  "api_keys": [
    {
      "key": "sk-example-ds2api-key-please-change",
      "name": "main",
      "remark": "for OpenAI-compatible clients"
    }
  ],
  "accounts": [
    {
      "name": "main-account",
      "email": "deepseek-user@example.com",
      "password": "example-password-please-change"
    }
  ],
  "model_aliases": {
    "gpt-4o": "deepseek-v4-flash",
    "gpt-5": "deepseek-v4-pro"
  },
  "runtime": {
    "account_max_inflight": 2,
    "account_max_queue": 5,
    "token_refresh_interval_hours": 6
  }
}

DeepSeek account passwords and refresh tokens are credentials. Treat config.json as a secret file.
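
Tightening permissions on the two sensitive files costs nothing:

chmod 600 config.json .env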

Start

docker compose up -d
docker compose logs -f

Check logs:

docker logs --tail 200 -f ds2api

Caddy Reverse Proxy

ds2api.example.com:8446 {
    reverse_proxy 127.0.0.1:6011
}

Reload:

sudo caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy

Open the firewall port if needed:

sudo nft add rule inet filter input tcp dport 8446 accept

Test

curl https://ds2api.example.com:8446/v1/models \
  -H "Authorization: Bearer sk-example-ds2api-key-please-change"

Chat completion:

curl https://ds2api.example.com:8446/v1/chat/completions \
  -H "Authorization: Bearer sk-example-ds2api-key-please-change" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v4-flash",
    "messages": [{"role": "user", "content": "ping"}],
    "stream": false
  }'

Notes

  • Change DS2API_ADMIN_KEY; do not leave a default admin password.
  • Keep config.json private because it stores upstream account credentials.
  • Bind the service to 127.0.0.1 and publish it through Caddy.
  • Do not paste real sk-... keys into public docs.

New API Docker Compose Setup

This note records a minimal New API deployment with Docker Compose.

Example values:

  • public domain: newapi.example.com
  • public HTTPS port: 8445
  • container app port: 3000
  • repo: https://github.com/QuantumNous/new-api.git

The domain and port are examples. Replace them with your own values.

Install Docker

If Docker is not installed yet:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

Check it:

docker version
docker compose version

Clone

git clone https://github.com/QuantumNous/new-api.git
cd new-api

Review the compose file before starting it:

vim docker-compose.yml

For a reverse-proxy setup, bind the app to localhost if the compose file allows it:

ports:
  - "127.0.0.1:3000:3000"

If the upstream compose file also starts MySQL and Redis, keep their ports internal unless you have a reason to expose them.
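
One way to keep such edits out of tracked files is a compose override. A sketch, assuming the app service is named new-api upstream; note that recent Compose versions need the !override tag here, because merged port lists are appended, not replaced:

# docker-compose.override.yml
services:
  new-api:
    ports: !override
      - "127.0.0.1:3000:3000"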

Start

docker compose up -d

Check status:

docker compose ps
docker compose logs -f new-api
docker compose logs --tail=100 new-api

For first-time initialization, open the web page and follow the setup wizard:

https://newapi.example.com:8445

If you test without Caddy first, use:

http://198.51.100.20:3000

198.51.100.20 is a documentation IP address.

Caddy Reverse Proxy

Caddy config:

newapi.example.com:8445 {
    reverse_proxy 127.0.0.1:3000
}

Reload Caddy:

sudo caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy

Open the firewall port if needed:

sudo nft add rule inet filter input tcp dport 8445 accept

Update

cd /opt/new-api
git pull
docker compose pull
docker compose up -d
docker image prune -f

If you modified tracked files directly and git pull complains, either move local changes into override files or clean the tree intentionally. Do not blindly reset --hard unless you know what you are throwing away.
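
One conservative sequence, assuming the local edits are worth keeping:

git stash        # park local changes
git pull
git stash pop    # re-apply them; stop and resolve conflicts if this fails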

Logs

# all services
docker compose logs -f

# selected services
docker compose logs -f new-api
docker compose logs -f mysql
docker compose logs -f redis

# foreground debug
docker compose up new-api

AstrBot Service Setup

This note records a minimal AstrBot deployment on a VPS.

Example values:

  • service user: astrbot
  • working directory: /opt/astrbot
  • local port: 6185
  • public domain: bot.example.com
  • bot token: example-bot-token-please-change
  • callback path: /callback/astrbot

Replace all example secrets before using the config.
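
The note assumes the astrbot service user already exists. If it does not, one way to create it, mirroring the qBittorrent section:

sudo useradd -r -m -d /home/astrbot -s /usr/sbin/nologin astrbot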

Install uv

curl -LsSf https://astral.sh/uv/install.sh | sh
uv --version

Install AstrBot

uv tool install astrbot
command -v astrbot
astrbot --help

If astrbot is installed under the service user's home directory, use the absolute path in the systemd unit.

Environment File

sudo install -d -m 0750 -o astrbot -g astrbot /opt/astrbot
sudo vim /opt/astrbot/.env

Example .env:

BOT_TOKEN=example-bot-token-please-change
CALLBACK_URL=https://bot.example.com/callback/astrbot
ADMIN_ID=10000001
PORT=6185

systemd Unit

Create /etc/systemd/system/astrbot.service:

[Unit]
Description=AstrBot Service
After=network.target

[Service]
Type=simple
User=astrbot
Group=astrbot
WorkingDirectory=/opt/astrbot
EnvironmentFile=/opt/astrbot/.env
ExecStart=/home/astrbot/.local/bin/astrbot
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Enable it:

sudo systemctl daemon-reload
sudo systemctl enable --now astrbot
sudo systemctl status astrbot

Reverse Proxy

bot.example.com {
    reverse_proxy 127.0.0.1:6185
}

Logs

journalctl -u astrbot -e --no-pager
journalctl -u astrbot -f

Cowrie Honeypot Setup

Cowrie is an SSH/Telnet honeypot. It should be treated as hostile-facing software, not as a normal trusted application.

This note records a minimal deployment using a dedicated user, Python venv, and systemd.

Example values:

  • honeypot user: cowrie
  • internal Cowrie SSH port: 2222
  • public SSH trap port: 22 or 2222, depending on firewall/NAT design
  • fake hostname shown to attackers: backup-server

Install

Create a dedicated user:

sudo adduser --disabled-password --gecos "" cowrie
sudo -iu cowrie

Clone and install dependencies:

git clone https://github.com/cowrie/cowrie.git
cd cowrie
python -m venv cowrie-env
source cowrie-env/bin/activate
pip install --upgrade pip
pip install -r requirements.txt

Configure

cp etc/cowrie.cfg.dist etc/cowrie.cfg
vim etc/cowrie.cfg

Minimal config:

[honeypot]
hostname = backup-server

[ssh]
listen_endpoints = tcp:2222:interface=0.0.0.0

Do not run Cowrie as root just to bind port 22. Keep Cowrie on a high port and forward traffic to it if needed; a redirect rule is shown in the port-forwarding section below.

systemd Unit

Create /etc/systemd/system/cowrie.service:

[Unit]
Description=Cowrie SSH Honeypot
After=network.target

[Service]
User=cowrie
Group=cowrie
WorkingDirectory=/home/cowrie/cowrie
ExecStart=/home/cowrie/cowrie/cowrie-env/bin/python /home/cowrie/cowrie/src/cowrie/start.py --nodaemon
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Enable it:

sudo systemctl daemon-reload
sudo systemctl enable --now cowrie

Logs

journalctl -u cowrie -e --no-pager
sudo -iu cowrie tail -f ~/cowrie/var/log/cowrie/cowrie.json

The JSON log is usually the most useful file for later analysis.
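
For example, pulling failed logins out of it with jq. A sketch, assuming the stock Cowrie JSON event schema:

jq -r 'select(.eventid == "cowrie.login.failed")
       | [.timestamp, .src_ip, .username, .password] | @tsv' \
  ~/cowrie/var/log/cowrie/cowrie.json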

Firewall Or Port Forwarding

If Cowrie listens on 2222, a simple redirect can expose it as port 22:

table ip nat {
    chain prerouting {
        type nat hook prerouting priority dstnat;
        tcp dport 22 redirect to :2222
    }
}

Only do this if the real SSH service is moved somewhere else, such as 22222.
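
Moving the real sshd is a one-line config change; keep the current session open until the new port is confirmed reachable:

sudo vim /etc/ssh/sshd_config    # set: Port 22222
sudo systemctl restart sshd      # the unit is named "ssh" on Debian/Ubuntu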

Notes

  • Do not reuse real hostnames, banners, usernames, or internal paths.
  • Logs can contain malicious payloads. Treat them as untrusted input.
  • Keep the honeypot isolated from important credentials and services.

SSL Certificate Renewal Notes

This note records a simple SSL certificate workflow with certbot: acquire, verify, renew, and deploy.

The example domain is api.example.com. Replace it with your own domain.

Acquire A Certificate

For a standalone HTTP-01 challenge, port 80 must be reachable and not occupied by another process:

sudo certbot certonly --standalone -d api.example.com

If Caddy or Nginx is already using port 80, stop it temporarily:

sudo systemctl stop caddy
sudo certbot certonly --standalone -d api.example.com
sudo systemctl start caddy

Check The Certificate

sudo certbot certificates
sudo openssl x509 \
  -in /etc/letsencrypt/live/api.example.com/fullchain.pem \
  -noout -dates

Enable Renewal

sudo systemctl enable --now certbot.timer
systemctl list-timers | grep certbot

Run a dry test:

sudo certbot renew --dry-run

Deploy Hook

Some services cannot read directly from /etc/letsencrypt/live/..., or they need certificate files copied into a service-owned directory.

Example hook for a service called myapp:

#!/usr/bin/env bash
set -euo pipefail

DOMAIN="api.example.com"
TARGET_DIR="/etc/myapp/cert"

install -d -m 0750 "$TARGET_DIR"
install -m 0644 "/etc/letsencrypt/live/$DOMAIN/fullchain.pem" "$TARGET_DIR/fullchain.pem"
install -m 0600 "/etc/letsencrypt/live/$DOMAIN/privkey.pem" "$TARGET_DIR/privkey.pem"

systemctl reload myapp

Install it as a deploy hook:

sudo install -m 0755 deploy-myapp-cert.sh \
  /etc/letsencrypt/renewal-hooks/deploy/myapp.sh
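
The script does not rely on any certbot-provided environment variables, so it can be run once by hand to confirm it works:

sudo /etc/letsencrypt/renewal-hooks/deploy/myapp.sh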

Xray Reality / VLESS Setup

This note records a minimal Xray Reality / VLESS setup.

The example values are fake but shaped like real values, so the config is easier to read than a wall of placeholders.

Example environment:

  • server: vpn.example.com
  • listen port: 443
  • client UUID: 11111111-2222-4333-8444-555555555555
  • Reality private key: example_private_key_do_not_use
  • Reality public key: example_public_key_do_not_use
  • Reality short id: a1b2c3d4e5f60708
  • camouflage target: www.microsoft.com:443

Replace all of them.

Service Commands

Check the service:

sudo systemctl status xray
sudo journalctl -u xray -e --no-pager

After editing the config, test it before restart:

sudo xray run -test -config /usr/local/etc/xray/config.json
sudo systemctl restart xray

Generate IDs And Keys

xray uuid
xray x25519
openssl rand -hex 8

Keep the private key on the server. Put the public key in the client config.

Server Config Example

{
  "inbounds": [
    {
      "listen": "0.0.0.0",
      "port": 443,
      "protocol": "vless",
      "settings": {
        "clients": [
          {
            "id": "11111111-2222-4333-8444-555555555555",
            "flow": "xtls-rprx-vision"
          }
        ],
        "decryption": "none"
      },
      "streamSettings": {
        "network": "tcp",
        "security": "reality",
        "realitySettings": {
          "show": false,
          "dest": "www.microsoft.com:443",
          "xver": 0,
          "serverNames": ["www.microsoft.com"],
          "privateKey": "example_private_key_do_not_use",
          "shortIds": ["a1b2c3d4e5f60708"]
        }
      }
    }
  ],
  "outbounds": [
    { "protocol": "freedom" }
  ]
}

Firewall

Only the public entry port needs to be open:

sudo nft add rule inet filter input tcp dport 443 accept

If SSH uses a custom port such as 22222, keep that rule separate and preferably source-restricted.

Client-Side Fields

A client profile usually needs:

address: vpn.example.com
port: 443
uuid: 11111111-2222-4333-8444-555555555555
flow: xtls-rprx-vision
security: reality
sni: www.microsoft.com
publicKey: example_public_key_do_not_use
shortId: a1b2c3d4e5f60708

Notes

  • privateKey, UUID, and short id are secrets. Do not publish real values.
  • Time sync matters. If the client and server clocks are too far apart, debugging becomes confusing (a quick check follows below).
  • Start with a direct connection first. Add extra reverse proxies only after the base config works.
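
A quick clock check on both ends:

timedatectl status               # "System clock synchronized: yes" is what you want
sudo timedatectl set-ntp true    # enable systemd-timesyncd if it is off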

VPS nftables Firewall With Docker

This note records a small nftables firewall setup for a VPS that also runs Docker.

The goal is not to write a clever universal ruleset. The goal is simpler: keep the host input path small, expose only the ports that should be public, and avoid fighting Docker's own NAT rules.

The example uses documentation IP addresses:

  • 203.0.113.10 as the admin's trusted IP
  • 198.51.100.20 as the server IP
  • 22222 as the SSH port

Replace them with your own values.

Basic Rules

Use the inet family so the same table can handle IPv4 and IPv6:

#!/usr/sbin/nft -f

flush ruleset

table inet filter {
    set trusted_ipv4 {
        type ipv4_addr
        flags interval
        elements = {
            203.0.113.10,
        }
    }

    chain input {
        type filter hook input priority filter; policy drop;

        iif "lo" accept
        ct state established,related accept
        ct state invalid drop

        ip protocol icmp accept
        ip6 nexthdr icmpv6 accept

        # SSH: only allow the trusted admin IP.
        ip saddr @trusted_ipv4 tcp dport 22222 accept

        # Public web entry.
        tcp dport { 80, 443 } accept
    }

    chain forward {
        type filter hook forward priority filter; policy accept;
    }

    chain output {
        type filter hook output priority filter; policy accept;
    }
}

Docker Notes

Docker manages its own forwarding and NAT rules. If you blindly flush everything and then set forward to drop, containers may lose network access or published ports may stop working.
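
If a ruleset load does wipe Docker's rules and containers lose connectivity, restarting the daemon re-creates them:

sudo systemctl restart docker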

For a small VPS, I usually keep the boundary simple:

  • protect the host through the input chain
  • let Docker manage container NAT
  • expose public services through Caddy on 80/443
  • bind internal app ports to 127.0.0.1 when possible

Example Docker port mapping:

ports:
  - "127.0.0.1:8080:8080"

Then let Caddy publish it over HTTPS.

Apply Safely

Validate before loading:

sudo nft -c -f /etc/nftables.conf
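
If the check passes, load the ruleset and make it persistent, assuming the distro ships the standard nftables service that reads /etc/nftables.conf:

sudo nft -f /etc/nftables.conf
sudo systemctl enable --now nftables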