• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: June 25th, 2023

  • This is an annoying quirk in the way docker handles networking between containers and I couldn’t find a good solution for this issue when I was trying out network_mode. I just couldn’t find a way to set docker up to automatically restart the dependent container. You can achieve this with services defined in the same stack (using depends_on), but I don’t know if it’s possible with your current setup.

    That’s why I mentioned manual routing in my other reply. It’s annoying to set up, but more convenient because you avoid having to manage restarts (or figuring out how to get docker to do it, which may not be possible in this case).
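    For reference, if both services were defined in the same stack, newer versions of docker compose could handle the restart automatically via the long-form depends_on (a minimal sketch, assuming Compose v2.17+ and gluetun’s built-in healthcheck; the qbittorrent service is just an illustration):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    # ...VPN provider settings...

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"
    depends_on:
      gluetun:
        condition: service_healthy
        restart: true  # restart qbittorrent whenever gluetun is recreated (Compose v2.17+)
```

    That doesn’t help with a split-stack setup, but it’s the behavior you’d get by merging the stacks.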


  • Uhh, I think you might be confused. Let me explain a bit more:

    1. Services and Containers aren’t the same thing. The distinction usually doesn’t matter in typical self-hosting scenarios, but in this case it does.

    In short: Services are what you define in a compose file; Containers are what you spin up based on those service definitions.

    2. network_mode is a service attribute and it can be defined for each service separately.
    3. network_mode: "service:{name}" requires the service being referenced to be part of the same stack. This is probably what you were thinking of when you wrote this reply.
    4. network_mode: "container:{name}" can freely reference any preexisting container. This helps you achieve what you want. You can define your gluetun container independently, along with any services you might want to be part of the same stack, and give it a unique identifier using container_name: myIndependentGluetun. After spinning it up, run your Qbittorrent container (or whatever service you want to route through the gluetun container) after adding network_mode: "container:myIndependentGluetun".
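    For example (a minimal sketch using the names above; the qbittorrent image is just an illustration), the independent gluetun stack could look like this:

```yaml
# Stack 1: gluetun defined independently, with a fixed container name
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: myIndependentGluetun
    cap_add:
      - NET_ADMIN
    # ...VPN provider settings...
```

    and the dependent service, defined in a completely separate stack:

```yaml
# Stack 2: reuses the preexisting container's network stack
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "container:myIndependentGluetun"
```

    Note that docker won’t restart the second container for you if myIndependentGluetun is recreated.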

    You could also route it manually. That’s a more advanced solution, but it’s more convenient than the network_mode approach. More on this here: https://discuss.tchncs.de/post/19039498



  • Even though I live in a studio apartment, when my doorbell goes off (a dumb doorbell with a tiny mic next to it) I have HA set the alarm volume to max on both my phone and tablet and send an alarm trigger. It sounds like a bunch of sirens going off whenever someone rings the doorbell. It’s not entirely pointless, though: I wear noise-cancelling headphones whenever I’m home, so there’s always a chance I’d miss the doorbell otherwise.



  • They do block Wireguard. They use DPI (Deep Packet Inspection) at the national level (it’s as expensive as it sounds) to filter and monitor all traffic. Once something as invasive as DPI is in place, Wireguard becomes rather easy to detect, because it doesn’t hide the fact that you’re establishing a tunnel (its purpose is to obscure the data being tunneled, not the tunnel itself).

    According to the specification, a specific sequence of bytes (the Handshake Initiation packet) is sent by the “client” to negotiate a connection, and a Handshake Response is sent back by the “server”. These handshake packets are basically a recognizable signature of the Wireguard protocol, so if you’re able to analyze all outgoing and incoming packets (which DPI enables you to do), you can watch for these signature packets and block the connection attempt.
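    Just to illustrate how cheap the fingerprint is once you can see payloads (this is a toy sketch, nothing like a real DPI box): a handshake initiation is a 148-byte message whose first four bytes encode message type 1, and the response is 92 bytes with type 2.

```python
# Toy classifier for the WireGuard handshake "signature" described above.
# Real DPI systems are far more sophisticated; this only shows why the
# protocol is trivially fingerprintable when payloads are inspectable.

def classify_wireguard(payload):
    """Return the handshake message name if the payload looks like WireGuard."""
    if len(payload) < 4:
        return None
    # 1-byte message type followed by 3 reserved zero bytes
    msg_type = int.from_bytes(payload[:4], "little")
    if msg_type == 1 and len(payload) == 148:
        return "handshake_initiation"  # client -> server
    if msg_type == 2 and len(payload) == 92:
        return "handshake_response"    # server -> client
    return None

# A censor watching UDP flows would drop or reset anything matching:
fake_initiation = b"\x01\x00\x00\x00" + b"\x00" * 144  # 148 bytes total
print(classify_wireguard(fake_initiation))  # -> handshake_initiation
```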

    There are variants of the Wireguard protocol that can circumvent this method of censorship (Amnezia Wireguard is one example), but they only work as long as they stay under the radar and don’t see mass adoption. Their own “signatures” would also just get blocked in that case.

    Ultimately, bypassing this level of censorship just isn’t something Wireguard was created for. Wireguard assumes you are only concerned with obscuring your traffic, not hiding the fact that you’re using a VPN. There are better tools for this job, like this: https://www.v2fly.org/en_US/

    Edit: Better link with the language set to English



  • There’s a workaround for Denuvo: buying a copy of the game with pooled funds and sharing it with all the participants using online activation. It’s not exactly cracking, but it is one way around it. The hard part is finding such groups, or starting one yourself. If anyone is interested, I can get you into one. Just send me a PM asking to join.

    You can get older stuff for free as well. Practically everything is free; you’ll just have to wait longer for newer titles, because people who donated funds take priority.

    Note: Unfortunately, this takes place in a Discord group. You’ll have to use Discord, and your account will have to be at least one month old to participate.







  • I think you already have a kill-switch (of sorts) in place with the two-Wireguard-container setup, since your clients lose internet access (except to the local network, since there’s a separate route for that on the Wireguard “server” container) if any of the following happens:

    • “Client” container is spun down
    • The Wireguard interface inside the “client” container is spun down (you can try this out by execing wg-quick down wg0 inside the container)
    • The interface is up, but the VPN connection itself is down (try changing the endpoint IP to a random one instead of the correct one provided by your VPN service provider)

    I can’t be 100% sure, because I’m not a networking expert, but this seems like enough of a “kill-switch” to me. I’m not sure what you mean by leveraging the restart. One of the things that I found annoying about the Gluetun approach is that I would have to restart every container that depends on its network stack if Gluetun itself got restarted/updated.

    But anyway, I went ahead and messed around on a VPS with the Wireguard+Gluetun approach and got it working. I’m using the latest versions (at the time of writing) of the Linuxserver.io Wireguard container and Gluetun. There are two things missing from the Gluetun firewall configuration you posted:

    • A MASQUERADE rule on the tunnel, meaning the tun0 interface.
    • Gluetun is configured to drop all FORWARD packets (filter table) by default. You’ll have to change that chain policy to ACCEPT. Again, I’m not a networking expert, so I’m not sure whether this compromises the kill-switch in any way that’s relevant to the desired setup/behavior. You could potentially set a more restrictive rule that only allows traffic coming in from <wireguard_container_IP>, but I’ll leave that up to you. You’ll also need to figure out the best way to persist the rules through container restarts.

    First, here’s the docker compose setup I used:

    networks:
      wghomenet:
        name: wghomenet
        ipam:
          config:
            - subnet: 172.22.0.0/24
              gateway: 172.22.0.1
    
    services:
      gluetun:
        image: qmcgaw/gluetun
        container_name: gluetun
        cap_add:
          - NET_ADMIN
        devices:
          - /dev/net/tun:/dev/net/tun
        ports:
          - 8888:8888/tcp # HTTP proxy
          - 8388:8388/tcp # Shadowsocks
          - 8388:8388/udp # Shadowsocks
        volumes:
          - ./config:/gluetun
        environment:
          - VPN_SERVICE_PROVIDER=<your stuff here>
          - VPN_TYPE=wireguard
          # - WIREGUARD_PRIVATE_KEY=<your stuff here>
          # - WIREGUARD_PRESHARED_KEY=<your stuff here>
          # - WIREGUARD_ADDRESSES=<your stuff here>
          # - SERVER_COUNTRIES=<your stuff here>
          # Timezone for accurate log times
          - TZ=<your stuff here>
          # Server list updater
          # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
          - UPDATER_PERIOD=24h
        sysctls:
          - net.ipv4.conf.all.src_valid_mark=1
        networks:
          wghomenet:
            ipv4_address: 172.22.0.101
    
      wireguard-server:
        image: lscr.io/linuxserver/wireguard
        container_name: wireguard-server
        cap_add:
          - NET_ADMIN
        environment:
          - PUID=1000
          - PGID=1001
          - TZ=<your stuff here>
          - INTERNAL_SUBNET=10.13.13.0
          - PEERS=chromebook
        volumes:
          - ./config/wg-server:/config
          - /lib/modules:/lib/modules #optional
        restart: always
        ports:
          - 51820:51820/udp
        networks:
          wghomenet:
            ipv4_address: 172.22.0.5
        sysctls:
          - net.ipv4.conf.all.src_valid_mark=1
    

    You already have your “server” container properly configured. Now for Gluetun: I exec into the container with docker exec -it gluetun sh. Then I set the MASQUERADE rule on the tunnel: iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE. And finally, I change the FORWARD chain policy in the filter table to ACCEPT: iptables -t filter -P FORWARD ACCEPT.

    Note on the last command: In my case I used iptables-legacy, because all the rules were already defined there (iptables warns you if that’s the case), but your container’s version may vary. I saw different behavior on the testing container I spun up on the VPS compared to the one I have running on my homelab.
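    Since rules added by hand disappear whenever the container is recreated, one low-tech way to persist them is to reapply from the host after each (re)start, e.g. from a cron job or systemd unit. A sketch (apply_gluetun_rules is a made-up helper name; "gluetun" is the container_name from the compose file above):

```shell
#!/bin/sh
# Reapply the two missing Gluetun rules after a container (re)start.
apply_gluetun_rules() {
    # MASQUERADE on the tunnel interface
    docker exec "$1" iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE
    # allow forwarded traffic through the filter table
    docker exec "$1" iptables -t filter -P FORWARD ACCEPT
}

# Example invocation (uncomment to run against a live container):
# apply_gluetun_rules gluetun
```

    Swap in iptables-legacy inside the function if that’s where your container’s rules live.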

    Good luck, and let me know if you run into any issues!

    EDIT: The rules look like this afterwards:

    Output of iptables-legacy -vL -t filter:

    Chain INPUT (policy DROP 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
    10710  788K ACCEPT     all  --  lo     any     anywhere             anywhere
    16698   14M ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
        1    40 ACCEPT     all  --  eth0   any     anywhere             172.22.0.0/24
    
    # note the ACCEPT policy here
    Chain FORWARD (policy ACCEPT 3593 packets, 1681K bytes)
     pkts bytes target     prot opt in     out     source               destination
    
    Chain OUTPUT (policy DROP 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
    10710  788K ACCEPT     all  --  any    lo      anywhere             anywhere
    13394 1518K ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
        0     0 ACCEPT     all  --  any    eth0    dac4b9c06987         172.22.0.0/24
        1   176 ACCEPT     udp  --  any    eth0    anywhere             connected-by.global-layer.com  udp dpt:1637
      916 55072 ACCEPT     all  --  any    tun0    anywhere             anywhere
    

    And the output of iptables -vL -t nat:

    Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
    
    Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
    
    Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
        0     0 DOCKER_OUTPUT  all  --  any    any     anywhere             127.0.0.11
    
    # note the MASQUERADE rule here
    Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
        0     0 DOCKER_POSTROUTING  all  --  any    any     anywhere             127.0.0.11
      312 18936 MASQUERADE  all  --  any    tun+    anywhere             anywhere
    
    Chain DOCKER_OUTPUT (1 references)
     pkts bytes target     prot opt in     out     source               destination
        0     0 DNAT       tcp  --  any    any     anywhere             127.0.0.11           tcp dpt:domain to:127.0.0.11:39905
        0     0 DNAT       udp  --  any    any     anywhere             127.0.0.11           udp dpt:domain to:127.0.0.11:56734
    
    Chain DOCKER_POSTROUTING (1 references)
     pkts bytes target     prot opt in     out     source               destination
        0     0 SNAT       tcp  --  any    any     127.0.0.11           anywhere             tcp spt:39905 to::53
        0     0 SNAT       udp  --  any    any     127.0.0.11           anywhere             udp spt:56734 to::53
    
    

  • Gluetun likely doesn’t have the proper firewall rules in place to enable this sort of traffic routing, simply because it’s made for another use case (using the container’s network stack directly with network_mode: "service:gluetun").

    First, try to get this setup working with two vanilla Wireguard containers (instead of Wireguard + Gluetun). If that works, you’ll know your Wireguard “server” container is properly set up. Then replace the second container (the one acting as a VPN client) with Gluetun and run tcpdump again. You likely need to add a POSTROUTING MASQUERADE rule on the nat table.

    Here’s my own working setup for reference.

    Wireguard “server” container:

    [Interface]
    Address = <address>
    ListenPort = 51820
    PrivateKey = <privateKey>
    # Allow forwarded traffic and NAT it out of the container
    PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    # Mark Wireguard's own packets so they bypass the VPN route below
    PostUp = wg set wg0 fwmark 51820
    # Send everything else out via the VPN "client" container
    PostUp = ip -4 route add 0.0.0.0/0 via 172.22.0.101 table 51820
    PostUp = ip -4 rule add not fwmark 51820 table 51820
    # Keep more specific routes from the main table (e.g. the docker subnet) working
    PostUp = ip -4 rule add table main suppress_prefixlength 0
    # Reach the LAN through the docker bridge gateway
    PostUp = ip route add 192.168.16.0/24 via 172.22.0.1
    PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del 192.168.16.0/24 via 172.22.0.1
    
    #peer configurations (clients) go here
    

    and the Wireguard VPN client that I route traffic through:

    # Based on my VPN provider's configuration + additional firewall rules to route traffic correctly
    [Interface]
    PrivateKey = <key>
    Address = <address>
    DNS = 192.168.16.81 # local Adguard
    PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE # Route traffic coming in from outside the container (host/other containers)
    PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
    
    [Peer]
    PublicKey = <key>
    AllowedIPs = 0.0.0.0/0
    Endpoint = <endpoint_IP>:51820
    

    Note the NAT MASQUERADE rule.


  • Disco Elysium was full of such moments for me. Here’s one:

    You spend a lot of time in the game basically talking to yourself and your inner voices, and one of these voices is Volition. If you put enough points into it, it chimes in when you’re having an identity crisis or struggling to keep yourself together, and it tries to cheer you up and keep you going. At the end of Day 1, you, an amnesiac cop, stand on a balcony in an impoverished district, reflecting on the day’s events and trying to make sense of the reality you’ve woken up into with barely any of your memories intact. If you pass a Volition check, it says the following line:

    “No. This is somewhere to be. This is all you have, but it’s still something. Streets and sodium lights. The sky, the world. You’re still alive.”

    This line in combination with the somewhat retro Euro setting, the faint lighting, and the sombre-yet-somewhat-upbeat music was very powerful. The image it painted was quite relatable for me. I just sat there for a minute staring at the scene and soaking it all in. Even though this is a predominantly text-based game with barely any cinematics/animations, I felt a level of immersion I had rarely, if ever, experienced before.

    Oh, look at that. Someone actually made a Volition compilation. 😀 This video will give you a better idea of what I’m describing (minor spoilers ahead!): https://www.youtube.com/watch?v=ENSAbyGlij0