
  • The simplest (really the simplest) option would be to do a git init --bare in a directory on one machine. You can then clone, push, or pull from it, using the directory path as the URL from the same machine and SSH from the other (you could put this bare repo inside a container, but that would really just complicate things). You would have to init a new bare repo, in a new directory, per project.
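
    A rough sketch of that flow (the paths and user@server are just example names):

    # on the "server" machine: create a bare repo for one project
    mkdir -p ~/repos/myproject.git
    git init --bare ~/repos/myproject.git

    # on the same machine: clone using the directory path as the URL
    git clone ~/repos/myproject.git

    # from another machine: clone the same repo over ssh
    git clone user@server:repos/myproject.git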

    If by a self-hosted server you mean something with a web UI to handle multiple repositories with pull requests, issues, etc., like your own local GitHub/GitLab, the answer is Forgejo (this link has the instructions to deploy with Docker). And if you want to see what that looks like, there is an online public instance called Codeberg, where the Forgejo code itself is hosted alongside other projects.
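
    For a quick first test, the deployment can be as small as a single docker run (a sketch; the image tag, ports, and volume name here are assumptions, so check the linked docs for the recommended compose setup):

    docker run -d --name forgejo \
      -p 3000:3000 -p 2222:22 \
      -v forgejo-data:/data \
      codeberg.org/forgejo/forgejo:12   # replace 12 with the current release tag

    # then open http://localhost:3000 to run the setup wizard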




  • But I think I’m understanding a bit! I need to literally create a file named “/etc/radicale/config”.

    Yes, you will need to create that config file in one of those paths, so you can then continue with any of the configuration steps in the documentation. You can do that Addresses step first.

    A second file for the users is needed as well; I would guess the best location for it is /etc/radicale/users.

    For the Authentication part, you will need to install the apache2-utils package with sudo apt-get install apache2-utils, to get the htpasswd command for adding users.

    So the command to add a user would be htpasswd -5 -c /etc/radicale/users user1, with your username instead of user1 (the -c flag creates the file, so it is only needed for the first user).

    And what you need to add to the config file for it to read your user file would be:

    [auth]
    type = htpasswd
    htpasswd_filename = /etc/radicale/users
    htpasswd_encryption = autodetect
    

    Replacing the path with the one where you created your users file.
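
    Putting it all together, the whole sequence looks roughly like this (a sketch, assuming a Debian-based system and the paths above):

    # install the htpasswd utility
    sudo apt-get install apache2-utils

    # create the users file with a first user (-c creates the file)
    sudo htpasswd -5 -c /etc/radicale/users user1

    # add more users later without -c, so the file is not recreated
    sudo htpasswd -5 /etc/radicale/users user2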


  • I’m trying to follow the tutorial on the radicale website but am getting stuck in the “addresses” part.

    From reading the link you provided, you have to create a config file in one of two locations, if it doesn't exist:

    “Radicale tries to load configuration files from /etc/radicale/config and ~/.config/radicale/config”

    After that, add what the Addresses section says to the file:

    [server]
    hosts = 0.0.0.0:5232, [::]:5232
    

    And then start/restart Radicale.

    You should then be able to access it from another device, using the IP of the Pi plus that port.
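
    As a rough sketch of the whole step (assuming the /etc/radicale/config path and that Radicale runs as a systemd service):

    # create the config directory and file with the Addresses settings
    sudo mkdir -p /etc/radicale
    printf '[server]\nhosts = 0.0.0.0:5232, [::]:5232\n' | sudo tee /etc/radicale/config

    # restart Radicale so it picks up the config
    sudo systemctl restart radicale

    # from another device, replacing 192.168.1.50 with the Pi's IP
    curl http://192.168.1.50:5232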


  • Yeah, I started the same way, hosting LAN parties with Minecraft and Counter-Strike 1.6 servers on my own Windows machine at the time.

    But what happens when you want to install some app/service that doesn't have a native binary installer for your OS? You will not only have to learn how to configure/manage said app/service, you will also need to learn one or more additional layers.

    I could have said “a simple bare metal OS and a binary installer”, and to some people that would sound alien, while others would be as nitpicky about it as they are with me saying Docker (not seeing that the terminology I used was aimed not at a newbie but at them). If the apps you want to self-host are offered through things like YunoHost or CasaOS, that's great, and there are apps/services that can be installed directly on your OS without much trouble, which is also great. But there are cases where you will need to learn something extra (and for me that extra was Docker).


  • XKCD 2501 applies in this thread.

    I agree: there are so many layers of complexity in self-hosting that most of us tend to forget them, when the most basic setup would be a simple bare metal OS and Docker.

    you’ll probably want to upgrade the ram soon

    His hardware has a max RAM limit of 4 GB, so the only probable upgrade he could do is a SATA SSD. Even so, I'm running around 15 Docker containers on similar specs, so as a starting point it's totally fine.


  • I get your point, and I know it has its merits. I would actually recommend Proxmox for a later stage, when you are familiar with handling the basics of a server, and if you have hardware that can properly handle virtualization. For OP, who has a fairly old, low-spec machine and is also a newbie, I think fewer layers of complexity make a better starting point, so they don't get overwhelmed and just quit; then in the future they can build on top of that.


  • I have a Dell Inspiron 1545 with similar specs to yours, running Debian with Docker and around 15 services in containers. So my recommendation would be to run Debian server (with no DE), install Docker, and start from there.

    I would not recommend Proxmox or virtual machines to a newbie; I would instead recommend running stuff on a bare metal installation of Debian.

    There are a bunch of alternatives that manage and ease the management of apps you could choose from, like YunoHost, CasaOS, Yacht, Cosmos Cloud, Infinite OS, Cockpit, etc., that you can check out and use on top of Debian if you prefer. But I would still recommend spending time on learning how to do stuff yourself directly with Docker (using docker compose files), and you can use something like Portainer or Dockge to help you manage your containers; a minimal sketch of that workflow follows below.
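
    The basic loop with docker compose looks roughly like this (a sketch; the stack name and test image are just examples):

    # create a folder for the stack and a compose file inside it
    mkdir -p ~/stacks/whoami && cd ~/stacks/whoami

    cat > compose.yaml <<'EOF'
    services:
      whoami:
        image: traefik/whoami   # tiny test image that echoes request info
        ports:
          - "8080:80"
    EOF

    # start the stack in the background, then check it responds
    docker compose up -d
    curl http://localhost:8080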

    My last recommendation would be this: while you are testing and trying stuff, don't put your only copy of important data on the server, because if something breaks you will lose it. Invest time in learning how to properly backup/sync/restore your data, so that if something happens you have a safety net and a way to recover.


  • I have no experience with this app in particular, but most of the time when there is an issue like this, where you can't reach anything besides the index, it's because the app itself doesn't work well with path redirection under subfolders, meaning the app expects its routes to look like domain.tld/index.html instead of domain.tld/subfolder/index.html.

    Some apps let you add a prefix to all their routes so they can work this way; in that case you have to configure not only nginx but also the app itself to use the same subfolder.

    Other apps will work with the right configuration in nginx, as long as they do a new full page load every time the page changes its path/route.

    If it is a PWA that doesn't do a page load when the path changes, it's not going to work with subfolders, as it never does a page refresh that goes through nginx; it just rewrites the visible URL in the browser.

    What I can recommend is to switch to a subdomain like 2fa.domain.tld instead of a subfolder and test if it works; subdomains are the modern standard for this kind of thing, precisely to avoid this type of issue.

    Edit: looking at the app demo, it seems to be a Vue.js PWA that doesn't do any full page refreshes on a path change, so as stated, you will probably have to switch to a subdomain to make it work.
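
    For reference, a minimal sketch of what the subdomain approach could look like in nginx (the server name, port, and file paths are assumptions for illustration):

    cat > /etc/nginx/sites-available/2fa.conf <<'EOF'
    server {
        listen 80;
        server_name 2fa.domain.tld;   # a subdomain instead of a subfolder

        location / {
            # proxied at the root, so the app's routes need no prefix
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }
    EOF

    ln -s /etc/nginx/sites-available/2fa.conf /etc/nginx/sites-enabled/
    nginx -t && systemctl reload nginx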




  • As I said in the first line, no ranking of any kind can be trusted 100%. I pointed out an alternative to DistroWatch and why I would trust it a bit more; that is not saying I really trust it, or that I believe every result.

    As I said, it is less popular, so it could be a case where OpenMandriva has the upload integrated to run automatically for all its users by default, or they found another way to game that ranking.

    When I see any ranking, I do research whenever a distro is suspiciously well positioned and I haven't heard about it outside the place where I saw it referenced; and even so, I always stick to mainline distros.

    Honest results would need a standard that every distro adopts, with an opt-out (not opt-in) regular upload similar to what linux-hardware.org does, while actively trying to mitigate or deny attempts by certain distros or specific actors to tamper with the results; and we don't have that.

    Page rankings or clicks are not enough when not every device pings in a legitimate way (fake user agents or other means), and there is always the case of people who opt out or block this because they don't want to be tracked.

    On your point about something like Alexa Page Rankings, the thing I would add is that, at least for me, if a ranking is shown by a corporation, it is not trustworthy.


  • I think there is no ranking site that can be 100% trusted.

    That said, I trust linux-hardware.org a bit more than DistroWatch, even if it's not as popular, because you have to intentionally download an app/script for it to scan and upload your distro/hardware data (so no page clicks or mere traffic: you must actually have the distro installed). And if you repeatedly try to upload the same distro/hardware data, it doesn't count the multiple uploads in its statistics unless they are at least a month apart.

    Edit: and even on linux-hardware you have strange results, like OpenMandriva and ROSA as distros in the top 15; I have never heard of them outside there, and from what I can find they are somewhat popular in Russia and some parts of Europe.



  • Well, if you are forwarding the ports from your home router and you still can't reach anything, that is the most probable cause: there is no public IP reaching your home router.

    You could contact your ISP and confirm whether this is the case; they may offer to assign a public IP for an extra fee. Your only other option is to rent a cheap VPS and tunnel traffic between it and your home, though at that point you could also just decide to host your stuff on the VPS.
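
    A quick way to check for this (a sketch; ifconfig.me is just one of several services that echo your public IP):

    # the public IP the internet sees for your connection
    curl https://ifconfig.me

    # compare it with the WAN IP shown on your router's status page;
    # if the two differ, you are most likely behind CGNAT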


  • Even if your ISP (Internet Service Provider) doesn't have you behind CGNAT or Double NAT (meaning that multiple homes share the same public IP), some ISPs block the first 1024 ports, so any port below that number is unreachable.

    If the problem is that ports below 1024 are blocked but you do have a public IP reaching your home router, you could contact your ISP so they unblock those ports for you (I had to do that once, and at least with my ISP it was as simple as asking).

    The way to test whether your public IP reaches your home router is by exposing something on a port higher than 1024, say 8080. If you can reach a simple web server, Caddy, or any other service on 8080, you can at least confirm that blocked low ports are the issue.
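
    A minimal version of that test (a sketch, assuming Python is installed and port 8080 is forwarded on the router):

    # on the server: serve the current directory over HTTP on port 8080
    python3 -m http.server 8080

    # from a device outside your network (e.g. a phone on mobile data),
    # replacing 203.0.113.7 with your public IP
    curl http://203.0.113.7:8080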

    Be aware that even when an ISP assigns a single IP per house, that IP can be dynamic and rotate on a regular basis, such as daily or weekly.


  • As others have already commented, what you need is a Dynamic DNS service: you register a subdomain and set up a small program or script on your computer that pings the DDNS server every few minutes. You leave that running in the background, and if the IP making those requests changes, the subdomain is updated to point to the new IP automatically.

    You could access the blog from the DDNS subdomain directly, or if you get your own domain, you can point it at the DDNS one.

    If you want a recommendation, I have been using DuckDNS for years, and it has been pretty reliable.
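
    With DuckDNS, the update client can be as small as a cron entry (a sketch; yoursubdomain and your-token are placeholders for your own values):

    # crontab -e: refresh the DuckDNS record every 5 minutes; with an empty
    # ip= parameter, DuckDNS uses the source IP of the request itself
    */5 * * * * curl -s "https://www.duckdns.org/update?domains=yoursubdomain&token=your-token&ip="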


  • what is a good solution to keep a music folder backed up

    syncthing (file sync, update: removed this, not needed, actually need a backup solution)

    For a backup solution you could use Borg or Restic; they are CLI tools, but there are also GUIs for them.

    how can I back up my Docker setup in case I screw it up and need to set it all up again?

    learn to use Dockage to replace Portainer (done, happy with this)

    If you did the switch to Dockge, it might be because you prefer having your docker compose files easily accessible on the filesystem. The question is whether you also keep the persistent data of your containers in bind mounts, so that they are easy to back up.

    I have a git repo of my stacks folder with all my docker compose files (secrets in env files that are git-ignored), so that I can track all changes made to them.

    Also, I have a script that stops every container while I'm sleeping and triggers backups of the stacks folder and all my bind mount folders. That way I have a daily/weekly backup of all my stuff, and if something breaks, I can roll back to any of these backups, run docker compose up, and I'm back on track.
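
    A stripped-down sketch of such a script (the paths and the restic repo are assumptions for illustration; it also assumes RESTIC_PASSWORD is set in the environment):

    #!/bin/bash
    # stop all running containers so the data on disk is consistent
    docker stop $(docker ps -q)

    # back up the compose files and the bind-mounted data with restic
    restic -r /mnt/backup/restic-repo backup ~/stacks ~/appdata

    # bring every stack back up
    for d in ~/stacks/*/; do
      (cd "$d" && docker compose up -d)
    done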

    An important step is to frequently check that the backups are good. I do this by stopping my main service and running it from a different folder, with the backed-up compose file and bind mounts.