I have a folder of MP3s, some of which date back to 1999, just a few years after the format was popularised. Most of them have utterly terrible names (think RIDEONAM.MP3). I think at this point they might even survive the heat death of the universe. And they’ll still be terribly-organised.
That can be fixed easily* with programs like
beets
* = the program itself is easy to use, but installing and configuring it requires a PhD in Linux-Arch-ology
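For what it’s worth, once it *is* installed, the configuration can be as small as a few lines in `~/.config/beets/config.yaml` – a minimal sketch, with paths that are just placeholders:

```yaml
# ~/.config/beets/config.yaml – minimal example; paths are placeholders
directory: ~/Music              # where the organised files end up
library: ~/data/musiclibrary.db # beets' metadata database

import:
  move: yes    # move files into place instead of copying them
  write: yes   # write the corrected tags back into the files

plugins: fetchart   # optional: grab album art while you're at it
```

After that, `beet import /path/to/messy/mp3s` walks the folder and asks MusicBrainz what each album actually is.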
No, I’m sure there will be an obscure shell script that someone wrote to do the whole install for you, which will suddenly fail on a broken Python dependency (because why not) and then leave your system in a semi-altered state that doesn’t really work wrong but is never quite right again
I 100% learnt to use docker specifically to avoid the exact situation you described.
Got any good resources for learning?
In my (limited) experience Docker is just “run some script from a random GitHub that loads more stuff from a random GitHub… now you have a blob of code on your PC somewhere that’s unmodifiable and inaccessible unless it’s a web app in which case it’s listening on a random port with no access to any system resources”
I assume there’s something more I need to be doing but all the learning resources just kinda assume you understood wtf it’s doing.
Switch “some script” to “docker compose” and you are a subject matter expert.
Welcome to the linux community.
I mean I’d rather get told to “rtfm” than hear “it just works” with no explanation
I tend to think of docker containers like light virtual machines.
You can start with an image of a very simple bare operating system, or from an OS with a few things installed (in my case I have lately been using images from dockerhub under nvidia/cuda-ubuntu so that my container spins up with ubuntu and the drivers and SDK for my GPU).
Then essentially the Dockerfile becomes the sandbox from which to test installation scripts and see what works – by trial and error if necessary – to install the programs you want. If you make a mistake, or the install script fails as in the comment above, you can just kill the container and spin up a new one, without the “doesn’t really work wrong but it’s never quite right again” issue :)
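A minimal sketch of that workflow (base image and packages here are just examples):

```dockerfile
# Start from a known-good base image
# (swap in e.g. an nvidia/cuda image if you need GPU drivers)
FROM ubuntu:24.04

# Experiment with install steps here; if they break,
# you just rebuild the image from scratch
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

CMD ["/bin/bash"]
```

Then `docker build -t sandbox .` followed by `docker run --rm -it sandbox` gets you a throwaway shell, and `docker rmi sandbox` makes the whole experiment vanish without touching the host.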
I know this does sound like ‘rtfm’ but I definitely have made a lot of use of the Docker manuals: https://docs.docker.com/manuals/
These manuals, plus Stack Overflow searching for Dockerfile tips, and GitHub repos for the software I want to use (which sometimes contain Dockerfiles already), have been enough to get me acquainted with spinning up my own containers, installing what I need, and using docker compose to run multiple containers on a single host that can talk to each other. Beyond that, I had to search a bit harder (mostly on StackOverflow, but also a bit of tail-chasing using ChatGPT) to learn how to configure overlay networks that let containers talk to one another across different servers, and how to use docker stack to spin up a swarm of containers as services on a cluster.
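For the single-host case, the nice part is that service names in a compose file double as DNS names on the default network, so two containers can find each other with no extra setup – a toy example (service and image names are invented):

```yaml
services:
  app:
    image: example/app
    environment:
      # "db" resolves to the db container on the default compose network
      DATABASE_HOST: db
  db:
    image: postgres:16
```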
Yeah… that all makes sense and those docs seem decent. The piece of the puzzle that’s missing for me is: how does docker turn a yaml config that says like … (from their example):
> frontend:
>   image: example/webapp
>   ports:
>     - "443:8043"
>   networks:
>     - front-tier
>     - back-tier
>   configs:
>     - httpd-config
>   secrets:
>     - server-certificate
… into actual operating, functioning container blobs? E.g. how does it know that “secrets: server-certificate” means it should take an SSL cert and place it in the container? How does it know where to place that certificate?
I haven’t used secrets but I would go through the docker compose secrets docs
https://docs.docker.com/compose/how-tos/use-secrets/
At a glance it seems to be informative, but I’m not sure if it explains in depth how it is doing things under the hood.
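From what those docs describe, the under-the-hood part is fairly mundane: compose doesn’t know it’s an SSL cert at all. A secret is just a file you point it at, and it gets mounted read-only into the container at `/run/secrets/<name>`. Roughly (the host file path here is invented):

```yaml
services:
  frontend:
    image: example/webapp
    secrets:
      - server-certificate  # shows up inside the container
                            # as /run/secrets/server-certificate

secrets:
  server-certificate:
    file: ./server.cert     # source file on the host
```

It’s the app inside the container that has to be configured to read its cert from that path; docker only does the mounting.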
MusicBrainz Picard is a lot easier than beets, although it does require some introductory concepts to make sense (e.g. terminology like “release”, “release group”). And it makes it too easy to accidentally poison datasets in an attempt to be helpful. Harder to automate than beets, too.
Both of them also benefit from a decent knowledge of where your files came from, not as good for a random pile of mp3s.
Picard is very manual, I fucking love it though
I used MusicBrainz Picard when I stopped paying for Spotify. Went over my old library, audio-matched all my songs, added all the metadata, sorted everything. I moved it to Nextcloud and, using the Music Player plugin, I have my own Spotify with any Subsonic/Ampache client. Life is good.
What client do you recommend on Android
I use Symfonium and am happy with it, if that helps. Not FOSS – it has a one-time fee (i.e. you buy it, not a subscription), however. I found it worth it, and use it in conjunction with a Navidrome instance.
I’ve been using Tempo with Navidrome and it’s really good!
+1 for Feishin if you want a desktop client as well.
Sorry I am on iOS. I use Play:sub and I love it. Maybe there’s an android version?
I fucking love beets