Developers: I will never ever do that, no one should ever do that, and you should be ashamed for guiding people to. I get that you want to make things easy for end users, but at least exercise some bare minimum common sense.

The worst part is that bun is just a single binary, so the install script is bloody pointless.

Bonus mildly infuriating is the mere existence of the .sh TLD.

Edit b/c I’m not going to answer the same goddamned questions 100 times from people who blindly copy/paste the question from StackOverflow into their code/terminal:

WhY iS ThaT woRSe thAn jUst DoWnlOADing a BinAary???

  1. Downloading the compiled binary from the release page (if you don’t want to build it yourself) has been a way to acquire software since shortly after the dawn of time. You already know what you’re getting yourself into.
  2. There are SHA256 checksums of each binary available in each release on GitHub. You can confirm the binary was not tampered with by comparing a locally computed checksum to the value in the release’s checksums file.
  3. Binaries can also be signed (not that signing keys have never leaked, but it’s still one step in the chain of trust).
  4. The install script they’re telling you to pipe is not hosted on GitHub. A misconfigured or compromised server can let a bad actor tamper with the install script that gets piped directly into your shell. The domain could also lapse and be re-registered by a bad actor to point to a malicious script. Really, there’s a lot that can go wrong with that.
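Point 2 takes all of two commands. A sketch with stand-in filenames (a real release ships the binary plus a checksums file):

```shell
# Simulate a downloaded release artifact plus its published checksum
# file, then verify they match. Filenames are hypothetical stand-ins.
printf 'pretend this is a binary\n' > bun-linux-x64
sha256sum bun-linux-x64 > SHASUMS256.txt

# The verification step: recompute locally and compare to the published value
if sha256sum -c SHASUMS256.txt; then
  echo "checksum OK"
else
  echo "checksum MISMATCH - do not run this binary"
fi
```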

The point is that it is bad practice to just pipe a script to be directly executed in your shell. Developers should not normalize that bad practice.

    • felbane@lemmy.world · 4 days ago

      Common or not, it’s still fucking awful and the people who promote this nonsense should be ashamed of themselves.

      • perishthethought@lemm.ee · 4 days ago

        Yah, when I read this, I was like, pretty sure pi-hole started this as a popular option. I dig it though, so I guess OP and I are not on the same page. (I do usually look over the bash scripts before running them piped to bash, though.)

    • PlexSheep@infosec.pub · 3 days ago

      For rust at least, those are packaged in Debian and other distros too. I think rustup is in Debian Trixie too.

    • barsoap@lemm.ee · 3 days ago

      --proto '=https' --tlsv1.2

      That’s how you know they care: there’s no MITMing that stuff without hijacking the CA, at which point you have a whole other set of problems. And if you trust rustc not to delete your sources when they fail a typecheck, then you can trust their installer. -f is important so you don’t execute half-downloaded scripts on failure; -s and -S are verbosity options; -L follows redirects.
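The `--proto '=https'` restriction can even be demonstrated offline, since curl rejects the URL scheme before opening any connection (the URL below is a made-up example):

```shell
# --proto '=https' means: only ever speak HTTPS, including across redirects.
# Handing curl a plain-http URL fails immediately, before any bytes move.
if curl --proto '=https' --tlsv1.2 -sSf "http://example.com/install.sh" 2>/dev/null; then
  echo "fetched"
else
  echo "refused: not https"
fi
```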

      • tgt@programming.dev · 1 day ago

        So I was wondering what the flags do too, to check if this is any safer. My curl manual does not say that -f will not output half-downloaded files, only that it will fail on HTTP response codes of 400 or greater… Did you test that it does not emit the part that it got on a network error? At least with the $() that timing attack won’t work, because you only start executing when curl completes…
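The $() point deserves spelling out: command substitution collects the entire output before the shell parses any of it, unlike a pipe, where execution can start mid-download. A local illustration, no curl involved:

```shell
# Command substitution: the "download" below completes in full before
# bash -c ever sees the script, so partial execution is impossible.
script=$(printf 'echo step one\necho step two\n')  # stand-in for: $(curl -fsSL <url>)
bash -c "$script"
```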

        • barsoap@lemm.ee · 1 day ago

          With the caveat that I’m currently blanking on the semantics of sub-shells: yes, I think you’re right. -f is about not executing <html><h1>404 Not Found</h1></html>. Does curl output half-transferred documents to stdout in the first place, though? Also, bash -c is going to hit the command-line length limit at some point.

          And no I haven’t tried anything of this. I use a distribution, I have a package installer.

          • tgt@programming.dev · 1 day ago

            See the proof of concept for the pipe detection mentioned elsewhere in the thread https://github.com/Stijn-K/curlbash_detect . For that to work, curl has to send to stdout without having all data yet. Most reasonable scripts won’t be large enough, and will probably be buffered in full, though, I guess.

            Thanks for the laugh on the package installer, haha.

            • barsoap@lemm.ee · 1 day ago

              Just skimmed through rustup-init.sh, and executing half-downloaded things is not an issue: it’s all function declarations, one set -u, and one variable declaration (without side effects) before the last line of the script kicks off everything with main "$@" || exit 1. It’s also a dash/bash/ksh/zsh/whatever polyglot; someone put a lot of thought into this. And it’s really just figuring out the architecture and OS to know which binary installer to download. So don’t worry, it won’t accidentally rm -rf /usr.
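That layout is worth copying in your own scripts: define everything first, and make the final line the only one with side effects, so a truncated download parses as harmless function definitions. A minimal sketch of the pattern:

```shell
#!/bin/sh
# Truncation-safe layout, in the style rustup-init.sh uses: nothing below
# has any effect until the very last line runs.
set -u

say() { echo "installer: $1"; }

main() {
  say "detecting platform..."
  say "done"
}

# If the download was cut off anywhere above this line, no code has run.
main "$@" || exit 1
```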

    • ChaoticNeutralCzech@feddit.org · 3 days ago

      There is even a Windows (Powershell) example for Winutil:

      Stable Branch (Recommended)

      irm "https://christitus.com/win" | iex
      

      Better than explaining how to make a .ps1 file trusted for execution (thankfully, one of the few executable file extensions that Windows doesn’t trust by default), but why not just use some basic .exe builder at this point?

      Obligatory “they better make it a script that automatically creates a medium for silent Linux Mint installation, modifies the relevant BIOS settings and restarts” to prevent obvious snarky replies

  • badbytes@lemmy.world · 3 days ago

    I wouldn’t call anyone who does this a developer. No offense, but it’s a horrible practice that usually comes from hacky projects.

  • Eager Eagle@lemmy.world · 4 days ago

    I’ll die on the hill that curl | bash is fine if you’re installing software that self updates - very common for package managers like other comments already illustrated.

    If you don’t trust the authors, don’t install it (duh).

    • Possibly linux@lemmy.zip · 4 days ago

      There was a malicious website on Google pretending to be the brew package manager. It didn’t leave any trace, but when you ran the command it ran an info stealer and then installed brew.

      If this was rare I could understand but it is fairly common.

    • moonpiedumplings@programming.dev · 3 days ago

      If you don’t trust the authors, don’t install it (duh).

      Just because I trust the authors to write good rust/javascript/etc code, doesn’t mean I trust them to write good bash, especially given how many footguns bash has.

      Steam once deleted a user’s home directory.

      But: I do agree with you. I think curl | bash is reasonable for package managers like nix or brew. And then once those are installed, it’s better to get software like the Bun OP mentions from them, rather than from curl | bash.

  • TrickDacy@lemmy.world · 4 days ago

    I’m curious, op, do you think it’s bad to install tools this way in an automated fashion, such as when developing a composed docker image?

    • Moonrise2473@feddit.it · 4 days ago

      Protect from accidental data damage: for example, the dev might have accidentally pushed an untested change where there’s a space in the path:

      rm -rf / ~/.thatappconfig/locatedinhome/nothin.config

      a single typo that will wipe the whole drive instead of just the app config (yes, it happened; I clearly remember that more than a decade ago there was a commit on GitHub, with lots of snarky comments, on a script with such a typo)
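For the record, the usual fix for that class of typo is quoting plus a sanity check before anything destructive runs (the path and variable name here are hypothetical):

```shell
# Guarded cleanup: quoting stops a space from splitting the path into two
# arguments, and the checks refuse to run on empty, root, or home values.
appdir="$HOME/.thatappconfig"
mkdir -p "$appdir"    # stand-in for the app's real config directory

if [ -n "$appdir" ] && [ "$appdir" != "/" ] && [ "$appdir" != "$HOME" ]; then
  rm -rf -- "$appdir"   # quoted: this can never expand to `rm -rf / ...`
fi

[ ! -e "$appdir" ] && echo "removed exactly one directory"
```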

      Also: malicious developers that will befriend the honest dev in order to sneak an exploit.

      Those scripts need to be universal, so there are hundreds of lines checking the Linux distro and what tools are installed, and asking the user to install missing ones with a package manager. They require hours and hours of testing with multiple distros, and they aren’t easy to understand either… isn’t it better to use that time to simply write clear documentation on how to install it?

      Like: “this app requires to have x, y and z preinstalled. [Instructions to install said tools on various distros], then copy it in said subdirectory and create config in ~/.ofcourseinhome/”

      It’s also easier for the user to uninstall it, as they can follow the steps in reverse.

      • TrickDacy@lemmy.world · 4 days ago

        Yes, I understand all of that, but in the context of my docker containers I wouldn’t be losing any data that isn’t reproducible.

    • Possibly linux@lemmy.zip · 4 days ago

      Very much yes

      You want to make your Dockerfile as reproducible as possible. I would pull a specific commit from git and build from source. You can chain together containers in a single Dockerfile so that one container builds the software and the other deploys it.
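That chaining is Docker’s multi-stage build feature. A minimal sketch (the repo URL, commit, and image tags are illustrative, not from this thread):

```dockerfile
# Stage 1: build from a pinned source checkout for reproducibility
FROM rust:1.83 AS builder
RUN git clone https://github.com/example/tool.git /src \
 && cd /src \
 && git checkout 0123abcd \
 && cargo build --release

# Stage 2: copy only the built artifact into a slim runtime image
FROM debian:bookworm-slim
COPY --from=builder /src/target/release/tool /usr/local/bin/tool
ENTRYPOINT ["tool"]
```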

      • TrickDacy@lemmy.world · 4 days ago

        I mean, you’re not OP. But your method requires all updates to be manual, while some of us specifically want updates to be as automated as possible.

        • Possibly linux@lemmy.zip · 4 days ago

          I don’t think it is that hard to automate a container build. Ideally you should be using the official OCI image or some sort of package repo that has been properly secured.

        • moonpiedumplings@programming.dev · 3 days ago

          You can use things like dependabot or renovate to update versions in a controlled manner, rather than automatically using the latest of everything.

          On the other side, when it comes to docker containers, you can use github actions or some other CI/CD system to automate the container build.

    • AnyOldName3@lemmy.world · 2 days ago

      PowerShell has a system to sign scripts. With its default configuration it will refuse to execute scripts at all, and with the more sensible configuration you should switch to if you actually use PowerShell, it refuses to execute unsigned scripts from the Internet.

      I suspect that most of the scripts you’re referring to just set -ExecutionPolicy Bypass to disable signature checking and run any script, though.
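For reference, switching to that more sensible configuration is a one-liner (this affects the current user only):

```powershell
# RemoteSigned: locally written scripts run freely; anything downloaded
# from the internet must carry a valid signature to execute.
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
```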

  • aesthelete@lemmy.world · 3 days ago

    That’s becoming alarmingly common, and I’d like to see it go away entirely.

    Random question: do you happen to be downloading all of your Kindle books? 😜

  • lastweakness@lemmy.world · 3 days ago

    What’s a good package manager right now for stuff like this if I don’t want to use the distro package manager? I want up-to-date versions of these tools, ideally shipped by the devs themselves, with easy removal and updates. Is there anything like that right now? I think Homebrew is like that? But I wish it didn’t require creating an entire new user and worked on a per-user basis.

    In an ideal world, I would want to use these tools in such a way that I can uninstall them, including any tool data (cache, config, etc.), and update them in a reliable manner. Most of these tools are also hellbent on creating a new “.<tool-name>” folder or file in the home folder, ignoring the XDG spec.
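For tools that do follow the spec, the lookup is just a few environment variables with standard fallbacks, which is exactly what makes clean uninstalls possible ("mytool" is a hypothetical name):

```shell
# XDG-compliant locations: honor the variables if set, fall back to the
# spec's documented defaults if not.
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/mytool"
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/mytool"
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/mytool"

echo "config: $config_dir"
echo "cache:  $cache_dir"
echo "data:   $data_dir"
```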

    • corsicanguppy@lemmy.ca · 3 days ago

      if i don’t want to use the distro package manager

      I’m stunned you don’t understand why this is a problem.

      This was absolutely trivial stuff before the great Y2K layoffs, so if you can’t figure it out, ask someone who was releasing software professionally back then.

      And please, if you learn something from this, try to help others.

      • lastweakness@lemmy.world · 3 days ago

        I don’t want to use a distro package manager for certain software because nearly every distro except Arch requires adding third-party repositories, which can stop getting updates at any second.

        Don’t worry, I understand the intricacies of these problems a lot more deeply than you probably realise. As a developer, it can suck when your “hotfix” cools down by the time a distro gets around to packaging it. And as a packager, you’re human in the end. As a user though, you just want stuff to work.

        As a longtime Linux user, this isn’t really a problem for me, none of this is. But what about a new user? We need to address these issues at some point if we want Linux to be truly user-friendly.

    • expr@programming.dev · 3 days ago

      Nix. I use it for everything, including all of my tools I use on my work MacBook.

      There are many ways to use nix for this stuff, but personally I use home-manager in a flake-based setup. Versions of tools are all pinned in a lockfile which is committed to source control, so it’s easy to get my config and all my tools on a new machine without any breakage (it does require installing first, though).

      It’s a great tool and has largely solved the pain of dealing with having to work on MacOS, for me.

      • lastweakness@lemmy.world · 3 days ago

        Nix is a great suggestion and I think i will be using it moving forward as well. Thanks. Ideally I want to use NixOS, do you know if secure boot is still a pain point with NixOS?

      • PartiallyApplied@lemmy.world · 2 days ago

        Do you know of any Nix projects which are basically nix-but-as-if-it-were-brew?

        I get that this violates the Nix philosophy, but it’s hard convincing collabs to install a root package manager, which has install commands like:

        nix profile install nixpkgs/nixos-24.11#hello

        I get that it’s flexible, but I would like something more like:

        nix install hello

        I want three things:

        1. rootless
        2. can manage “casks”
        3. global cli with support for per-project flakes

        Do you know if this exists / is being developed?

        • expr@programming.dev · 2 days ago
          1. Supposedly there’s a way to install nix without root access, but I can’t speak to it as I’ve never tried. Ofc it doesn’t require sudo to install packages or anything, though.
          2. I don’t think it does this right now, largely because it’s super fucking complicated (as is basically everything Apple) and homebrew casks themselves have had a ton of headaches around it. But nevertheless, I think home-manager has some workarounds it uses itself to enable many common GUI apps on MacOS.
          3. Not sure exactly what you mean, but I think it does that?

          If you want to install packages purely by name, you can use nix-env -i hello or whatever. But it’s pretty janky and not really a recommended way of doing things.

  • pixxelkick@lemmy.world · 4 days ago

    Can you actually explain what concerns you have that wouldn’t be any more of a concern if you downloaded and installed a binary directly?

    At least with a shell script you can read it in plaintext; a binary can just do who the fuck knows what.

    • Admiral Patrick@dubvee.org (OP) · 4 days ago

      If they expected you to read the install script, they’d tell you to download and run it. It’s presented here for lazy people in a “trust me, bro, nothing could ever go wrong” form.

      • There are SHA256 checksums of each binary available in each release on GitHub. You can confirm the binary was not tampered with by comparing a locally computed checksum to the value in the release’s checksums file.

      • Binaries can also be signed (not that signing keys have never leaked, but it’s still one step in the chain of trust).

      • The install script is not hosted on GitHub. A misconfigured or compromised server can let a bad actor tamper with the install script that gets piped directly into your shell. The domain could also lapse and be re-registered by a bad actor to point to a malicious script. Really, there’s a lot that can go wrong with that.

      • ozymandias117@lemmy.world · 4 days ago

        I’ve gone through and responded to the other top level comments as well, but another massive issue you could add to your edit is that servers can detect curl <URL> | sh rather than just curl <URL> and deliver a malicious payload only if it’s being piped directly to a shell.

        There’s a proof-of-concept attack showing its efficacy here: https://github.com/Stijn-K/curlbash_detect

      • Possibly linux@lemmy.zip · 4 days ago

        On GitHub you can look at the CI to see if the build process looks reasonable.

        I would still get packages from a distro though

  • treadful@lemmy.zip · 4 days ago

    I’m with you, OP. I’ll never blindly do that.

    Also, to add to the reasons that’s bad:

    • you can put restrictions on a single executable. setuid, SELinux, apparmor, etc.
    • a simple compromise of a Web app altering a hosted text file can fuck you
    • it sets the tone for users, making them think executing arbitrary shell commands is safe

    I recoil every time I see this. Most of the time I’ll inspect the shell script, but often, if they’re doing this, the scripts are convoluted as fuck to support a ton of different *nix systems. So it ends up burning a ton of time when I could’ve just downloaded and verified the executable and been done with it already.

  • rustymitt@lemmy.world · 4 days ago

    I assume your concern is with security, so then what’s the difference between running the install script from the internet and downloading a binary from the internet and running it?

      • Eager Eagle@lemmy.world · 4 days ago

        You’re already installing a binary from them; the trust in both the authors and the delivery method is already there.

        If you don’t trust, then don’t install their binaries.

        • johntash@eviltoast.org · 4 days ago

          You aren’t just trusting the authors though. You’re trusting that no other step in the chain has been tampered with or compromised somehow.

    • DuckWrangler9000@lemmy.world · 3 days ago

      I think you and a lot of others are late to the idea that “mildly” is kind of a joke. Many things here are majorly infuriating. On Reddit, many of the top posts aren’t even major; they’re catastrophic, just absurd. I’ve yet to find anything mild.

  • Scrubbles@poptalk.scrubbles.tech · 4 days ago

    I’ve seen a lot of projects doing this lately. Just run this script, I made it so easy!

    Please, devs, stop this. There are defined ways to distribute your apps. If it’s local, provide a binary, a flatpak, or an exe. For docker, provide a docker image with well-documented environment variables, ports, and volumes. I do not want arbitrary scripts that set all this up for me; I want the defined ways to do this.

  • Godort@lemm.ee · 4 days ago

    It’s bad practice to do it, but it makes it especially easy for end users who already trust both the source and the script.

    On the flip side, you can also just download the script from the site without piping it directly to bash if you want to review what it’s going to do before you run it.
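That flip side, as a concrete workflow (simulated here with a local stand-in script so there’s no real URL to fetch):

```shell
# Instead of `curl <url> | bash`: save the script, read it, then run it.
# The printf line stands in for: curl -fsSL <url> -o install.sh
printf 'echo "installing..."\n' > install.sh

cat install.sh   # in real life: open it in a pager and actually read it
sh install.sh
```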

    • Deello@lemm.ee · 4 days ago

      It’s bad practice to do it, but it makes it especially easy for end users who already trust both the source and the script.

      You’re not wrong, but this is what led to the xz “hack” not too long ago. When it comes to data, trust is a fickle mistress.

    • thebestaquaman@lemmy.world · 4 days ago

      Would have been much better if they just pasted the (probably quite short) script into the readme so that I can just paste it into my terminal. I have no issue running commands I can have a quick look at.

      I would never blindly pipe a script to be executed on my machine though. That’s just next level “asking to get pwned”.

      • WolfLink@sh.itjust.works · 3 days ago

        These scripts are usually longer than that and do some checking of which distro you are running before doing something distro-specific.

        • zalgotext@sh.itjust.works · 3 days ago

          Doing something distro-specific in an install script for a single binary seems a bit overcomplicated to me, and definitely not something I want to blindly pipe into my shell.

          The bun install script in this post determines what platform you’re on, defines a bunch of logging convenience functions, downloads the latest bun release zip file from GitHub, extracts and manually places the binary in the right spot, then determines what shell you’re using and installs autocompletion scripts.

          Like, c’mon. That’s a shitload of unnecessary stuff to ask the user to blindly pipe into their shell, all of which could be avoided by putting a couple of sentences into a readme. At bare minimum, that script should be checked into their git repo and documented in their readme/user docs, but they shouldn’t encourage anyone to pipe it into their shell.
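The platform-detection step such scripts open with boils down to a couple of uname calls; a sketch (the artifact naming is hypothetical):

```shell
# Detect OS and CPU architecture the way typical install scripts do,
# then assemble the name of the release artifact to download.
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. linux, darwin
arch=$(uname -m)                               # e.g. x86_64, aarch64

echo "would fetch: tool-${os}-${arch}.zip"
```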

  • gandalf_der_12te@discuss.tchncs.de · 3 days ago

    tbf, every time you install basically anything at all, you’re trusting whoever hosts the stuff not to tamper with it. You’re already putting a lot of faith out there, and I’m sure a lot of software out there actually contains crypto-mineware or something else.