• eltimablo@kbin.social

    Ok then, how about considering that this will only serve to benefit the big tech companies, since they’re the ones that can afford the fines? Between cleanup and lawsuits, a breach is usually enough to put a smaller company out of business already. Why make it easier for the big tech companies to maintain power?

    • demesisx@infosec.pub

      Did you even read my comment? I specifically mentioned that the size of the fine could be tied to their market cap.
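
      To illustrate what I mean (a rough sketch; the 2% rate here is just an assumed placeholder for the example, not a concrete proposal):

      ```typescript
      // Rough sketch: a breach fine proportional to market cap.
      // The 2% rate is an assumption, purely for illustration.
      function breachFine(marketCapUsd: number, rate = 0.02): number {
        return marketCapUsd * rate;
      }

      console.log(breachFine(2_000_000_000_000)); // mega-cap firm: $40B
      console.log(breachFine(10_000_000));        // small shop:    $200K
      ```

      The point is that the penalty scales with the company, so it actually hurts a giant without automatically bankrupting a small shop.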

      If you worked in cybersecurity, you’d know that there are established best practices and that it is a WELL UNDERSTOOD FIELD. The main advice everyone gives is to never roll your own cryptography…and that is EXACTLY what many of the hacked companies did.

      Taking a shortcut and hiring shitty devs who just grab some random NPM package for security and call it a day is exactly why there are so many breaches. Just as bridges have to be built to withstand double or triple their rated load, there should be STANDARDS in place whose violation is subject to fines.
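
      For example, here’s a minimal sketch of doing it the boring, standard way (using Node’s built-in crypto module; the function names are mine, just for illustration, not any particular company’s code):

      ```typescript
      import { scryptSync, randomBytes, timingSafeEqual } from "node:crypto";

      // Hash a password with a vetted, built-in KDF (scrypt) and a random salt,
      // instead of a home-rolled hash or a random unvetted NPM package.
      function hashPassword(password: string): string {
        const salt = randomBytes(16);
        const hash = scryptSync(password, salt, 64);
        return `${salt.toString("hex")}:${hash.toString("hex")}`;
      }

      // Verify with a constant-time comparison rather than a plain string check.
      function verifyPassword(password: string, stored: string): boolean {
        const [saltHex, hashHex] = stored.split(":");
        const hash = scryptSync(password, Buffer.from(saltHex, "hex"), 64);
        return timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
      }
      ```

      None of that is exotic — it ships with the runtime — which is why “we couldn’t afford to do it right” doesn’t hold water.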

      Companies like Google would basically have to build SUPER SECURE technologies lest they be bankrupted by a breach.

      In conclusion, please try to remove your tongue from your exploitative employer’s backside.

      • eltimablo@kbin.social

        I did miss that, but again, it’s additional fines on top of an almost guaranteed lawsuit for something that may not even be their fault. If a company got owned by the Heartbleed exploit back when it was first announced and a fix wasn’t yet available, should it be held responsible for that? What about when it gets hit by a vuln that’s been stockpiled for a couple of years and purposely has no fix due to interference from bad actors? There are a lot of situations where fining someone for getting breached doesn’t make sense.

        • demesisx@infosec.pub

          You make great points but my final point is this: if a company simply cannot guarantee protection of user data, it shouldn’t be trusted with user data in the first place.

          • eltimablo@kbin.social

            And I’ll counter with this: no system is perfect, especially when major parts are made by non-employees. Mistakes can and do happen because corporations, regardless of size, are made up of humans, and humans are really good at fucking up.

            • demesisx@infosec.pub

              I’m not trying to get the last word, I swear! 🤣

              Go back to my bridge analogy and test that against what you just said.

              Your comment amounts to: “Oh well, that bridge collapse killed thousands of people. At least we got to let them fail in the crucible of the free market!”

              • eltimablo@kbin.social

                Your bridge analogy falls apart because there already are standards (FIPS, among others) that are shockingly insecure despite having been updated relatively recently, and yet we still have breaches. If the standards were effective, places like AmerisourceBergen, the country’s largest pharmaceutical distributor, wouldn’t be supplementing them with additional safeguards. No standard is going to work perfectly, or even particularly well, for everyone. Bridges still fall down.

                EDIT: Alternatively, there would need to be a provision that lets companies sue the government if they get breached while following its standards, since it was the government that said they were safe.

                • demesisx@infosec.pub

                  Anyone who says “think of the corporations” before they think of the people being PERMANENTLY compromised is a lost soul indeed. You’re blaming the inadequacy of the standards rather than the demagogues working for the corporations who enabled those lax standards in the first place. Of course there will be zero-day exploits that no one could have protected against, but that’s a red herring. It’s something that could easily come out and be weighed when the company is brought before a civil court to decide the fine, obviously!

                  I think we’re too dissimilar for this conversation to bear any fruit. Thanks for the well-constructed devil’s-advocate stance, but you certainly haven’t convinced me.

                  • eltimablo@kbin.social

                    When you say “corporations,” it seems like you’re exclusively counting companies like Google, Meta, etc., whereas I’m also including the mom-and-pop, 15-person operations that would be hit by the same regulations you suggest. Those underdogs are the ones I want to protect, since they’re the only chance the world has of dethroning the incumbents and ensuring that the big guys don’t outlive their usefulness.