• FaceDeer@kbin.social · 1 year ago

    Unless the evil maid is also capable of time travel, there’s no way for her to mess with the timestamps of things once they’ve been published. She could take some pictures with the camera, but she can’t tamper with ones that have already been taken.
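    The scheme being argued over can be sketched in a few lines. This is a hypothetical illustration, not any real camera’s API: the camera hashes each photo and records the digest in an append-only public log, so a later modification produces a different hash with no earlier record.

    ```python
    import hashlib
    import time

    # Hypothetical append-only public log; a real system would use a
    # blockchain or a trusted timestamping service, not an in-memory dict.
    public_log = {}

    def publish_timestamp(image_bytes):
        """Record a digest of the image; only the hash leaves the camera."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        public_log.setdefault(digest, time.time())  # append-only: never overwrite
        return digest

    def existed_by(image_bytes, deadline):
        """True if this exact image was provably recorded by `deadline`."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        return digest in public_log and public_log[digest] <= deadline

    original = b"raw sensor data of the real photo"
    publish_timestamp(original)

    # Any edit changes the hash, so a tampered copy has no record
    # predating the original's.
    tampered = original + b" (edited)"
    ```

    Note that only the digest is published, so the photo itself can stay private until the photographer chooses to release it.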

    • BitSound@lemmy.world · 1 year ago

      The evil maid could take a copy of a legitimate image, modify it, publish it, and say that the original image was faked. If there’s a public timestamp of the original image, just say “Oh, hackers published it before I could, but this one is definitely the original”. The map is not the territory, and the blockchain is not what actually happened.

      Digital signatures and public signatures via blockchain solve nothing here.

      • FaceDeer@kbin.social · 1 year ago

        The evil maid could take a copy of a legitimate image, modify it, publish it, and say that the original image was faked.

        No, she could not: the original image’s timestamp has already been published. The evil maid has no access to the published data.

        “Oh, hackers published it before I could, but this one is definitely the original”

        And then the evil maid is promptly laughed out of the building by everyone who actually understands how this works. Your evil maid is depending on “trust me, bro” whereas the whole point of this technology is to remove the need for that trust.

        • BitSound@lemmy.world · 1 year ago

          original image’s timestamp has already been published

          “Oh the incorrect information was published, here’s the correct info”. Again, the map is not the territory.

          the whole point of this technology is to remove the need for that trust.

          And it utterly fails to achieve that here. I’ll put it another way: You have this fancy camera. You get detained by the feds for some reason. While you’re detained, they extract your private keys and publish a doctored image, purportedly from your camera. The image is used as evidence to jail you. The digital signature is valid and the public timestamp is verifiable. You later leave jail and sue to get your camera back. You then publish the original image from your camera that proves you shouldn’t have been jailed. The digital signature is valid and the public timestamp is verifiable. None of that matters, because you’re going to say “trust me, bro”. Introducing public signatures via the blockchain has accomplished absolutely nothing.

          You’re trying to apply blockchain inappropriately. The one thing that publishing like this does is prove that someone knew something at that time. You can’t prove that only that person knew something. You can prove that someone had a private key at time X, but you cannot prove that nobody else had it. You can prove that someone had an image with a valid digital signature at time X, but you cannot prove that it is the unaltered original.
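          The distinction being drawn here is easy to show concretely. In this sketch an HMAC stands in for the camera’s real signature scheme (a hypothetical simplification): whoever holds the key can produce an equally valid signature for any image, so verification proves key possession at some point, not which photo is genuine.

          ```python
          import hashlib
          import hmac

          # HMAC stands in for the camera's signature scheme in this sketch.
          def sign(key, image):
              return hmac.new(key, image, hashlib.sha256).digest()

          def verify(key, image, tag):
              return hmac.compare_digest(sign(key, image), tag)

          camera_key = b"private key extracted while the owner was detained"

          real = b"the exculpatory original photo"
          doctored = b"the incriminating doctored photo"

          # Both signatures verify: the math cannot distinguish the
          # camera's owner from an attacker holding the extracted key.
          real_tag = sign(camera_key, real)
          fake_tag = sign(camera_key, doctored)
          ```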

          • FaceDeer@kbin.social · 1 year ago

            “Oh the incorrect information was published, here’s the correct info”. Again, the map is not the territory.

            And again, your “attack” relies on the evil maid saying “just trust me bro” and people taking her word on that. The “incorrect information” is provably published before the supposed “correct information” was.

            The whole point of building this stuff into the camera is so that the timestamp can be published immediately. Snap the photo and within seconds the timestamp is out there. If the photographer doesn’t have that enabled then he’s not actually using the system as designed, so he shouldn’t be surprised if it doesn’t work right. If he uses it as designed then it will work.
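            The immediate-publication argument amounts to a plausibility window, which could be sketched like this (the 60-second bound is an assumption for illustration, not any real spec):

            ```python
            # Assumed bound: the camera is designed to publish within
            # seconds of capture, so the acceptable gap is tiny.
            MAX_PUBLISH_DELAY = 60  # seconds (hypothetical)

            def plausible_original(claimed_capture_time, published_time):
                """A record published long after its claimed capture time is suspect."""
                return 0 <= published_time - claimed_capture_time <= MAX_PUBLISH_DELAY
            ```

            Under this rule, a photo published days after its claimed capture loses to one whose timestamp landed in the log seconds after the shot.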

            The one thing that publishing like this does is prove that someone knew something at that time. You can’t prove that only that person knew something.

            So? That’s not the goal here.

            • BitSound@lemmy.world · 1 year ago

              The “incorrect information” is provably published before the supposed “correct information” was.

              Rephrased, some information was published before some other information. Sure, that’s provable, but what of it? How do you know which is correct and which isn’t? You’re back to trust.

              • FaceDeer@kbin.social · 1 year ago

                The labels “incorrect” and “correct” are what the evil maid is claiming. That’s the “just trust me bro” part of your “attack.” It’s implausible in the extreme. If you’re taking photos with a camera that’s designed to publish a timestamp within seconds of the photo being taken, and days later some random person is claiming that the first photo was a “fake” but this new one they’re just posting now is the real one they just didn’t get around to posting until now, who in their right mind will believe that?

                Sure, you can posit a situation where everyone is stupid and doesn’t believe what the tech is telling them. The tech doesn’t matter in a situation like that. That doesn’t mean the tech is poorly designed; it just means that everyone in your posited scenario is stupid.

                • BitSound@lemmy.world · 1 year ago

                  It doesn’t have to be a random person claiming that the first image is fake. You could get your private keys leaked, and then the attacker waits until you’re on vacation in a remote area without wifi/cell, and then they publish an image and say “oh, I got wifi for a bit and published this”. You then get back from vacation, see the fake image, and claim that you didn’t have any wifi/cell service the whole time and couldn’t have published an image. Why should people trust you?

                  Alternatively as I put in another comment, if it’s got the ability to publish stuff straight from the camera, it’s got the ability to be hacked and publish a fake image, straight from the camera.

                  Publishing things on the blockchain adds nothing here. The tech isn’t telling anyone anything useful, because the map is not the territory.

                  These are not implausible scenarios. They wouldn’t happen every day because they’re valuable attack vectors, but they’re 100% possible and would be saved to be used at the right time, like when it really matters, which is the worst possible time to incorrectly trust something.

                  • FaceDeer@kbin.social · 1 year ago

                    It doesn’t have to be a random person claiming that the first image is fake.

                    Then we’re no longer talking about an “evil maid” attack. I’m not going to engage in further goalpost-shifting; you’re just adding and removing from the scenario arbitrarily and demanding that this system satisfy every constraint you throw at it.

                    If you don’t want to use this system, fine, don’t use it. It’s not for you.