A U.K. woman was photographed standing in front of a mirror where her reflections didn’t match, but not because of a glitch in the Matrix. Instead, it’s a simple iPhone computational photography mistake.

  • e0qdk@kbin.social · ↑236 ↓14 · 1 year ago

    This story may be amusing, but it’s actually a serious issue if Apple is doing this and people are not aware of it because cellphone imagery is used in things like court cases. Relative positions of people in a scene really fucking matter in those kinds of situations. Someone’s photo of a crime could be dismissed or discredited using this exact news story as an example – or worse, someone could be wrongly convicted because the composite produced a misleading representation of the scene.

    • falkerie71@sh.itjust.works · ↑61 ↓15 · 1 year ago

      I see your point, though I wouldn’t take it that far. It’s an edge case that has to happen within a very short window.
      Similar effects can be achieved with traditional cameras via rolling shutter.
      If you’re only concerned about the relative positions of different people during a time frame, I don’t think you need to be that worried. Being aware of it is enough.

      • Odelay42@lemmy.world · ↑61 ↓4 · 1 year ago

        I don’t think that’s what’s happening. I think Apple is “filming” over the course of the seconds you have the camera open, and uses the press of the shutter button to select a specific shot from the hundreds of frames that have been taken as video. Then, some algorithm appears to be assembling different portions of those shots into one “best” shot.

        It’s not just a mechanical shutter effect.
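
        A minimal sketch of the kind of burst-then-composite pipeline being described, in Python with numpy. The tile-based scoring and every name here are illustrative assumptions, not Apple’s actual algorithm:

        ```python
        import numpy as np

        def sharpness(tile: np.ndarray) -> float:
            # Variance of row-to-row differences as a crude detail/sharpness proxy.
            return float(np.var(np.diff(tile.astype(np.float64), axis=0)))

        def composite_best(frames: list[np.ndarray], tile: int = 64) -> np.ndarray:
            # Take each tile from whichever burst frame scores highest there --
            # which is exactly how regions captured at different instants can
            # end up side by side in one "photo".
            h, w = frames[0].shape[:2]
            out = frames[0].copy()
            for y in range(0, h, tile):
                for x in range(0, w, tile):
                    best = max(frames, key=lambda f: sharpness(f[y:y+tile, x:x+tile]))
                    out[y:y+tile, x:x+tile] = best[y:y+tile, x:x+tile]
            return out
        ```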

        • falkerie71@sh.itjust.works · ↑16 ↓6 · 1 year ago

          I’m aware of the differences. I’m just pointing out that similar phenomena have been discussed ever since rolling shutter artifacts became a thing. It still only takes milliseconds for an iPhone to finish taking its plethora of photos to composite. For the majority of forensic use cases, it’s a non-issue imo. People don’t move quickly enough to change relative positions substantially irl.

          • Odelay42@lemmy.world · ↑21 ↓7 · 1 year ago

            Did you look at the example in the article? It’s clearly not milliseconds. It’s several whole seconds.

            • falkerie71@sh.itjust.works · ↑13 ↓5 · 1 year ago (edited)

              You don’t need a few whole seconds to put an arm down.

              Edit: I should rephrase. I don’t think computational photography algorithms would risk compositing photos that are whole seconds apart. In well-lit environments, one photo needs only 1/100 of a second or less to expose properly. Using photos that are temporally too far apart risks objects moving too much in the frame, and thus failing to composite.
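
              Rough numbers behind that argument (all figures are assumptions for illustration):

              ```python
              exposure = 1 / 100          # s, one well-lit exposure
              burst_frames, fps = 9, 30   # plausible burst size and capture rate (assumed)
              burst_span = burst_frames / fps
              print(f"burst spans ~{burst_span:.2f}s; an arm drop takes ~0.5s")
              # burst spans ~0.30s; an arm drop takes ~0.5s -> the motion can
              # straddle the composited frames even with no whole-second gaps
              ```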

              • Odelay42@lemmy.world · ↑16 ↓10 · 1 year ago

                There’s three different arm positions in a single picture. That doesn’t happen in the blink of an eye.

                The camera is taking many frames over a relatively long time to do this.

                This is nothing at all like rolling shutter, and it’s very obvious from looking at the example in the article.

                • LifeInOregon@lemmy.world · ↑8 ↓1 · 1 year ago

                  Those arm positions occur over the course of a fluid motion in a single second. How long does it take for you to drop your hands to your side or raise them to clasped from the side? It doesn’t take me more than about half a second as a deliberate movement.

                • llii@feddit.de · ↑11 ↓5 · 1 year ago

                  It takes you several seconds to move your arm? I hope you don’t do manual work.

                  Also, have you used the iOS camera app before? You can see how long it takes for the iPhone to take multiple shots for the always-on HDR feature, and it isn’t several seconds.

                • Decoy321@lemmy.world · ↑6 ↓3 · 1 year ago

                  There’s three different arm positions in a single picture. That doesn’t happen in the blink of an eye.

                  It’s a lot faster than you might be expecting. I found it helps to visualize it in person. Go to a mirror and start with your hands together like in the right side mirror. Now let your arms down naturally, to the position in the left side mirror. If you don’t move your arms at the same exact time, one elbow will still be parallel to the floor while the other elbow has extended already, just like in the middle position.

                  Thus, we can tell that the camera compiled the image from right to left.

                • falkerie71@sh.itjust.works · ↑5 ↓3 · 1 year ago

                  I can also see the three arm positions being a single motion, just in three different time frames. If it really takes seconds to complete a composite, then it should also be very easy to reproduce, and not something so rare it makes it into the news. If I still can’t convince you, I guess we agree to disagree then.

      • Decoy321@lemmy.world · ↑18 · 1 year ago

        We might be exaggerating the issue here. Fallibility has always been an issue with court evidence. Analog photos can be doctored too.

        • curiousaur@reddthat.com · ↑5 · 1 year ago

          Sure, but smartphones now automatically doctor every photo you take. The person who took the photo might not even know it was doctored and think it represents the truth.

          • Decoy321@lemmy.world · ↑1 · 1 year ago

            Fair point, but I still think we’re exaggerating the amount of doctoring that’s being done by the phones. There’s always been some level of discrepancy between real life subjects and the images taken of them.

            It’s just a tool creating media from sensor data. Those sensors aren’t the same as our eyes, and their processors don’t hold a candle to our own brains.

            In the interest of not rambling, let’s look back at early black and white cameras. When people looked at those photos, did they assume the world was black and white? Or did they acknowledge this as a characteristic of the camera?

      • ElderWendigo@sh.itjust.works · ↑17 · 1 year ago (edited)

        All digital photography is computational. I think the word you’re looking for is composite, not computational.

          • ElderWendigo@sh.itjust.works · ↑8 · 1 year ago

            Film is also subject to manipulation in the development stage even if you avoid compositing, e.g. dodging and burning. Photographic honesty is an open and active philosophical debate that has been going on since the medium’s inception. It’s not like you can really draw a line in the sand and blanketly say admissible or not, although I’m sure established guidelines would help. Ultimately, it’s an argument about the validity of evidence that needs to be made on a case-by-case basis. The manipulations involved need to be fully identified and accounted for in those discussions.

      • Blackmist@feddit.uk · ↑9 · 1 year ago

        With all the image manipulation and generation tools available to even amateurs, I’m not sure how any photography is admissible as evidence these days.

        At some point there’s going to have to be a whole bunch of digital signing (and timestamp signatures) going on inside the camera for things to be even considered.
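
        A toy sketch of what such in-camera signing could look like, using the Python cryptography package; the payload layout and key handling are assumptions for illustration:

        ```python
        import time
        from cryptography.hazmat.primitives.asymmetric import ed25519

        device_key = ed25519.Ed25519PrivateKey.generate()  # would live in secure hardware

        def sign_capture(image_bytes: bytes) -> tuple[bytes, float]:
            # Bind the pixels to a capture time; any later edit breaks the signature.
            ts = time.time()
            return device_key.sign(image_bytes + repr(ts).encode()), ts

        def verify_capture(image_bytes: bytes, sig: bytes, ts: float) -> bool:
            try:
                device_key.public_key().verify(sig, image_bytes + repr(ts).encode())
                return True
            except Exception:
                return False
        ```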

      • Ook the Librarian@lemmy.world · ↑24 · 1 year ago

        This was important in the Kyle Rittenhouse case. The zoom resolution was interpolated by software. It wasn’t AI per se, but because a jury couldn’t be relied upon to understand a black-box algorithm and its possible artifacts, the zoomed video was disallowed.

        (This in no way implies that I agree with the court.)

        • Rob T Firefly@lemmy.world · ↑3 · 1 year ago

          The zoom resolution was interpolated by software. It wasn’t AI per se

          Except it was. All the “AI” junk being hyped and peddled all over the place as a completely new and modern innovation is really just the same old interpolation by software, albeit software which is fueled by bigger databases and with more computing power thrown at it.

          It’s all just flashier autocorrect.

          • Ook the Librarian@lemmy.world · ↑1 · 1 year ago

            As far as I know, nothing about AI entered into arguments. No precedents regarding AI could have been set here. Therefore, this case wasn’t about AI per se.

            I did bring it up as relevant because, as you say, AI is just an over-hyped black box. But that’s my opinion, with no case law to cite (ianal). So to say that a court would or should feel that AI and fancy photo editing are the same thing is misleading. I know that wasn’t your point, but it was part of mine.

        • wagoner@infosec.pub · ↑2 · 1 year ago

          I watched that whole court exchange live, and it helped the defendant’s case that the judge was computer illiterate.

          • Ook the Librarian@lemmy.world · ↑2 · 1 year ago

            As it usually does. But the court’s ineptitude should favor the defense. It shouldn’t be an arrow in a prosecutor’s quiver, at least.

    • Jarix@lemmy.world · ↑12 ↓1 · 1 year ago

      This isn’t an issue at all; it’s a bullshit headline. And it worked.

      This is the result of shooting in panorama mode.

      In other news, the sky is blue

  • slaacaa@lemmy.world · ↑83 ↓20 · 1 year ago (edited)

    Uhm, ok?

    The way the girl’s post is written, it’s like she found out Apple made camera lenses from orphans’ retinas (“almost made me vomit on the street”). I assumed it was well known that the iPhone takes many photos and stitches the pic together (hence the usually great quality). Now the software made a mistake, resulting in a definitely cool/interesting pic, but that’s it.

    Also, maybe stop flailing your arms around when you want your pic taken in your wedding dress.

  • jtk@lemmy.sdf.org · ↑62 ↓17 · 1 year ago

    Who wants photos of a fake reality? Might as well just AI generate them.

    • LifeInOregon@lemmy.world · ↑51 ↓4 · 1 year ago

      Generally the final photo is an accurate representation of a moment. Everything in this photo happened. It’s not really generating anything that wasn’t there. You can sometimes get similar results by exploiting the rolling shutter effect.

      https://camerareviews.com/rolling-shutter/

      It’s not like they’re superimposing an image of the moon over a night sky photo to fake astrophotography or something.
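
      For illustration, the rolling shutter effect can be simulated by pulling each scanline from a progressively later frame of a burst; a toy numpy sketch, assuming frames is a time-ordered list of equally spaced frames:

      ```python
      import numpy as np

      def rolling_shutter(frames: list[np.ndarray]) -> np.ndarray:
          # Each scanline is exposed slightly later than the one above it,
          # so a moving subject gets sheared across the frame.
          h = frames[0].shape[0]
          out = np.empty_like(frames[0])
          for row in range(h):
              t = row * (len(frames) - 1) // max(h - 1, 1)  # top early, bottom late
              out[row] = frames[t][row]
          return out
      ```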

    • hitmyspot@aussie.zone · ↑31 ↓8 · 1 year ago

      A photo is a fake reality. It’s a capture of the world from the perspective of a camera that no person has ever seen.

      Sure we can approximate with viewfinders and colour match as much as possible but it’s not reality. Take a photo of a light bulb, versus look at a light bulb, as one obvious example.

      This is just one more way of trading timing consistency between different parts of the photo for an overall better capture of what we want to see in it.

      • Gabu@lemmy.world · ↑10 ↓7 · 1 year ago (edited)

        Your argument makes literally no sense. You’re baselessly assuming a person’s perspective is a prism of reality. There’s no such thing - in fact, I’d rather trust reality as detected by the sensors of a camera, with their known flaws, attributes and parameters, than trust the biological sensors at the back of your eyes or the biological wiring to the inside of your skull.

        • hitmyspot@aussie.zone · ↑5 ↓1 · 1 year ago

          Yes, but that’s the reality from the perspective of the camera, which will be slightly different from the perspective of the person operating it.

          If the camera is out of focus, is that more or less accurate than a phone camera choosing the least out-of-focus frame, even if it was taken half a second after you clicked?

          There is no objective reality in pictures or photos or art, only what we perceive. We now value real-life action shots; when cameras needed long exposures, still, posed portraits were the necessity. Both show different versions of reality.

          Again, you’re saying that the camera has flaws, ergo it’s imperfect, but in a known way. It’s the same for phone photos. They are imperfect, but in a known way that leads to more frequently desirable pics.

      • dan1101@lemm.ee · ↑1 · 1 year ago

        However, I think most cameras and most people have traditionally wanted the most accurate photos possible. If the camera is outputting fiction, that can be a big problem.

        • nyan@lemmy.cafe · ↑4 · 1 year ago

          Oh, dear. No, in most cases people seem to want the prettiest photos possible. Otherwise digital filters wouldn’t be so popular.

    • Chozo@kbin.social · ↑17 · 1 year ago (edited)

      To their credit, it’s not “fake”. This isn’t from generative AI, this is from AI picking from multiple different exposures of the same shot and stitching various parts of them together to create the “best” version of the photo.

      Everything seen in the photo was still 100% captured in-lens. Just… not at the exact same time.

    • LWD@lemm.ee · ↑13 ↓1 · 1 year ago

      I’m conflicted, because sometimes AI is able to enhance a picture in a way that it better represents how you see something versus how the camera actually takes the photo. For example, detecting whether you are indoors or outdoors will cause a very rudimentary tone shift to occur, making colors more accurate to whatever the sensors otherwise take in.

      It’s stuff like the fake moon detail that really starts to weird me out.

    • ByGourou@sh.itjust.works · ↑3 · 1 year ago

      It’s not the case, as someone already explained, but also, who cares about the photo being fake? People take photos to show to other people and to keep a memory, and that photo looking better than reality is usually not an issue. I would still prefer having the choice with a toggle somewhere, which we will never get with an Apple product.

    • elint@programming.dev · ↑3 ↓4 · 1 year ago

      You think that’s absurd? Have you never gotten married? Wedding photos are extremely important and while “she almost vomited” may be hyperbole, I can definitely understand being very pissed off if that was the only version of the photo. Our wedding photographer whitened our teeth in our photos and we requested that they undo that so we look like ourselves. The sentiment was nice, but we didn’t want that. I would have been pretty unhappy if they hadn’t held onto the originals and were unable to revert our teeth back to their normal shades. Photos of our bridal showers and dress hunting were nearly as important as the wedding photos themselves. I can understand being upset with this undesired result.

  • NaoPb@eviltoast.org · ↑32 ↓1 · 1 year ago

    Ah yes, I remember noticing it would make like a short video instead of one picture, back when I had an iPhone. I turned that function off because I didn’t see the benefits.

    • KairuByte@lemmy.dbzer0.com · ↑16 ↓1 · 1 year ago

      That’s not what this is. I also turned that off, it’s called “Live Photo” or something like that. Honestly I find it to be a dumb feature.

      What this is, is the iPhone taking a large number of images and stitching them together for better results.

      • jol@discuss.tchncs.de · ↑7 ↓6 · 1 year ago

          It’s not dumb. It lets you select the best moment within a 1-2 second margin before or after you took the picture.

        • KairuByte@lemmy.dbzer0.com · ↑12 · 1 year ago

          No, these are literally just short videos. You interact with them like photos, you see them as photos, half the time people sending them think they are photos, but when you tap all the way into them they are a short video. They are absolutely not presented as a “choose your exact frame” pre-photo things, they are presented as photos.

          • Blue and Orange@lemm.ee · ↑2 · 1 year ago

            Yeah “Live photo” really is just an Apple marketing term. You interact with them in a certain way on iOS and they are presented in a certain way, but anywhere else they’re just very short videos.

          • locuester@lemmy.zip · ↑3 ↓5 · 1 year ago

            Wrong. Pretty crazy, but it does let you change which frame is the photo. Click edit, then hit the Live Photo icon next to “cancel”.

            • KairuByte@lemmy.dbzer0.com · ↑2 ↓1 · 1 year ago

              That isn’t the point of a Live Photo; that’s just a “feature.” Similar to how YouTube lets you choose a thumbnail for a video, but that’s not really the point of YouTube.

              • locuester@lemmy.zip · ↑3 ↓1 · 1 year ago

                Per Apple support:

                With Live Photos, your iPhone records what happens 1.5 seconds before and after you take a picture. Then you can pick a different key photo, add a fun effect, edit your Live Photo, and share with your family and friends.

                So it’s actually the first example of what Live Photo is for.

                If you didn’t even know about this, don’t feel bad. I’m an Apple fanboy and my daughter just showed me that it allowed you to do this “different key photo” last month. Kids are good for that.
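
                That pre-roll behaviour implies a ring buffer that is always filling; a toy sketch of the idea (the 30 fps figure and the function names are assumptions, not Apple’s documented internals):

                ```python
                from collections import deque

                FPS = 30                                  # assumed capture rate
                pre_roll = deque(maxlen=int(1.5 * FPS))   # always holds the last 1.5 s

                def on_frame(frame):
                    pre_roll.append(frame)                # older frames fall off

                def on_shutter(capture_more):
                    before = list(pre_roll)               # the 1.5 s already recorded
                    after = capture_more(int(1.5 * FPS))  # keep recording 1.5 s more
                    return before + after                 # any frame can be the key photo
                ```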

                • KairuByte@lemmy.dbzer0.com · ↑3 ↓3 · 1 year ago

                  I’m aware that it’s possible, but that isn’t part of the onboarding or anything. What I mean is, it’s an add-on. It was never part of the original iteration, which was just “look, moving Harry Potter photos.”

                  It’s a gimmick that doesn’t even work cross-device, because it’s literally just a short video.

  • aeronmelon@lemm.ee · ↑21 ↓7 · 1 year ago

    It’s a really cool discovery, but I don’t know how Apple is supposed to program against it.

    What surprises me is how much of a time range each photo has to work with. Enough time for Tessa to put down one arm and then the other. It’s basically recording a mini-video and selecting frames from it. I wonder if turning off things like Live Photo (which retroactively starts the video a second or two before you actually press record) would force the Camera app to select from a briefer range of time.

    Maybe they could combine facial recognition with post-processing to tell the software that, if it thinks it’s looking at multiple copies of the same person, it needs to time-sync the sections of frames chosen for the final photo. It wouldn’t be foolproof, but it would be better than nothing.
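
    A rough sketch of that time-sync idea, assuming a tile-based composite and a face detector that has already flagged the relevant tiles; purely illustrative, not Apple’s algorithm:

    ```python
    def timesync_faces(tile_choice: dict[int, int], face_tiles: list[int]) -> dict[int, int]:
        # tile_choice maps tile id -> index of the burst frame it was pulled from;
        # face_tiles lists tiles a (hypothetical) detector says show the same person.
        frames_used = {tile_choice[t] for t in face_tiles}
        if len(frames_used) > 1:              # person stitched from several instants
            anchor = tile_choice[face_tiles[0]]
            for t in face_tiles:
                tile_choice[t] = anchor       # force one moment in time
        return tile_choice
    ```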

    • xantoxis@lemmy.world · ↑44 ↓8 · 1 year ago

      Program against it? It’s a camera. Put what’s on the light sensor into the file, and you’re done. They programmed it to make this happen, by pretending that multiple images are the same image.

      • Nine@lemmy.world · ↑14 ↓11 · 1 year ago

        That’s oversimplified. There’s only so much you can get from a sensor at the sizes used in mobile devices. To compensate, there’s A LOT of processing that goes on. Even higher-end DSLR cameras do post-processing.

        Even shooting RAW like you’re suggesting involves some amount of post-processing, for things like lens corrections.

        It’s all that post-processing that allows us to have things like HDR images, for example. It also lets us compensate for various lighting and motion changes.

        Mobile phone cameras are more about the software than the hardware these days.
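
        As one example of what that processing buys, a minimal exposure-fusion sketch in numpy; real pipelines also align frames and weight pixels by confidence:

        ```python
        import numpy as np

        def fuse_exposures(frames: list[np.ndarray], exposures: list[float]) -> np.ndarray:
            # Normalize each frame by its exposure time, then average: long
            # exposures contribute shadow detail, short ones keep highlights.
            acc = np.zeros(frames[0].shape, dtype=np.float64)
            for frame, t in zip(frames, exposures):
                acc += frame.astype(np.float64) / t
            acc /= len(frames)
            return np.clip(acc * 255.0 / acc.max(), 0, 255).astype(np.uint8)
        ```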

        • cmnybo@discuss.tchncs.de · ↑14 ↓3 · 1 year ago

          With a DSLR, the person editing the pictures has full control over what post processing is done to the RAW files.

          • Nine@lemmy.world · ↑2 ↓1 · 1 year ago

            Correct, I was referring to RAW shot on mobile, not a proper DSLR. I guess I should have been more clear about that. Sorry!

            • uzay@infosec.pub · ↑2 · 1 year ago

              You might be conflating a RAW photo file and the way it is displayed. A RAW file isn’t even actually an image file; it’s a container holding the sensor pixel information, metadata, and a pre-generated JPG thumbnail. To actually display an image, the viewer application either has to interpret the sensor data into an image (possibly with changes according to its liking) or just display the contained JPG. On mobile phones I think it’s most likely that the JPG is generated with pre-applied post-processing and displayed that way. That doesn’t mean the RAW file has any post-processing applied to it, though.
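
              What that looks like in code, using the rawpy package (the file path is a placeholder):

              ```python
              import rawpy

              with rawpy.imread("photo.dng") as raw:
                  thumb = raw.extract_thumb()   # pre-generated preview most viewers show
                  sensor = raw.raw_image        # raw sensor pixel values, untouched
                  rgb = raw.postprocess()       # demosaic/"develop": the interpretive
                                                # choices happen here, not in the file
              ```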

        • randombullet@feddit.de · ↑2 ↓1 · 1 year ago

          RAW files from cameras have metadata that tells RAW converters which color profile and lens they were taken with, but no camera worth using professionally applies native corrections to RAW files. However, in special cases, as with high-distortion lenses, the RAW files have a distortion profile enabled by default.

          • Nine@lemmy.world · ↑1 ↓2 · 1 year ago

            Correct, I was referring to RAW shot on mobile devices, not a proper DSLR. That was my observation based on using the iPhone and Android RAW formats.

            This isn’t my area of expertise so if I’m wrong about that aspect too let me know! 😃

          • Nine@lemmy.world · ↑4 ↓2 · 1 year ago

            So what was I wrong about? I’m always happy to learn from my mistakes! 😊

            Do you have some whitepapers I can reference too?

              • Nine@lemmy.world · ↑2 ↓1 · 1 year ago

                Gonna provide more information or is this just a trust me bro situation?

                • SpaceNoodle@lemmy.world · ↑3 ↓1 · 1 year ago (edited)

                  Not sure what I’d have to gain from just lying on the Internet about inconsequential things.

                  Also not sure I can disclose too many technical details due to NDAs, but I’ve worked on camera stacks on multiple Android-based devices. Yes, there’s tons of layers of firmware and software throughout the camera stack, but it very importantly does not alter consequential elements of images, and concentrates on image quality, not image contents.

                  While the sensors in smartphones might not be as physically large as those in DSLRs - at least, in general - there’s still significant quality in the raw sensor data that does not inherently require the sort of image stitching that Apple is doing.

            • SpaceNoodle@lemmy.world · ↑1 ↓3 · 1 year ago (edited)

              🙄

              Edit: oh, you’re the actual illiterate person from another post. Thanks for stalking me.

              • schmidtster@lemmy.world · ↑1 · 1 year ago (edited)

                You think too highly of yourself.

                When you comment-spam just about every thread, you’ll come across the same people multiple times.

      • ricecake@sh.itjust.works · ↑3 ↓2 · 1 year ago

        What’s on the light sensor when? There’s no shutter; it can just capture a continuous stream of light indefinitely.

        Most people want a rough representation of what’s hitting the sensor when they push the button. But they don’t actually care about the sensor, they care about what they can see, which doesn’t include the blur from the camera wobbling, or the slight blur of the subject moving.
        They want the lighting to match how they perceived the scene, even though that isn’t what the sensor picked up, because your brain edits what you see before you comprehend the image.

        Doing those corrections is a small step to incorporating discontinuities in the capture window for better results.
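
        One concrete version of “pick what they can see”: score each frame in the capture window for sharpness and keep the best, here with OpenCV’s Laplacian blur metric (the selection policy is an illustration, not any vendor’s actual logic):

        ```python
        import cv2

        def pick_sharpest(frames):
            # Variance of the Laplacian is a standard blur metric: low variance
            # means few edges survived, i.e. camera wobble or subject motion.
            def score(frame):
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                return cv2.Laplacian(gray, cv2.CV_64F).var()
            return max(frames, key=score)
        ```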

    • Petter1@lemm.ee · ↑1 ↓4 · 1 year ago

      Or maybe just don’t move your arm for literally less than a second while the photo(s) is/are taken… Moving your arms down takes less than a second if you just let them fall by gravity. It’s a funny pic nonetheless.

  • restingboredface@sh.itjust.works · ↑13 · 1 year ago

    I may have missed this in the comments already, but it is really important to note here that the article says the photo was taken using panorama mode, which is why the computational photography thing is even an issue. If you have ever used panorama mode, you should go in expecting some funkiness, especially if someone in the shot is moving, as the bride apparently was when this was shot.

  • orion2145@lemmy.world · ↑10 · 1 year ago

    There’s a note at the end of the article that says it was taken using pano mode, so this is doubly unsurprising, despite the Instagram caption claiming it wasn’t.

  • kirklennon@kbin.social · ↑5 ↓4 · 1 year ago (edited)

    This person is an actress and comedian. This is not an iPhone error; it’s just a manually-edited photo from three separate takes that she pretended came out of the phone as-is. It’s a hoax for laughs/attention.