• REDACTED@infosec.pub

    Because the alternative for me is googling the question with “reddit” added at the end half of the time. I still do that a lot. For more complicated or serious problems/questions, I’ve set it to only use the search function and navigate scientific sites like NCBI and PubMed while using deep think. It then gives me the sources; I tend to randomly cross-check the relevant information, but so far I personally haven’t noticed any errors. You gotta realize how much time this saves.

    When it comes to data privacy, I honestly don’t see the potential dangers in the data I submit to OpenAI, but this is of course different for everyone else. I don’t submit any personal info or talk about my life. It’s a tool.

    • ganryuu@lemmy.ca

      Simply from the questions you ask and the way you ask them, they are able to infer a lot of information. Just because you’re not giving them the raw data about you doesn’t mean they can’t get at least some of it. They’ve gotten pretty good at that.

      • REDACTED@infosec.pub

        I really don’t have any counter-arguments, as you have a good point; I tend to turn a blind eye to that uncomfortable fact. It’s worth it, I guess? Realistically, I’m having a hard time thinking of worst-case scenarios.

    • verdigris@lemmy.ml

      If it saves time but you still have to double-check its answers, does it really save time? At least many reddit comments call out their own uncertainty or link to better resources; I can’t trust a single thing AI outputs, so I just ignore it as much as possible.