ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future::AI for the smart guy?

  • 👁️👄👁️@lemm.ee · 1 year ago
    This is literally the opposite. It’s nerfed to oblivion because of stupid “morals” decided by a huge corporation that we have zero input into. They’ve got to stay advertiser-friendly, after all.

    Morals/ethics in AI are just bad. It’s also used as an excuse to ban open-source AI, since you can run uncensored models on it. Uncensored models are awesome, btw.

    • solstice@lemmy.world · 1 year ago
      You’re the first person I’ve ever heard say that morals and ethics in AI are bad. How can you possibly say that? I’ll hear your response before challenging it, beyond my initial skepticism of course.

      • 👁️👄👁️@lemm.ee · 1 year ago
        It’s a tool that’s not going anywhere. We have to adapt; there is no other choice. Ethics will not stop bad guys from doing bad things. It will stop normal people from doing things because they don’t fit what corporations deem acceptable. Competition gets banned because other corporations deem it unethical by their standards.

        Did you weigh in on, or ever even see, a public vote on what OpenAI determined their AI is allowed to do? Is what you deem ethical in line with what advertisers deem ethical? Are people allowed to ask unethical questions?

        Again, this is my point with open source as well. Why would they allow open-source alternatives to exist if they can ban them preemptively in the name of ethics, since anyone can inevitably modify a model to be uncensored? (This already happens.)

        “Ethics” becomes this ambiguous thing that can be used to stomp out competition without having to justify the changes. Maybe you’re concerned about someone asking an LLM how to create a bomb. The LLM shouldn’t answer because it shouldn’t have that information in the first place, which is really a data-scraping issue. A lot of the dangerous stuff that could be generated is there because it was public and got scraped. It’s already out there.

        You can already have the LLM not tell people to kill themselves without forcing ethics into it, just by steering it in the right direction. This even exists in the existing uncensored models, so it’s clearly not a censorship issue. Maybe this is a moral thing, and my original comment should have omitted morals and just said ethics.

        “Ethics” is a very ambiguous topic. I challenge you to think specifically about what things should be banned in the name of ethics. Saying ethics in AI is not good does not imply AI should be unethical (looking at you, DAN lol). What specific things should be banned that don’t result from inappropriate data scraping? And if they do, is that an ethics problem, or a problem of unfettered data scraping nonconsensually collecting obscene information it shouldn’t have in the first place?

      • HandwovenConsensus@lemm.ee · 1 year ago

        Well, I’ll be the second. Like all tools, generative AI is going to be used for both good and evil purposes. Frankly, I’m not comfortable with a large corporation deciding what is and isn’t ethical for all of humanity. Ideally, it would do what the user asked of it, like all other tools, and society would work to control the bad actors, not OpenAI. Any AI doomsday scenario you can picture gets worse when one party has complete control over the AI technology.

        I think it’s important that we support unrestricted open source AI, just as it’s important we support federated social media like lemmy.

        • solstice@lemmy.world · 1 year ago
          AGI isn’t just a tool, though; it’s theoretically an intelligent entity that could have its own agenda. Armed with intelligence far superior to any human’s, it would be a potential threat. Should we not tightly control it? I know ChatGPT is FAR from achieving AGI, but ethics are definitely something that will need to be addressed as the tech develops.