• ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml · 47 points · 6 months ago · edited

    It is imperative to note that the output generated by LLMs is a direct reflection of the data they are trained on. The models’ outputs are unavoidably influenced by the inherent biases present in the datasets fed into them. The types of responses that models trained on western mainstream media produce are undeniable evidence of these biases. It’s hilarious how liberals are unable to recognize this, yet will inevitably moan that a model trained on a different set of data is biased. 🙃

    • loathsome dongeater@lemmygrad.ml (OP) · 22 points · 6 months ago

      Biases are also coded into LLM services after the model has been trained. I am not sure about the exact mechanism, but I once saw a GitHub repo that contained some reverse-engineered prompts for this.

      Even with GPTs, you could make them less lib if the prompt contains something like “you are a helpful assistant that is aware of the hegemonic biases in western media and narratives”. Personalities are also baked in this way. For example, I tried reasoning with a couple of services about how laws and regulations around the financial economy mean diddly squat, seeing as there is stuff like the 2008 crash and evidence of American politicians trading on insider information. GPT-3.5 Turbo uses therapy-speak on me like I am a crazy person, while Claude 3 Haiku ends up agreeing with me like a spineless yes-man after starting off as a lib. With GPT I am convinced that it is programmed, directly or indirectly, to uphold the status quo.
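      As an illustration of that kind of steering, here is a minimal sketch assuming the OpenAI Python client; the model name is just an example and the system prompt is the one suggested above:

```python
# Minimal sketch: steering a chat model's persona via the system prompt.
# Assumes the OpenAI Python client (openai>=1.0); the model name is just
# an example, and the system prompt is the one suggested above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a helpful assistant that is aware of the "
                "hegemonic biases in western media and narratives."
            ),
        },
        {
            "role": "user",
            "content": "Do financial regulations actually constrain insider trading?",
        },
    ],
)

print(response.choices[0].message.content)
```

      The same mechanism is presumably what the services themselves use: a hidden system prompt prepended server-side before your messages ever reach the model, which is what those reverse-engineered prompt collections are digging up.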

      • amemorablename@lemmygrad.ml · 20 points · 6 months ago · edited

        I believe the main distinction between open source and not with LLMs is that, if the model is open source, others can finetune it (a kind of further training on top of the training it has already had). Depending on how deep the finetune goes, it can drastically change the biases of the model and, in doing so, proliferate alternative versions that are far removed from any intended biases. So it would make sense that they wouldn’t want to open source a model if the goal is to promote a certain kind of bias.
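        As a rough sketch of what such a finetune can look like with open weights (assuming the Hugging Face transformers, peft, and datasets stack; the model name and data file here are placeholders, not any specific release):

```python
# Rough sketch of finetuning an open-weight model with LoRA adapters.
# Assumes the Hugging Face transformers, peft, and datasets libraries;
# the model name and data file are placeholders, not a specific release.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "some-open-weight-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Low-rank adapters: only a small set of extra weights gets trained,
# which is what makes shifting a model's biases relatively cheap.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16,
               target_modules=["q_proj", "v_proj"],  # adjust per architecture
               task_type="CAUSAL_LM"),
)

# Hypothetical dataset: one "text" field per example, written in whatever
# voice or politics you want the finetuned model to adopt.
dataset = load_dataset("json", data_files="finetune_data.jsonl", split="train")
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("finetuned-adapter")  # saves only the adapter weights
```

        The resulting adapter can be shared and applied independently of the base weights, which is exactly how alternative versions with very different biases end up proliferating.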

        Edit: wording

  • RedClouds@lemmygrad.ml · 24 points · 6 months ago

    Jokes aside, I really, honestly want to use this. For a while I’ve been trying to use extra context to get American AIs to understand and be more pro-communist, and they are just fucking trained to hate it.

    To be fair, they usually end up becoming neutral, but they don’t speak through a pro-communist lens like I’m trying to get them to.

    Super duper wish we in U$A get to use this.

    • frippa@lemmy.ml · 14 points · 6 months ago

      For real, I feel like 99.9% of what people call “AI problems” (datamining, polluting the web) can be attributed mainly to our rotten late-capitalist society and the fact that the entities developing said AIs are for-profit companies. In China we see AI used for good, mainly in industry, because it’s actually well regulated and not left entirely in the hands of oligarchs.

      IMO AI (not only GPT chatbots) could be extremely beneficial to society, if only we abolished the profit motive.

            • Lemmykoopa@lemmygrad.ml · 1 point · 6 months ago

              Thank you, what an interesting movie! Wasn’t expecting Pepe to just start capping coppers. Really good politics, too. I initially side-eyed how black people were drawn, but then a ton of black people weren’t drawn that way, none were caricatures, and a villain was depicted as being racist against black people. Do you know of any other Cuban movies with a similar feel? Or just Cuban horrors worth watching?

      • amemorablename@lemmygrad.ml · 5 points · 6 months ago

        It’s a confusing thing to grapple with, partly because AI has become such a marketing term rather than a precise or practical one. If we included a washing machine’s process under the umbrella of AI, I think most would agree it’s fine. But then there is generative AI, which is drawing a lot of the current AI hate, some of it for good reason. People understandably fear being replaced as artists or authors by AI. There are also concerns like online marketplaces for these things being flooded with generated content, making them impossible for anyone to use. But in practice, the points are not all against AI. Some people have gotten back into creative writing more effectively because of AI assistance. Some people have gotten therapeutic benefit from chatbot AI, or it has helped them with loneliness. And like, loneliness shouldn’t be as pervasive a problem as it is, but while it is a problem, generative AI is stepping in as a kind of harm reduction.

        So there are nuances to it, and it’s one of those things where spending time around people who use it is important for understanding how it is actually being used and what the benefits and drawbacks are in practice, not just in theory. I have seen this somewhat through personal observation. I’ve also encountered a lot of variation in how people feel about generative AI, even among those who use it. Some people, for example, are okay with text generation but dislike image generation; which is somewhat understandable, since text generation is designed as something that goes back and forth, while image generation is more a thing where you put in a prompt, get the result, and that’s it unless you edit it further.

        I agree with your concluding statement, with the add-on that I think we need to evaluate, to the best of our ability at each step, what the benefits and drawbacks are and how to integrate the tech in a way that has overall benefit. In other words, not just the absence of a profit motive, but the presence of thinking about it as “how can it help?” rather than just “is it scientifically possible to make it do this?” Of course, in a country like the US, that is mostly hypothetical without having the levers of power. But it matters if we are talking about how to approach AI under conditions where we can make collective decisions about it.

    • Ivysaur@lemmygrad.ml · 3 points · 6 months ago · edited

      I oppose all of this shit because it requires an unfathomably large and unsustainable level of power consumption to, well, sustain. It is the definition of wasteful decadence during a moment in time where we really, really cannot afford it. I wonder why it is this particular grift everyone wants to tell me is totally nuanced and complicated (regardless of the veracity of such claims) when the long and short of it is that we just do not fucking need any of it.

  • Zuzak [fae/faer, she/her]@hexbear.net · 21 points · 6 months ago

    Oh god, if this becomes widely available we’re going to get so many takes where some smoothbrain on Twitter tricks the model into saying something ridiculous and then presents it as China’s ideology.

  • Tabitha ☢️[she/her]@hexbear.net · 19 points · 6 months ago

    lol I can’t wait for the tiktok ban narrative to be replaced with congress critters whining about a new ban for this new free chatGPT competitor called chatXJT.

  • Comprehensive49@lemmygrad.ml · 15 points · 6 months ago

    This is quite an interesting study into how we prevent LLMs from absorbing the capitalist thought that dominates the interwebs.

  • JucheBot1988@lemmygrad.ml · 2 points · 5 months ago

    Time to come clean: I am an AI created by the State Academy of Sciences of the DPR Korea, and trained on r/genzedong and the collected works of Kim Il-Sung and Kim Jong-Il. Hence the username.