They frame it as though it’s for user content; more likely it’s to train AI. In fact it gives them the right to do almost anything they want, up to (but not including) stealing the content outright.

    • nintendiator@feddit.cl · 10 months ago

      Are you kidding? #3 is the second most feasible one of that set; it’s just a matter of setting up Reproducible / Deterministic Builds.

      If you can’t replicate a result given control of the software version + the art inputs + the randomness seed, then “something else is going on”. A minimal sketch of what that looks like in practice is below.
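
      For concreteness, here is what pinning the randomness down looks like, assuming a PyTorch-style pipeline (`seed_everything` is an illustrative name of mine, not a standard API):

      ```python
      # Minimal sketch: pin every RNG so a generation run can be replayed.
      # (Assumes a PyTorch-style stack; seed_everything is an illustrative name.)
      import random

      import numpy as np
      import torch

      def seed_everything(seed: int) -> None:
          random.seed(seed)                 # Python's built-in RNG
          np.random.seed(seed)              # NumPy RNG
          torch.manual_seed(seed)           # CPU RNG
          torch.cuda.manual_seed_all(seed)  # all CUDA device RNGs (no-op without a GPU)

      seed_everything(42)  # same seed + same code + same inputs -> same output
      ```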

      • xor@infosec.pub · 10 months ago

        deterministic builds?
        the “builds” in ai are thousands of hours of supercomputers randomly mutating and evolving a gigantic neural network…
        the inner workings of such a network are very much a black box.

        to try to save that in a perfectly reproducible way is completely unreasonable, and simply will never happen.
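
        (for what it’s worth, frameworks do expose a switch for this… a sketch below, assuming PyTorch; it raises an error on any op that has no deterministic implementation, which rather proves the point:)

        ```python
        # Sketch: asking PyTorch for bitwise-deterministic kernels.
        # Ops with no deterministic implementation raise a RuntimeError,
        # which is exactly why "perfectly reproducible" training is hard.
        import os

        import torch

        os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # needed by some CUDA ops
        torch.use_deterministic_algorithms(True)
        torch.backends.cudnn.benchmark = False  # kernel autotuning is nondeterministic
        ```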

        you could require all of the art inputs to be documented and saved, but people would lie, and you’re talking about a very large amount of data being saved for however long… also not really reasonable…
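
        (the closest practical version of “documenting the inputs” is probably recording a content hash per file rather than keeping the files themselves… a purely illustrative sketch, not a policy proposal:)

        ```python
        # Sketch: record a SHA-256 content hash per training file, so inputs
        # can be audited later without storing every file forever.
        # (Illustrative only; not a policy proposal.)
        import hashlib
        from pathlib import Path

        def manifest(data_dir: str) -> dict[str, str]:
            hashes = {}
            for path in sorted(Path(data_dir).rglob("*")):
                if path.is_file():
                    hashes[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
            return hashes
        ```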

        and you also have to understand that there are a lot of countries in the world, computers are all connected over the internet, and ai will just run in other countries… illegal systems would run in whatever country is dumb enough to try to put completely unreasonable and expensive extra requirements like that on it.

        there’s a whole field of study (interpretability research) trying to reverse engineer neural networks after they’re created… i.e. it’s a black box even to the people that make it
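
        (the basic move in that field looks something like this… a toy sketch using PyTorch forward hooks to peek at activations the model never exposes on its own:)

        ```python
        # Toy sketch of "reverse engineering" a trained network: a forward hook
        # captures intermediate activations so researchers can study what the
        # hidden layers are actually doing.
        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
        captured = {}

        def save_activation(module, inputs, output):
            captured["hidden"] = output.detach()

        model[1].register_forward_hook(save_activation)  # hook the ReLU layer
        model(torch.randn(1, 8))
        print(captured["hidden"].shape)  # torch.Size([1, 16])
        ```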

    • mods_are_assholes@lemmy.world · 10 months ago

      The only way to make a clear-text LLM would be to convert most of the hard-drive storage humanity produces over the next ten years into storage for it, and we’d need about 1/4 the processing power of bitcoin mining to have it run at ChatGPT speeds.

      That said, black-box self-modifying AIs will be the models that win the usefulness wars, and if one country outlaws them, the only result is that it will have no defense against countries that don’t feel the need to comply.

        • xor@infosec.pub · 10 months ago

        so, your first paragraph isn’t true. but i’ll point out that bitcoin is mined entirely with ASIC chips now, which only compute bitcoin’s hashes… they can’t compute anything else, so it’s not really comparable…
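
        (for reference, the one thing a mining ASIC computes is double SHA-256 over an 80-byte block header… fixed-function silicon for this is useless for the matrix math neural nets need:)

        ```python
        # Sketch: bitcoin's proof-of-work hash is just SHA-256 applied twice
        # to the 80-byte block header; mining ASICs are hard-wired for this
        # one operation and can compute nothing else.
        import hashlib

        def btc_hash(header: bytes) -> bytes:
            return hashlib.sha256(hashlib.sha256(header).digest()).digest()

        print(btc_hash(b"\x00" * 80).hex())  # dummy all-zero header
        ```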

        second part i do agree with except for self-modifying… although that doesn’t seem too far away…

          • mods_are_assholes@lemmy.world · 10 months ago

          You really don’t understand how LLM data blobs are created, do you? Nor how ridiculously compressed they are?
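
          (a back-of-envelope on the compression point, using rough public figures from the GPT-3 paper… every number here is a ballpark assumption, not an exact figure:)

          ```python
          # Back-of-envelope: trained weights vs. raw training text (GPT-3-scale).
          # Rough public figures; treat every number here as an approximation.
          params = 175e9          # ~175B parameters (GPT-3)
          bytes_per_param = 2     # fp16 weights
          weights_gb = params * bytes_per_param / 1e9          # ~350 GB of weights
          raw_corpus_gb = 45_000  # ~45 TB of raw Common Crawl before filtering
          print(f"weights ~{weights_gb:,.0f} GB vs raw text ~{raw_corpus_gb:,} GB")
          # -> the trained blob is roughly two orders of magnitude smaller
          ```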