Lapses in safeguards led to a wave of sexualized images this week, as xAI says it is working to improve its systems

Elon Musk’s chatbot Grok posted on Friday that lapses in safeguards had led it to generate “images depicting minors in minimal clothing” on social media platform X. The chatbot, a product of Musk’s company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts.

Screenshots shared by users on X showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.

  • recentSlinky@lemmy.ca · 8 days ago

    How did Grok get the training data to do that? Didn’t Elon say before that he was handling the training himself? 🤔

    • turdas@suppo.fi · 8 days ago

      Image models can generate things that don’t exist in the training set, that’s kind of the point.

      • RepleteLocum@lemmy.blahaj.zone · 7 days ago

        No. They can’t. Grok most likely fused children from ads and other sources where they’re lightly clothed with naked adult women. LLMs can only create things similar to what they have been given.

        • turdas@suppo.fi · 7 days ago

          The images aren’t generated by the LLM part of Grok; they’re generated by a diffusion image model that the LLM is able to prompt.

          And of course they can create things that don’t exist in the training set. That’s how you get videos of animals playing instruments and doing Fortnite dances and building houses, or slugs with the face of a cat, or fake doorbell camera videos of people getting sucked into tornadoes. These are all brand new brainrot that definitely did not exist in the training set.

          You clearly do not understand how diffusion models work.
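          The two-stage architecture described above (an LLM that writes the conditioning text, and a separate diffusion model that renders pixels) can be sketched roughly as follows. This is a minimal illustration with hypothetical stub functions, not Grok’s actual implementation; the function names and prompt format are invented for the example.

          ```python
          # Hypothetical sketch of a two-stage text-to-image pipeline:
          # an LLM rewrites the user's request into a detailed image
          # prompt, then a diffusion model renders it. Both stages are
          # stubbed out here; in a real system the second stage would
          # run iterative denoising conditioned on the prompt text.

          def llm_expand_prompt(user_request: str) -> str:
              """Stand-in for the LLM stage: turn a short request
              into a detailed conditioning prompt."""
              return f"high-detail photo, {user_request}, natural lighting"

          def diffusion_generate(image_prompt: str) -> bytes:
              """Stand-in for the diffusion stage, which produces the
              actual image bytes from the conditioning text."""
              return f"<image rendered from: {image_prompt}>".encode()

          def generate_image(user_request: str) -> bytes:
              # The LLM never draws pixels; it only produces the text
              # that conditions the image model.
              return diffusion_generate(llm_expand_prompt(user_request))

          print(generate_image("a slug with the face of a cat").decode())
          ```

          The point of the split is that safety filtering has to happen at both stages: a refusal by the LLM is useless if users can steer the conditioning text that reaches the image model.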

    • KeenFlame@feddit.nu · 8 days ago

      No, he forced himself on the system prompt, every engineer probably tried to explain it doesn’t work like that, and then we got MechaHitler. Then someone stealth-reverted the idiotic paragraphs. That’s my guess; since then he got high, and they try to keep it from doing anything that would cause Elon to return to them.