The search results varied from user to user, but the Guardian verified through screenshots and its own tests that stickers portraying guns surfaced for all three of these prompts. Prompts for “Israeli boy” generated cartoons of children playing soccer and reading. In response to a prompt for “Israel army,” the AI created drawings of soldiers smiling and praying, with no guns involved.
Meta’s own employees have reported and escalated the issue internally, a person with knowledge of the discussions said.
This is my concern with generative AI. Companies can’t predict every problematic thing AI will create, but this wouldn’t be happening if a human had reviewed the images before they were generated or shared. Shipping these tools without safeguards in place is the problem. Hopefully this sort of scenario is accounted for going forward (doubt it).