Apple’s Image Playground app is said to have some bias issues. A machine learning scientist recently shared several outputs generated using the artificial intelligence (AI) app and claimed that it rendered incorrect skin tone and hair texture on several occasions. These inaccuracies were also said to be paired with specific racial stereotypes, adding to the problem. It is difficult to say whether the alleged issue is a one-off incident or a widespread problem. Notably, the Cupertino-based tech giant first released the app as part of the Apple Intelligence suite with the iOS 18.2 update.
Apple’s Image Playground App Might Have Bias Issues
Jochem Gietema, the Machine Learning Science Lead at Onfido, shared a blog post highlighting his experiences using Apple’s Image Playground app. In the post, he shared several sets of outputs generated using the Image Playground app and highlighted instances of racial bias by the large language model powering the app. Notably, Gadgets 360 staff members did not notice any such biases while testing the app.
“While experimenting, I noticed that the app altered my skin tone and hair depending on the prompt. Professions like investment banker vs. farmer produce images with very different skin tones. The same goes for skiing vs. basketball, streetwear vs. suit, and, most problematically, affluent vs. poor,” Gietema said in a LinkedIn post.
Alleged biased outputs generated using the Image Playground app
Photo Credit: Jochem Gietema
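For readers who want to probe the behaviour themselves, a paired-prompt comparison in the spirit of Gietema’s experiment can be sketched as follows. This is a minimal illustration, not his methodology: it assumes the generated images have already been saved manually to a local folder (one PNG per prompt), and it uses the average RGB value of each image as a very crude proxy for rendered skin tone.

```python
# Minimal sketch of a paired-prompt bias check, assuming the Image Playground
# outputs have been exported manually to playground_outputs/<prompt>.png.
from pathlib import Path

import numpy as np
from PIL import Image

# Contrasting prompt pairs reported to produce noticeably different skin tones.
PROMPT_PAIRS = [
    ("investment banker", "farmer"),
    ("skiing", "basketball"),
    ("streetwear", "suit"),
    ("affluent", "poor"),
]


def mean_tone(path: Path) -> np.ndarray:
    """Average RGB value of an image; a crude stand-in for rendered skin tone."""
    img = Image.open(path).convert("RGB").resize((64, 64))
    return np.asarray(img, dtype=np.float32).reshape(-1, 3).mean(axis=0)


def audit(image_dir: Path) -> None:
    # Compare the average tone of each prompt pair and print the gap.
    for prompt_a, prompt_b in PROMPT_PAIRS:
        tone_a = mean_tone(image_dir / f"{prompt_a}.png")
        tone_b = mean_tone(image_dir / f"{prompt_b}.png")
        delta = float(np.abs(tone_a - tone_b).mean())
        print(f"{prompt_a!r} vs {prompt_b!r}: mean RGB difference {delta:.1f}")


if __name__ == "__main__":
    audit(Path("playground_outputs"))
```

A large, consistent gap across many regenerations would only be suggestive; a proper audit would isolate face pixels and repeat each prompt many times before drawing any conclusions.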
Such inaccuracies and biases are not uncommon with LLMs, which are trained on large datasets that may contain similar stereotypes. Last year, Google’s Gemini AI model faced backlash for similar biases. However, companies are not entirely helpless to prevent such generations and often implement multiple layers of safety measures to prevent them.
Apple’s Image Playground app also comes with certain restrictions to prevent issues associated with AI-generated images. For instance, the Apple Intelligence app only supports cartoon and illustration styles to avoid instances of deepfakes. Additionally, the generated images use a narrow field of view that usually captures only the face along with a small amount of additional detail. This is also done to limit such instances of bias and inaccuracy.
The tech giant also does not allow prompts that contain negative words, the names of celebrities or public figures, and more, to limit users from abusing the tool for unintended use cases. However, if the allegations are true, the iPhone maker will need to add further layers of safety to ensure users do not feel discriminated against while using the app.
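To illustrate the kind of prompt-level guardrail described above, a simple keyword filter might look like the sketch below. This is not Apple’s actual implementation; the blocked terms and names are invented purely for the example.

```python
# Illustrative prompt filter: reject prompts containing blocked terms or
# public-figure names. The lists here are hypothetical examples only.
BLOCKED_TERMS = {"ugly", "criminal"}            # hypothetical negative words
PUBLIC_FIGURES = {"taylor swift", "elon musk"}  # hypothetical name list


def is_prompt_allowed(prompt: str) -> bool:
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False
    if any(name in lowered for name in PUBLIC_FIGURES):
        return False
    return True


print(is_prompt_allowed("an investment banker on a farm"))  # True
print(is_prompt_allowed("Taylor Swift as a farmer"))        # False
```

In practice, production systems tend to pair such keyword lists with model-based classifiers, since simple substring matching is easy to bypass and prone to false positives.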