
Google Play’s New Policy Update Targets Offensive AI Apps and Disruptive Notifications

Google’s Play Store Takes Aim at Offensive AI Content with New Reporting Policy

In a proactive move to address concerns around potentially problematic generative AI apps, Google is implementing a new policy aimed at strengthening user safety and content moderation. Starting early next year, developers of Android applications published on Google’s Play Store will be required to include a mechanism for users to report or flag offensive AI-generated content. The reporting and flagging flow must be available from within the app itself, and developers are encouraged to use these reports to inform their filtering and moderation decisions.
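Google has not published an API for this; how the flagging control is built is left to each developer. Purely as an illustration, here is a minimal Kotlin sketch of an in-app report action that forwards a user flag to the developer's own moderation backend. Every name in it (ReportReason, ContentReport, ModerationClient, flagGeneratedContent) is hypothetical and not part of any Google or Play Services API.

```kotlin
import java.time.Instant

// Hypothetical data model for a user-submitted flag on AI-generated content.
enum class ReportReason { SEXUAL_CONTENT, HATE_SPEECH, MISINFORMATION, OTHER }

data class ContentReport(
    val contentId: String,          // identifier of the generated text or image
    val reason: ReportReason,
    val userComment: String?,
    val createdAt: Instant = Instant.now()
)

// Hypothetical client that sends reports to the developer's own backend,
// where they can feed filtering and moderation (as the policy encourages).
interface ModerationClient {
    suspend fun submit(report: ContentReport)
}

// Called from the app's "Report this content" UI action.
suspend fun flagGeneratedContent(
    client: ModerationClient,
    contentId: String,
    reason: ReportReason,
    comment: String? = null
) {
    client.submit(ContentReport(contentId, reason, comment))
}
```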

The decision to implement this policy follows a surge in AI-generated apps, some of which have been associated with explicit or offensive content. For example, users have manipulated AI apps into creating NSFW imagery, and one app, Remini, faced backlash for subtly altering images, enhancing certain physical features while distorting others. There have also been concerns about open-source AI tools being misused to create child sexual abuse material (CSAM). And with AI-driven deepfakes and misinformation expected to escalate around elections, the new policy aims to mitigate those risks as well.

The new policy covers a range of AI-generated content, including text-to-text conversational AI chatbots where interaction with the chatbot is a central feature of the app, as well as apps that generate images from text, image, or voice prompts.

Google’s announcement emphasizes that all apps, including AI content generators, must adhere to existing developer policies, including restrictions on content like CSAM and actions that promote deceptive behavior.

In addition to addressing AI content apps, Google’s new policy introduces extra scrutiny for certain app permissions, particularly those requesting broad photo and video access. Apps will be granted access to photos and videos only when that access is integral to their core functionality. Apps with occasional or one-time needs will instead have to use a system picker, such as Android’s new photo picker.
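The system photo picker lets a user hand the app a single item without the app holding any broad media permission. A minimal Kotlin sketch using the androidx.activity photo picker contract follows; the activity name and the decision to request images only are illustrative assumptions, not part of the policy text.

```kotlin
import android.net.Uri
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

class AvatarPickerActivity : ComponentActivity() {

    // Register the system photo picker. No READ_MEDIA_* permission is needed:
    // the user picks one item and only that item is shared with the app.
    private val pickImage = registerForActivityResult(
        ActivityResultContracts.PickVisualMedia()
    ) { uri: Uri? ->
        if (uri != null) {
            // Use the selected image, e.g. upload it as a profile photo.
        }
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Launch the picker restricted to images only.
        pickImage.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}
```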

The policy also seeks to limit full-screen notifications to high-priority situations, a response to apps misusing them to promote subscriptions or offers. Under the new rules, apps targeting Android 14 and above will need a special permission, the Full Screen Intent permission, to use full-screen notifications, and it will be granted only to apps with a genuine need for them.
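In practice this maps onto Android’s existing USE_FULL_SCREEN_INTENT permission, which on Android 14 an app can check at runtime before attaching a full-screen intent. The following Kotlin sketch assumes androidx.core, a manifest entry for android.permission.USE_FULL_SCREEN_INTENT, and an illustrative incoming-call notification; the helper name and channel are hypothetical.

```kotlin
import android.app.NotificationManager
import android.app.PendingIntent
import android.content.Context
import android.os.Build
import androidx.core.app.NotificationCompat

// Hypothetical helper: builds a call-style notification and attaches a
// full-screen intent only when the app is actually allowed to use one.
// Requires <uses-permission android:name="android.permission.USE_FULL_SCREEN_INTENT" />
// in the manifest; on Android 14+ the permission can be denied or revoked.
fun buildCallNotification(
    context: Context,
    channelId: String,
    fullScreenPendingIntent: PendingIntent
): NotificationCompat.Builder {
    val builder = NotificationCompat.Builder(context, channelId)
        .setSmallIcon(android.R.drawable.sym_call_incoming)
        .setContentTitle("Incoming call")
        .setCategory(NotificationCompat.CATEGORY_CALL)
        .setPriority(NotificationCompat.PRIORITY_HIGH)

    val notificationManager =
        context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
    val canUseFullScreen =
        Build.VERSION.SDK_INT < Build.VERSION_CODES.UPSIDE_DOWN_CAKE ||
            notificationManager.canUseFullScreenIntent()

    if (canUseFullScreen) {
        // Shows the full-screen calling UI when the device is locked or idle.
        builder.setFullScreenIntent(fullScreenPendingIntent, true)
    }
    // Otherwise the notification simply appears as a regular heads-up notification.
    return builder
}
```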

While it’s unusual for Google to be at the forefront of introducing policies for AI apps, as Apple has historically led in this regard, this move demonstrates Google’s commitment to proactive content regulation. Although Apple has yet to develop a formal AI or chatbot policy for its App Store Guidelines, both companies are likely to be closely monitored as they address emerging concerns in the AI app ecosystem. The policy updates take effect today, and AI app developers have until early 2024 to implement the necessary flagging and reporting changes.
