Google threatens to ban developers over AI content on Play Store
Google has updated its Play Store policy: to remain in the marketplace, developers of apps with AI-generated content must now let users report offensive output without leaving the app.
AI is everywhere now. From search engines to customer-facing chatbots, it is being integrated in many forms for many tasks, and it has begun appearing in mobile applications on both Android and iOS. Recognizing the potential risks of generative AI, Google has updated its Play Store policy to push for moderation of AI-generated content in apps. Under the new policy, developers must give users an option to report offensive AI-generated content, and must then use those reports to build content filters and moderation tools that protect users.
Google outlined the policy changes on its Android Developers Blog. It said, “Early next year, we'll be requiring developers to provide the ability to report or flag offensive AI-generated content without needing to exit the app. You should utilize these reports to inform content filtering and moderation in your apps – similar to the in-app reporting system required today under our User Generated Content policies”.
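Google's blog post does not prescribe a specific implementation, only that user reports must inform in-app filtering and moderation. As a minimal illustrative sketch (all class and method names here are hypothetical, not part of any Google API), reports could feed a simple phrase blocklist that screens subsequent AI output:

```java
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

// Hypothetical sketch: collect in-app reports of offensive AI-generated
// content and use them to inform a basic keyword filter, in the spirit of
// the reporting-to-moderation loop Google's policy describes.
public class AiContentModeration {

    // Phrases users have flagged via the in-app report option.
    private final Set<String> reportedPhrases = new HashSet<>();

    // Called when a user reports a piece of AI-generated content.
    public void reportOffensive(String phrase) {
        reportedPhrases.add(phrase.toLowerCase(Locale.ROOT));
    }

    // Screen new AI output against previously reported phrases.
    public boolean isAllowed(String aiOutput) {
        String lower = aiOutput.toLowerCase(Locale.ROOT);
        for (String phrase : reportedPhrases) {
            if (lower.contains(phrase)) {
                return false; // block content matching a reported phrase
            }
        }
        return true;
    }

    public static void main(String[] args) {
        AiContentModeration moderation = new AiContentModeration();
        System.out.println(moderation.isAllowed("a harmless sentence"));
        moderation.reportOffensive("bad phrase");
        System.out.println(moderation.isAllowed("text with a BAD PHRASE in it"));
    }
}
```

A production moderation pipeline would of course be far more sophisticated (classifier models, human review queues); this only illustrates the report-then-filter loop the policy asks for.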
Google focuses on AI moderation in Play Store apps
Google stated that the need for moderation is high because the Android community expects safe, high-quality experiences, which directly influence an app or game's long-term success in terms of installs, user ratings, and reviews. Safety is also a key consideration for Google itself, which is responsible for moderating the Android app store and ensuring that users, especially younger ones, are not exposed to harmful AI-generated content.
Alongside the AI rules, Google also expanded privacy protections on the Play Store. It highlighted that some app permissions requested by developers will require additional review by the Google Play team to ensure they do not violate the company's privacy standards.
“Under our new policy, apps will only be able to access photos and videos for purposes directly related to app functionality. Apps that have a one-time or infrequent need to access these files are requested to use a system picker, such as the Android photo picker,” Google further explained.
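For context on the system picker Google refers to, the Android photo picker is exposed through the AndroidX `ActivityResultContracts.PickVisualMedia` contract. A brief sketch of its use from inside an AndroidX activity (Android framework code, so it cannot run standalone; the `picker` variable name is illustrative):

```java
// Register a launcher for the system photo picker; no storage
// permission is needed when using it for one-off media access.
ActivityResultLauncher<PickVisualMediaRequest> picker =
    registerForActivityResult(
        new ActivityResultContracts.PickVisualMedia(),
        uri -> {
            if (uri != null) {
                // Handle the single image/video URI the user selected.
            }
        });

// Launch the picker, limiting the selection to images.
picker.launch(new PickVisualMediaRequest.Builder()
    .setMediaType(ActivityResultContracts.PickVisualMedia.ImageOnly.INSTANCE)
    .build());
```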
Finally, Google also alerted developers that it will further limit disruptive notifications from apps. Under this change, full-screen notifications will no longer be approved by default. “For apps targeting Android 14 and above, only apps whose core functionality requires a full-screen notification will be granted Full-Screen Intent permission by default and all others will need to request consent for use of this permission,” it said, adding that the permission is limited to high-priority use cases, such as alarms.
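In practice, the permission Google mentions is declared in an app's manifest; on Android 14 and above it is only pre-granted to apps whose core function warrants it, as the quote above describes. A minimal manifest fragment:

```xml
<!-- AndroidManifest.xml: declare the full-screen intent permission.
     On Android 14+ this is granted by default only to apps whose core
     functionality needs it (e.g. alarm apps); all others must ask the
     user to grant it. -->
<uses-permission android:name="android.permission.USE_FULL_SCREEN_INTENT" />
```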