Meta explains the AI behind its social media algorithms


By Webdesk


Meta has published an in-depth dive into the company’s social media algorithms in an effort to demystify how content is recommended to Instagram and Facebook users. In a blog post published Thursday, Meta’s President of Global Affairs Nick Clegg said the info dump on the AI systems behind its algorithms is part of the company’s “broader ethos of openness, transparency and accountability,” outlining what Facebook and Instagram users can do to better control what content they see on the platforms.

“With rapid advancements taking place with powerful technologies like generative AI, it’s understandable that people are both excited about the possibilities and concerned about the risks,” Clegg said in the blog. “We believe openness is the best way to respond to those concerns.”

There are now 22 “system cards” available that explain how content is ranked and recommended for Facebook and Instagram users

Most of the information is contained in 22 “system cards” covering Feed, Stories, Reels, and other ways people discover and consume content on Meta’s social media platforms. Each of these cards provides detailed yet accessible information about how the AI systems behind these features rank and recommend content. For example, the card for Instagram Explore — a feature that shows users photos and Reels content from accounts they don’t follow — explains the three-step process behind the feature’s AI system.

The card states that Instagram users can influence this process by saving content (indicating that the system should show them similar things) or by marking it as “not interested” to encourage the system to filter out similar content in the future. Users can also see Reels and photos that haven’t been specifically selected for them by the algorithm by choosing “Not personalized” in the Explore filter. More information about Meta’s predictive AI models, the input signals used to drive them, and how often they are used to rank content is available through the company’s Transparency Center.

Instagram is testing a feature that will allow users to mark Reels as “Interested” to see similar content in the future

In addition to the system cards, the blog post lists a few other Instagram and Facebook features that can inform users why they see certain content and how to adjust their recommendations. Meta is expanding its “Why am I seeing this?” feature to Facebook Reels, Instagram Reels, and Instagram’s Explore tab in “the coming weeks.” This allows users to tap on an individual reel to find out how their previous activity may have influenced the system to show it to them. Instagram is also testing a new Reels feature that will allow users to mark recommended reels as “Interested” to see similar content in the future. The ability to mark content as “Not interested” has been available since 2021.

Meta also announced that in the coming weeks it will begin rolling out its Content Library and API, a new set of tools for researchers that will include a large volume of public data from Instagram and Facebook. Data from this library can be searched, explored, and filtered, and researchers can request access to these tools through approved partners, starting with the University of Michigan’s Inter-university Consortium for Political and Social Research. Meta claims these tools will provide “the most comprehensive access to publicly available content on Facebook and Instagram of any research tool we’ve built to date,” in addition to helping the company meet its data sharing and transparency obligations.

Those transparency commitments are possibly the biggest factor pushing Meta to better explain how it uses AI to shape the content we see and interact with. The explosive development of AI technology, and its surge in popularity in recent months, has caught the attention of regulators around the world, who have raised concerns about how these systems collect, manage, and use our personal data. Meta’s algorithms aren’t new, but its mismanagement of user data during the Cambridge Analytica scandal is likely a motivating reminder to over-communicate.
