Meta expects recommendation models that are “orders of magnitude” larger than GPT-4. Why?

By Webdesk


Meta made a remarkable claim in an announcement published today, one intended to shed more light on its content recommendation algorithms: it is gearing up for behavior analysis systems that are “orders of magnitude” larger than even the biggest large language models out there, including ChatGPT and GPT-4. Is that really necessary?

Every so often, Meta decides to burnish its commitment to transparency by explaining how some of its algorithms work. Sometimes that’s revealing or informative, and sometimes it just leads to more questions. This occasion is a bit of both.

In addition to the “system cards” that explain how AI is used in a particular context or app, the social and advertising network posted an overview of the AI models it uses. For example, it’s worth knowing whether a video depicts roller hockey or roller derby, even though there is some visual overlap, so that it can be recommended appropriately.

Indeed, Meta is one of the more prolific research organizations in multimodal AI, which combines data from multiple modalities (visual and auditory, for example) to better understand a piece of content.
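As a rough illustration of what “multimodal” means in practice, here is a toy late-fusion classifier. The encoders, weights, and labels below are invented for the example and bear no relation to Meta’s actual models:

```python
import numpy as np

# A toy late-fusion sketch of the multimodal idea above. The random
# "encoders" and the two labels are stand-ins for illustration only;
# Meta has not published what it actually uses.
rng = np.random.default_rng(0)

video_features = rng.normal(size=512)   # pretend output of a visual encoder
audio_features = rng.normal(size=128)   # pretend output of an audio encoder

# Fuse the modalities by simple concatenation, then classify.
fused = np.concatenate([video_features, audio_features])   # shape (640,)

labels = ["roller hockey", "roller derby"]
W = rng.normal(size=(2, fused.size))    # toy, untrained classifier weights
logits = W @ fused
probs = np.exp(logits - logits.max())   # softmax over the two labels
probs /= probs.sum()
print(dict(zip(labels, probs.round(3))))
```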

Few of these models are released publicly, although we often hear about how they are used internally to improve things like “relevance,” which is a euphemism for targeting. (Meta does give some researchers access to them.)

Then comes this interesting little tidbit in the section describing how Meta builds out its computing resources:

To thoroughly understand and model people’s preferences, our recommendation models can have tens of trillions of parameters — orders of magnitude larger than even the largest language models in use today.

I pressed Meta to get a little more specific about these tens-of-trillions-of-parameters models, which are exactly that: theoretical. In a clarifying statement, the company said, “We believe our recommendation models have the potential to reach tens of trillions of parameters.” That phrasing is a bit like saying your burgers “may” have 16-ounce patties while admitting they’re still at the quarter-pound stage. Nevertheless, the company clearly states that it aims “to ensure that these very large models can be efficiently trained and deployed at scale.”
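For a sense of what “tens of trillions” means in hardware terms, here is some back-of-envelope arithmetic. The bytes-per-parameter figures are my own illustrative assumptions, not anything Meta has published; it’s also worth noting that in recommendation models, the bulk of such parameter counts typically lives in sparse embedding tables rather than dense layers.

```python
# Back-of-envelope memory math for a hypothetical 10-trillion-parameter
# model. These are illustrative assumptions, not Meta's figures.
params = 10 * 10**12            # 10 trillion parameters

# Weights alone at half precision (2 bytes per parameter):
weights_tb = params * 2 / 10**12
print(f"fp16 weights: ~{weights_tb:.0f} TB")          # ~20 TB

# Training needs more: a common rule of thumb for Adam-style
# mixed-precision training is roughly 16 bytes per parameter
# (fp32 master weights, gradients, optimizer moments, fp16 copies).
training_tb = params * 16 / 10**12
print(f"rough training state: ~{training_tb:.0f} TB") # ~160 TB
```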

Would a company build costly infrastructure for software it has no intention of making or using? It seems unlikely, but Meta declined to confirm (though it also didn’t deny) that it is actively pursuing models of this size. The implications are clear enough: while we can’t say a tens-of-trillions-scale model exists, we can treat it as a genuine ambition that is probably in the works.

“Understand and model people’s preferences,” by the way, should be read as user behavior analysis. Your actual preferences could probably be captured in a hundred-word plain-text list. It is hard to see, on a fundamental level, why you would need a model this large and complex to produce recommendations, even for a few billion users.

The truth is that the problem space really is huge: there are billions upon billions of pieces of content, all with associated metadata, and no doubt all sorts of complex vectors showing that people who follow Patagonia are also inclined to donate to the World Wildlife Fund, buy increasingly expensive bird feeders, and so on. So perhaps it isn’t surprising that a model trained on all that data would be quite large. But orders of magnitude larger than even the biggest models out there, systems trained on practically every written work available?
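To make that vector talk concrete, here is a minimal sketch of the embedding intuition: users and items live in a shared vector space, and affinity is just a dot product. All names, dimensions, and values are invented for illustration; real systems learn these vectors from behavior at vastly larger scale.

```python
import numpy as np

# Users and items embedded in a shared space; affinity scored by a
# dot product. Everything here is made up for illustration.
rng = np.random.default_rng(1)
dim = 8

user = rng.normal(size=dim)     # one user's taste vector
items = {
    "patagonia_post": rng.normal(size=dim),
    "wwf_donation_drive": rng.normal(size=dim),
    "bird_feeder_listing": rng.normal(size=dim),
}

# Rank content by similarity to the user's vector.
scores = {name: float(user @ vec) for name, vec in items.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.3f}")
```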

There is no reliable parameter count for GPT-4, and leaders in the AI world have come to see raw parameter counts as a reductive measure of performance anyway. But GPT-3, which ChatGPT was originally built on, sits around 175 billion parameters, and GPT-4 is believed to be larger than that, though well short of the breathless 100-trillion claims that have circulated. Even if Meta is exaggerating a bit, this is still scary big.

Think about it: an AI model as big as or bigger than any yet made… what goes in on one end is every action you take on Meta’s platforms, and what comes out the other is a prediction of what you will do or like next. Pretty creepy, isn’t it?

Of course, Meta is not the only one doing this. TikTok pioneered algorithmic tracking and recommendation, building its social media empire on an addictive feed of “relevant” content designed to keep you scrolling until your eyes hurt. Its competitors are openly jealous.

Meta is clearly focused on dazzling advertisers with science, both with a stated ambition to create the biggest model around, and with passages like the following:

These systems understand people’s behavioral preferences using very large-scale attention models, graph neural networks, few-shot learning, and other techniques. Recent key innovations include a new hierarchical deep neural retrieval architecture, which has allowed us to significantly outperform several state-of-the-art baselines without increasing inference latency; and a new ensemble architecture that uses heterogeneous interaction modules to better model the factors relevant to people’s interests.

The paragraph above isn’t meant to impress researchers (who already know all this) or users (who neither understand nor care). But put yourself in the shoes of an advertiser beginning to question whether their money is well spent on Instagram ads rather than the alternatives. This tech talk is meant to dazzle them, to convince them that Meta is not just a leader in AI research but that its AI genuinely excels at “understanding” people’s interests and preferences.
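For the curious, here is a rough sketch of the generic retrieve-then-rank pattern that language like this usually describes. To be clear, Meta hasn’t published its architecture (“hierarchical deep neural retrieval” and “heterogeneous interaction modules” are not public designs), so everything below is a stand-in, not the company’s actual system:

```python
import numpy as np

# A generic two-stage recommender sketch, NOT Meta's architecture.
# It shows why the retrieve-then-rank split keeps latency low: the cheap
# stage scans everything, the expensive stage sees only a short list.
rng = np.random.default_rng(2)
dim, n_items = 16, 10_000

user_vec = rng.normal(size=dim)              # output of a "user tower"
item_vecs = rng.normal(size=(n_items, dim))  # output of an "item tower"

# Stage 1: cheap retrieval -- score every item with one matrix-vector
# product and keep the top 50 candidates.
scores = item_vecs @ user_vec
candidates = np.argsort(-scores)[:50]

# Stage 2: a heavier ranker over the short list only. This random
# projection over elementwise user-item features stands in for a
# learned interaction module.
w = rng.normal(size=dim)
rerank_scores = (item_vecs[candidates] * user_vec) @ w
top10 = candidates[np.argsort(-rerank_scores)[:10]]
print("Top 10 recommended item ids:", top10.tolist())
```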

In case you’re in doubt, “more than 20 percent of the content in someone’s Facebook and Instagram feeds is now recommended by AI from people, groups, or accounts they don’t follow.” Exactly what we asked for! So that’s that. AI works great.

But all of this is also a reminder of the hidden machinery at the heart of Meta, Google, and the other companies whose main motivating principle is selling ads with ever more granular and precise targeting. The value and legitimacy of that targeting needs constant reiteration, even as users revolt and ads multiply and insinuate themselves rather than improve.

Meta has never done anything so sensible as presenting me with a list of ten brands or hobbies and asking which I like. The company would rather look over my shoulder as I scour the web for a new raincoat, then pretend it’s a feat of advanced artificial intelligence when it shows me raincoat ads the next day. It is not at all clear that the latter approach is superior to the former, and if it is, by how much. The entire web has been built on a collective belief in accurate ad targeting, and now the latest technology is being deployed to prop it up for a new, more skeptical wave of marketing spending.

Of course you need a model with ten trillion parameters to tell you what people like. How else could you justify the billions of dollars spent training it!


