What Is Meta AI? Everything to Know About the Social Network's AI Tools


If you use Facebook, Messenger, Instagram or WhatsApp, you've probably come across something called Meta AI without even realizing it. It's woven into how you interact on these apps, from helping you with posts to editing images.


The company's goal for Meta AI is to become your ultimate personal virtual assistant, with free, unlimited access to its AI models integrated across Meta's family of apps. Meta's pitch at its Connect 2024 event in late September was to make AI tools more fun, accessible and user-friendly.

With the latest upgrades, Meta AI aims to go beyond basic chatbot functions and offer a multimodal, multilingual AI assistant that can handle complex tasks. Here's what to know about the social network giant's artificial intelligence tools.

Meta AI, and where you use it

Beyond its use in apps, Meta AI also refers to Meta's academic research laboratory, formerly known as Facebook Artificial Intelligence Research before Facebook the company (not the social media platform) rebranded to Meta in October 2021. The name reflects the company's focus on the metaverse, and the lab develops AI technology to power everything from chatbots to virtual reality and augmented reality experiences.

Meta AI isn't the only player in the race to integrate artificial intelligence into everyday life. Google has its own AI tools, like Google Assistant and Gemini, its free chatbot akin to ChatGPT.

While Google's AI focuses more on productivity, like search results or managing schedules, Meta AI is embedded in your social interactions, offering assistance without you having to ask. With Meta AI, you can snap a photo and ask the assistant to identify what's in it, or edit the image with text prompts.

[Screenshot: Meta AI editing a photo via text requests. Screenshot by CNET]

Similarly, Amazon's Alexa and Apple's Siri are task-oriented assistants, while ChatGPT and Snapchat's My AI focus on conversational experiences.

But Meta AI goes a step further, blending all those features into what Meta pitches as an "everyday experience" rather than a standalone tool. So while those other tools feel like something you consciously use, Meta AI quietly shapes how you connect with others and create content.

It's almost sneaky in how seamlessly it integrates into the social platforms people use daily, making AI tools harder to avoid. By typing "@" followed by "Meta AI," you can summon the assistant in chats, even group chats, to offer suggestions, answer questions or edit images.

This AI integration also extends to the search functions within Meta's apps, making it easier and more intuitive to find content and explore topics based on what you see in your feed -- what Meta calls a "contextual experience."

Following ChatGPT's path, Meta AI now supports natural voice conversations. It's multilingual, speaking English, French, German, Hindi, Hindi-Romanized script, Italian, Portuguese and Spanish. Soon, you'll also be able to choose from various celebrity voices for the assistant, including John Cena and Kristen Bell.

Meta AI is currently available in 21 countries outside of the US: Argentina, Australia, Cameroon, Canada, Chile, Colombia, Ecuador, Ghana, India, Jamaica, Malawi, Mexico, New Zealand, Nigeria, Pakistan, Peru, Singapore, South Africa, Uganda, Zambia and Zimbabwe. 

Though Meta AI isn't available in the EU, the company says it might later join the EU's AI Pact, a voluntary pledge to start complying with the AI Act ahead of its legal deadlines. The AI Act requires companies to provide "detailed summaries" of the data used to train their models -- a requirement Meta has been hesitant to meet, likely because of its history with data privacy lawsuits.

Glasses as a new AI device 

Meta CEO Mark Zuckerberg introduced new multimodal features, powered by the company's open-source Llama 3.2 models, at the Connect event in September, where Meta's team emphasized the future of computing and human connection.

One of the biggest announcements from Connect 2024 was the integration of Meta AI into everyday products like the Ray-Ban Meta glasses. The glasses can now assist you in various ways, like remembering where you parked your car (woohoo!).

The glasses can also take actions based on what you're looking at. For example, you can ask AI to make a call or scan a QR code for you.

[Screenshot: Meta AI identifying elements on a flyer. Image: Meta]

Other products presented included the Meta Quest 3S, a new version of the company's standalone virtual reality (VR) headset that Meta now bills as a mixed reality headset, and Orion, its prototype holographic AR glasses, which have been in the making for over a decade.

Though Ray-Ban Meta glasses and Quest devices are available in 15 countries, including some European ones, Meta AI is currently available on those devices only in the US and Canada.

Live translation

Meta also announced developments in AI translation. Meta glasses will be able to translate for you in real time, so if someone speaks to you in Spanish, French or Italian, you'll be able to hear them in your ear in English.

Another major development, though still experimental, is AI dubbing of Reels between Spanish and English, with automated lip-syncing. Presumably, if testing goes well, Meta will expand the feature to more languages.
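Meta hasn't detailed the pipeline behind the glasses' live translation or Reels dubbing, but it has open-sourced related translation research, such as its NLLB (No Language Left Behind) models. Here's a minimal sketch of text translation with the public NLLB checkpoint on Hugging Face, for illustration only; it shows the general approach, not Meta AI's production system.

```python
# Minimal sketch: text translation with Meta's open NLLB model.
# Illustrates the general approach; Meta AI's production pipeline
# (including speech-to-speech) is not public.
from transformers import pipeline

# NLLB uses FLORES-200 language codes, e.g. spa_Latn for Spanish.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="spa_Latn",
    tgt_lang="eng_Latn",
)

result = translator("¿Dónde está la estación de tren?")
print(result[0]["translation_text"])  # e.g. "Where is the train station?"
```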

AI Studio

AI Studio, for now available only in the US, lets users and businesses create custom AI chatbots without extensive programming knowledge. These so-called AI characters serve as extensions of their creators or brands, enabling more engaging interactions with followers and customers.

[Screenshot: a Meta AI chatbot chatting with a user. Screenshot by Barbara Pazur/CNET]

For transparency, all replies generated by these AI characters will be labeled as AI-generated.

The power behind Meta AI

Llama (Large Language Model Meta AI) is a family of large language models (LLMs) designed to understand and generate human-like text, answer questions, write and even hold conversations.

Llama 3.2 is the latest version of this LLM family and Meta's first open-source multimodal model, enabling applications that require visual understanding. Meta calls it the company's "most advanced open-source model, with flexibility, control and state-of-the-art capabilities that rival the best closed-source models."

The new Llama 3.2 lineup includes two multimodal variants with 11B and 90B parameters, and lightweight text-only models with 1B and 3B parameters. Parameters are the internal weights a model tunes during training to map inputs, like words or images, to outputs; counts are given in billions (the B), and more parameters generally mean more capability at a higher compute cost.

The lightweight models are optimized to run on-device, on mobile hardware and wearables like glasses.
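Because the Llama weights are openly downloadable, developers can run the smaller models themselves. Here's a minimal sketch using the Hugging Face transformers library and the license-gated meta-llama/Llama-3.2-1B-Instruct checkpoint; it illustrates running an open Llama model, not how Meta AI itself is served.

```python
# Minimal sketch: run a small Llama 3.2 model locally with transformers.
# Assumes you've accepted Meta's license for the gated
# meta-llama/Llama-3.2-1B-Instruct checkpoint on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# The "1B" in the name is the parameter count; verify it directly.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.1f}B parameters")

# Chat-style generation via the model's built-in chat template.
messages = [{"role": "user", "content": "In one sentence, what is an LLM parameter?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=80)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The 1B model is small enough to run on consumer hardware, which is the point of the lightweight variants.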

What's next?

According to the company, Meta AI is set to become the world's most widely used AI assistant by the end of the year. Over 400 million people interact with Meta AI monthly, with 185 million using it across Meta's products each week. 
