Facebook AI researchers develop Blender chatbot that can converse as well as humans can

Facebook researchers have also open-sourced Blender, which they claim is the largest-ever open-domain chatbot.

Facebook launches an advanced open source chatbot (Getty Images/iStockphoto)

Facebook AI Research (FAIR), the artificial intelligence and machine learning division of the social networking company, has developed a new chatbot, 'Blender', that it says can converse as well as humans can. FAIR has also announced it is open-sourcing Blender. The research group claims Blender is the largest-ever open-domain chatbot.

The researchers say Blender combines a diverse set of conversational skills, including empathy, knowledge, and personality. They attribute Blender's success to improved decoding techniques and a model with 9.4 billion parameters, which they say is 3.6 times larger than the largest existing system.
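One decoding change FAIR highlights is constraining the minimum length of generated replies during beam search, which pushes the model away from short, dull responses. The toy sketch below is not FAIR's code: the stub "model", its four-word vocabulary, and all scores are invented for illustration, but it shows how such a minimum-length constraint changes what beam search returns.

```python
# Toy vocabulary and a stub "model": next-token log-probabilities that
# always favor ending early. A real chatbot would use a trained Transformer.
VOCAB = ["hello", "there", "friend", "<eos>"]

def next_token_logprobs(prefix):
    """Stub distribution that strongly prefers <eos>, mimicking the tendency
    of generative models to produce short, dull replies."""
    return {"hello": -2.0, "there": -2.5, "friend": -3.0, "<eos>": -0.5}

def beam_search(beam_size=2, max_len=6, min_len=0):
    """Beam search over the stub model.

    Each hypothesis is (tokens, cumulative log-probability). When min_len > 0,
    the <eos> token is blocked until the hypothesis is at least that long.
    """
    beams = [([], 0.0)]
    finished = []
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            for tok, lp in next_token_logprobs(tokens).items():
                if tok == "<eos>" and len(tokens) < min_len:
                    continue  # enforce the minimum-length constraint
                candidates.append((tokens + [tok], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for tokens, score in candidates[: beam_size * 2]:
            if tokens[-1] == "<eos>":
                finished.append((tokens, score))
            else:
                beams.append((tokens, score))
            if len(beams) == beam_size:
                break
        if not beams:
            break
    finished.extend(beams)  # treat any unfinished beams as complete
    best = max(finished, key=lambda c: c[1])
    return [t for t in best[0] if t != "<eos>"]

print(beam_search(min_len=0))  # unconstrained: the model stops immediately
print(beam_search(min_len=3))  # constraint forces a reply of >= 3 tokens
```

With no constraint the highest-scoring hypothesis is an immediate `<eos>` (an empty reply); with `min_len=3` the search is forced to emit at least three tokens before it may stop.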


Facebook researchers built on existing research to create the Blender bot, including PersonaChat, Wizard of Wikipedia, Empathetic Dialogues, and a new task called Blended Skill Talk used to train and evaluate these skills.

The researchers also claim Blender is superior to Google's Meena chatbot.

"When presented with chats showing Meena in action and chats showing Blender in action, 67 percent of the evaluators said that our model sounds more human, and 75 percent said that they would rather have a long conversation with Blender than with Meena," wrote researchers in a blog post.

An evolution of Facebook's conversational AI: Overview of human feedback to these chatbots (Facebook)


The researchers warn that while Blender is a marked improvement over earlier chatbots, it is still far from human-level conversational intelligence. They also caution that the model can make mistakes and even "hallucinate" knowledge, generating statements that sound plausible but are factually incorrect.


"We're currently exploring ways to further improve the conversational quality of our models in longer conversations with new architectures and different loss functions. We're also focused on building stronger classifiers to filter out harmful language in dialogues. And we've seen preliminary success in studies to help mitigate gender bias in chatbots," researchers added.