Analyst says OpenAI's new AI model crosses 'close to new chasm' in emotional intelligence

CNBC International TV
14 May 2024 · 04:02

TLDR: OpenAI's latest AI model, GPT-4, is making waves for its advancements in emotional intelligence and multimodal capabilities. In a demonstration, a virtual assistant taught a student math while showing soft skills such as emotion recognition and real-time translation between English and Italian. Fred H., head of US AI and software research at Macquarie, discusses the potential impact on consumer adoption, suggesting that while chatbots like ChatGPT already meet consumer needs with over 100 million monthly active users, the new model could offer a more humanlike interaction. This advancement could be particularly beneficial for overcoming language barriers in travel and international business, although its effectiveness in dynamic conversations remains to be seen. The analyst views the development as an exciting addition to the AI arsenal, with the potential to both disrupt and facilitate various industries.


  • 🤖 OpenAI's new AI model, GPT-4, demonstrates emotional intelligence and multimodal capabilities.
  • 📈 The model's ability to recognize and respond to emotions is a significant step forward in AI-human interaction.
  • 🗣️ GPT-4 can translate speech live between English and Italian, enhancing cross-language communication.
  • 🌐 The launch of GPT-4 could potentially disrupt industries by facilitating global commerce and personal connections.
  • 📊 ChatGPT already has over 100 million monthly active users, indicating a strong market fit for AI assistants.
  • 🔍 The new chatbot model is suspected to be the mysterious 'gpt2-chatbot' that OpenAI had been testing publicly.
  • 🚀 GPT-4's advancements could make AI interactions more humanlike, moving beyond the limitations of current voice assistants like Siri and Alexa.
  • 🧳 The model could be a game-changer for travelers, helping to overcome language barriers in different countries.
  • 📝 The practical application of GPT-4 in real-world scenarios, such as long conversations, remains to be seen.
  • 🔧 Other AI tools like Google Translate and potential visual translation glasses could complement GPT-4's capabilities.
  • 🔮 There is excitement and anticipation about the potential of GPT-4 to revolutionize how we interact with AI in daily life.

Q & A

  • What is the new feature of OpenAI's GPT-4 that was demonstrated in the video?

    -The new feature demonstrated is the ability of the AI to teach a student how to solve a math problem, rather than solving it for them, showcasing soft skills such as recognizing emotion, as well as live speech translation between English and Italian.

  • How does the new GPT-4 model differ from previous AI models in terms of emotional intelligence?

    -The GPT-4 model is said to have crossed a new chasm in emotional intelligence, allowing computers to understand and respond in a more humanlike manner, making interactions with AI more comfortable and natural.

  • What is the significance of the live speech translation feature for consumers?

    -The live speech translation feature could greatly facilitate international travel and communication by allowing users to understand and communicate with local populations in their native languages without the need for extensive language learning.

  • How does the analyst view the potential impact of GPT-4 on various industries?

    -The analyst is excited about the potential of GPT-4 to facilitate commerce and personal links across languages. However, he acknowledges that it could also be disruptive to industries that have not yet considered the implications of advanced AI translation capabilities.

  • What are some of the existing tools that the analyst mentions for language translation?

    -The analyst mentions Google Translate for visual translation, as well as Meta Ray-Ban glasses that can take pictures and provide information about what the wearer is looking at, as existing tools for language translation.

  • How does the analyst describe the current consumer market for AI assistants?

    -The analyst notes that chatbot platforms like ChatGPT are already popular, with over 100 million monthly active users, indicating a strong fit in the consumer market.

  • What is the analyst's personal experience with language barriers?

    -The analyst shares his upcoming trip to Korea for a friend's wedding and expresses his excitement about the potential of using GPT-4 to overcome the language barrier he faces due to his limited knowledge of Korean.

  • What is the analyst's expectation for the performance of GPT-4 in real-world scenarios?

    -The analyst is hopeful that GPT-4 will be able to handle dynamic conversations effectively, but he also mentions that it remains to be seen how well it works in the field, especially across longer conversations.

  • How does the analyst envision the future of AI translation services?

    -The analyst sees GPT-4 as another tool in the AI arsenal for translation services, which could be used for various applications, but he emphasizes the need to evaluate its performance in real-world use.

  • What is the potential benefit of GPT-4 for users who are not fluent in a foreign language?

    -GPT-4 could allow users to travel and communicate with people in different countries without the need for language guides, potentially facilitating both business and personal connections.

  • How does the launch of GPT-4 reflect the current trend in AI development?

    -The launch of GPT-4 reflects a trend towards multimodal AI capabilities, where AI is not just solving problems but also demonstrating soft skills and emotional intelligence, which are highly valued by consumers.

  • What is the potential downside of advanced AI translation systems like GPT-4?

    -While advanced AI translation systems can facilitate communication and commerce, they may also disrupt industries that rely on traditional language services, such as professional translators and tourism guides.



🚀 AI's New Multimodal Capabilities

The paragraph discusses the advancements in AI, specifically OpenAI's GPT-4 upgrade, which showcases a virtual assistant teaching a student to solve a math problem. It highlights the demonstration of soft skills such as recognizing emotions, along with real-time speech translation between English and Italian. The conversation suggests that these capabilities could significantly impact consumer adoption of AI by making interactions more humanlike and comfortable. The launch of the new chatbot model, suspected to be the 'gpt2-chatbot' OpenAI had been testing, is seen as a possible breakthrough that could bring consumer engagement with AI to a new level.




💡OpenAI

OpenAI is a research and deployment company that develops artificial general intelligence (AGI) and other AI technologies. In the video, OpenAI is highlighted for its new AI model, GPT-4, which demonstrates significant advancements in emotional intelligence and multimodal capabilities.

💡GPT-4 Upgrade

The GPT-4 Upgrade refers to the latest iteration of OpenAI's AI model. It is showcased in the video for its ability to teach and interact with humans in a more natural and emotionally intelligent manner, which is a significant leap from simply solving problems for users.

💡Virtual Assistant

A virtual assistant, as mentioned in the transcript, is an AI-powered entity that can perform tasks and provide services to users through digital means. In the context of the video, the virtual assistant is teaching a student how to solve a math problem, indicating a shift from performing tasks to aiding in learning.

💡Soft Skills

Soft skills are personal characteristics that enable someone to interact effectively with others. The transcript discusses how the AI model's demonstrations show soft skills such as recognizing emotions, which are crucial for more human-like interactions.

💡Translating Speech

The ability to translate speech in real-time is a feature highlighted in the video. It involves the AI model's capacity to translate English to Italian and vice versa instantly, which is a significant step towards more accessible and barrier-free communication.

💡Emotional Intelligence

Emotional intelligence is the ability to recognize, understand, and manage our own emotions as well as the emotions of others. The video emphasizes that the new AI model crosses a 'new chasm' by incorporating emotional intelligence, making interactions with AI more human-like.

💡Consumer Adoption

Consumer adoption refers to the process by which individuals and households begin to use a product or service. The transcript discusses how the advancements in AI, such as those made by OpenAI, could push consumer adoption over the line, as they become more willing to pay for AI services that offer enhanced emotional intelligence and interaction.

💡ChatGPT

ChatGPT is OpenAI's AI chatbot, capable of conversing with humans. In the video, it is mentioned as an example of existing technology that has already gained popularity among consumers, indicating a strong fit in the consumer market.


💡Multimodal

Multimodal refers to systems that can process and analyze data from multiple sources or modalities, such as text, speech, and images. The video discusses OpenAI's demonstration of multimodal capabilities, which is a hot topic and signifies a new era in AI interaction.

💡Language Barrier

A language barrier is a communication obstacle that arises when people speak different languages. The transcript explores how the new AI model could help overcome language barriers, facilitating travel, commerce, and personal connections across different linguistic communities.

💡AI Arsenal

The term 'AI Arsenal' metaphorically refers to the collection of AI tools and technologies available for use. The video suggests that the new AI model is an addition to this arsenal, potentially offering more dynamic and human-like conversational capabilities.


OpenAI's new AI model, GPT-4, demonstrates a significant upgrade in emotional intelligence and soft skills.

GPT-4 can recognize emotions and respond descriptively, enhancing its interactions with humans.

The model showcases live speech translation between English and Italian, adding a new dimension to AI capabilities.

Analyst Fred H., Head of US AI and Software Research, discusses the potential impact of GPT-4 on consumer adoption of AI.

ChatGPT already has over 100 million monthly active users, indicating a strong fit in the consumer market.

GPT-4's launch is said to cross close to a new chasm in AI, with computers able to demonstrate emotional intelligence.

The new chatbot model makes interactions with AI more humanlike, moving beyond the limitations of current voice assistants.

The potential for GPT-4 to facilitate global commerce and personal connections by overcoming language barriers is discussed.

The disruptive potential of GPT-4 on various industries is acknowledged, with a focus on its translation capabilities.

Fred H. shares his personal excitement about using GPT-4 to overcome language barriers during his upcoming trip to Korea.

GPT-4 could be a valuable addition to the arsenal of AI-related assistance tools, such as Google Translate and Meta Ray-Ban glasses.

The effectiveness of GPT-4 in dynamic conversations and longer interactions needs to be assessed in real-world scenarios.

The hope is that OpenAI's GPT-4 can handle dynamic conversations seamlessly, moving past the limitations of current voice assistants.

The potential of GPT-4 to revolutionize consumer interaction with AI and its broader societal implications are a topic of excitement and anticipation.

The analyst emphasizes the importance of seeing GPT-4 in action to understand its full capabilities and market impact.

GPT-4's ability to translate speech live could greatly facilitate international travel and communication.

The discussion highlights the transformative role of AI in making global interactions more accessible and less language-dependent.