GPT-4.5 is here and it's more emotional than before

Hi pals! Here’s what you need to know about AI today:
👉 OpenAI finally released GPT-4.5
👉 A new agentic AI tool that can process 400 sources at once
👉 Meta wants to launch a standalone app for Meta AI
and many more!
📧 Did someone forward you this email? Subscribe here for free to get the latest AI news every day!
Read time: 5.1 minutes

OpenAI
GPT-4.5 is here and it's more emotional than before

Source: OpenAI
What’s going on: OpenAI has finally launched GPT-4.5, code-named “Orion”, the company’s largest AI model to date, trained with more computing power and data than any of its previous models. Access starts with ChatGPT Pro subscribers ($200 per month), who can explore it through a research preview immediately; developers on paid API tiers gain access the same day, and other ChatGPT users, including those on Plus and Team plans, are expected to get it the following week.
What does it mean: We have a new GPT model. Cool! But it’s not that impressive or ground-breaking. It’s certainly not AGI, and its reasoning performance even falls behind some of OpenAI’s other models. GPT-4.5 also comes with notable costs and limitations: OpenAI admits that operating the model is exceptionally expensive, and because of that it has rethought its API pricing, charging developers $75 per million input tokens and $150 per million output tokens, significantly higher than the rates for GPT-4o, DeepSeek-R1, and Grok-3.
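For a rough sense of what that pricing means per request, here’s a minimal back-of-the-envelope sketch in Python using the per-token rates quoted above; the example prompt and reply sizes are arbitrary, and this is illustrative arithmetic rather than an official pricing calculator:

```python
# Estimate the cost of one GPT-4.5 API request from the quoted rates:
# $75 per 1M input tokens and $150 per 1M output tokens.
INPUT_RATE = 75 / 1_000_000    # USD per input token
OUTPUT_RATE = 150 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt with a 500-token reply
print(f"${estimate_cost(2_000, 500):.4f}")  # -> $0.2250
```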
More details:
Performance-wise, GPT-4.5 outshines its predecessor, GPT-4o, and other models on benchmarks like SimpleQA, where it achieves higher factual accuracy and fewer hallucinations, though it lags behind top reasoning models like o3-mini and Anthropic’s Claude 3.7 Sonnet on more complex academic tests such as AIME and GPQA.
Feature-wise, GPT-4.5 integrates with tools like file uploads and ChatGPT’s canvas but lacks advanced functionalities such as two-way voice mode.
Initial tests show that GPT-4.5 provides a more natural interaction experience, thanks to its expanded knowledge base, enhanced ability to understand user intent, and increased emotional intelligence (EQ).
The model was developed using new supervision techniques combined with supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), aiming to enhance both capabilities and safety.
Interested to learn more? Read OpenAI’s official blog post for GPT-4.5.
ENTERPRISE AI
An AI agent that can process 400 sources at once
Source: You.com
What’s going on: You.com has launched an AI research agent named Advanced Research & Insights agent (ARI), designed to revolutionize market research by processing over 400 sources in minutes. ARI targets the $250-billion management consulting industry, aiming to enhance the traditionally labor-intensive process that often takes teams of analysts days or weeks to complete. The CEO of You.com argues that this technology could reshape the trillion-dollar knowledge work sector by making high-level research accessible to all.
What does it mean: What distinguishes ARI from other AI research tools is its ability to handle over 400 sources at once, roughly ten times more than competing tools. Beyond compiling text-based reports, ARI generates interactive visualizations, such as plots detailing market size and growth expectations, automatically tailored to the data it uncovers. It also offers enterprise users transparency and reliability through direct source verification: every claim is linked to its origin, so users can fact-check quickly by clicking citations that highlight the exact location in the source.
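You.com hasn’t published ARI’s internals, but the claim-level citation feature described above can be pictured as a simple data structure that ties each statement in a generated report to the exact span of the source that supports it. The following is a purely illustrative Python sketch under that assumption, not ARI’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Citation:
    source_url: str   # where the supporting document lives
    start_char: int   # exact span of the supporting passage
    end_char: int

@dataclass
class Claim:
    text: str                  # a single statement in the generated report
    citations: list[Citation]  # every claim links back to its origins

# A report is then an ordered list of claims; a UI can render each citation
# as a clickable link that highlights source_text[start_char:end_char].
report = [
    Claim(
        text="The market is expected to grow through 2030.",
        citations=[Citation("https://example.com/market-report", 1040, 1188)],
    ),
]
```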
More details:
Early adopters, including Germany’s Wort & Bild Verlag and global consulting firm APCO Worldwide, have already reported significant efficiency gains, with research times dropping from days to hours.
Looking ahead, ARI’s developers want it to evolve into a more autonomous, "agentic" tool capable of acting on its findings. While ARI currently serves enterprise clients in research-heavy fields, it enters a crowded market alongside recent releases like DeepSeek-R1, Grok-3, and Claude 3.7, where its pitch rests on speed, comprehensiveness, and verification.
Interested to learn more and get involved? Join the ARI waitlist.

📽 OpenAI's Sora video generation model is now available to ChatGPT Plus and Pro subscribers in the European Union, the UK, Switzerland, Norway, Liechtenstein, and Iceland.
💬 Meta is planning to launch a standalone app for its AI assistant, Meta AI, between April and June, along with testing a paid subscription service to compete with ChatGPT and Gemini.
❄ Snowflake is expanding its startup accelerator with an additional $200 million in funding to support early-stage startups building AI-based products on its platform.
📉 OpenAI CEO Sam Altman revealed that the company is facing a GPU shortage, which is impacting the rollout of its new GPT-4.5 model and prompting plans to develop its own AI chips and data centers.
🤖 Figure AI plans to begin alpha testing its Figure 02 humanoid robot in home settings later in 2025, accelerated by its "generalist" Vision-Language-Action (VLA) model called Helix.
💻 Microsoft has launched a dedicated macOS app for its free generative AI chatbot, Copilot, offering Mac users a native experience. However, it requires macOS 14.0 or later and an Apple M1 chip or later.
🕶 Meta has introduced Aria Gen 2, the next generation of its augmented reality research glasses, featuring an upgraded sensor suite with heart rate monitoring and voice isolation capabilities.


AI + YouTube video analysis
I want you to act as an expert YouTube video analyst. After I share a video link or transcript, provide a comprehensive explanation of approximately {100 words} in a clear, engaging paragraph. Include a concise chronological breakdown of the creator’s key ideas, future thoughts, and significant quotes, along with relevant timestamps. Focus on the core messages of the video, ensuring the explanation is both engaging and easy to follow. Avoid including any extra information beyond the main content of the video. {Link or Transcript}
Gemini 2.0 Flash’s answer
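If you want to run the prompt programmatically instead of pasting it into a chat window, a minimal sketch with the google-generativeai Python package might look like this; the API key, the transcript placeholder, and the choice of the gemini-2.0-flash model are assumptions for illustration:

```python
# Minimal sketch: fill the template's placeholders and send it to Gemini.
# Assumes the google-generativeai package and a valid API key; the transcript
# string is a stand-in for a real video link or transcript.
import google.generativeai as genai

PROMPT_TEMPLATE = (
    "I want you to act as an expert YouTube video analyst. After I share a "
    "video link or transcript, provide a comprehensive explanation of "
    "approximately {length} in a clear, engaging paragraph. Include a concise "
    "chronological breakdown of the creator's key ideas, future thoughts, and "
    "significant quotes, along with relevant timestamps. Focus on the core "
    "messages of the video, ensuring the explanation is both engaging and easy "
    "to follow. Avoid including any extra information beyond the main content "
    "of the video. {transcript}"
)

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-2.0-flash")

prompt = PROMPT_TEMPLATE.format(length="100 words", transcript="<paste transcript here>")
response = model.generate_content(prompt)
print(response.text)
```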

Epic Games - Senior AI Designer (R26021)
ZoomInfo - AI Automation Developer
Karbon - VP of Product - AI
Zscaler - AI Enablement Specialist
Thank you for staying with us, as always! If you are not subscribed, subscribe here for free to get more of these emails in your inbox! Cheers!