AI-generated illustration about AI funding and geopolitics
Image generated with Pollinations.ai
Weekly Briefing 11 min read

AI Weekly #7/2026: $380B Valuation While Software Stocks Plunge

Sunday, February 15, 2026

This article was researched and written with AI



Audio Version

15:38 min | Download MP3


TL;DR

  • $380 Billion: Anthropic closes $30B Series G – second-largest private tech funding of all time [1]
  • AI Geopolitics Escalate: OpenAI officially accuses DeepSeek before US lawmakers of systematic model distillation [3]
  • Healthcare AI Crisis: ECRI declares chatbot misuse the #1 patient safety risk for 2026 – just as FDA loosens requirements [4]
  • Chinese Offensive: DeepSeek V4, Alibaba Qwen 3.5, ByteDance, and Zhipu prepare coordinated launches during Lunar New Year [5]

Story of the Week: Anthropic Raises $30B at $380B Valuation – While Software Stocks Crash

Founded in 2021, Anthropic is now valued at $380 billion. The Series G funding round brought in $30 billion in fresh capital – the second-largest private tech financing in history, surpassed only by OpenAI’s record round [1]. For perspective: this valuation exceeds the market capitalizations of BMW, Airbus, and Siemens combined – for a company primarily selling a chat model.

Yet while foundation model labs raise billions, the stock market tells a different story. In the same week, stocks fell sharply across sectors – not just software, but also financials, office real estate, trucking, and logistics [8]. The reason: investors fear that AI agents will make traditional enterprise software obsolete.

This paradox reveals the fundamental capital redistribution currently underway. Money is flowing away from companies that use software toward those that build foundation models. Markets aren’t just pricing in that AI is disruptive – they’re already evaluating who the winners and losers will be.

What will Anthropic do with $30 billion? The company remains silent on details, but the timing signals are clear: While OpenAI pushes GPT-5.x and DeepSeek V4 launches next week, Claude needs massive capital for the compute race. Foundation models have become a capital-intensive industry – those who can’t burn billions no longer compete.

The $380B valuation isn’t an assessment of Anthropic’s current profitability – it’s a bet that Claude will be one of the few survivors in the foundation model consolidation process. In a world with 5-7 Tier-1 labs, $380B is the price for a seat at the table.


Further Top Stories

Google DeepMind: Gemini Deep Think Achieves IMO Gold Standard and Solves Open Erdős Problems

Google DeepMind has introduced Gemini Deep Think – a reasoning model that achieves the gold medal standard of the International Mathematics Olympiad (IMO) [2]. Even more remarkable: The new “Aletheia” research agent feature solved four open problems from the Erdős Conjectures Database – a collection of mathematical conjectures, some decades old.

This is more than a benchmark win. DeepMind explicitly positions Gemini Deep Think not as a “tool for mathematicians” but as an active scientific research partner. The model also achieved competition-level performance in the ICPC (International Collegiate Programming Contest), demonstrating that the capability applies not only to formal mathematics but also to algorithmic problem-solving.

The four solved Erdős problems are historically significant. Paul Erdős was one of the most prolific mathematicians of the 20th century – the database of his conjectures contains hundreds of open problems that mathematicians worldwide still work on. That an AI system now independently finds solutions could mark a turning point: AI is evolving from assistant to co-researcher.

For Scientific AI, this represents a narrative shift. Previous systems (AlphaFold, AlphaProof) solved specific scientific problems. Gemini Deep Think demonstrates generalized reasoning capability across multiple domains – mathematics, programming, scientific hypotheses. This is the foundation for AI as a “general scientific discoverer,” not just a specialized tool.


OpenAI Officially Accuses DeepSeek of Model Distillation – AI Becomes Geopolitical

OpenAI has officially warned US lawmakers about DeepSeek, accusing the Chinese lab of “unfair and increasingly sophisticated methods” [3]. The specific accusation: DeepSeek systematically extracts output from US models (GPT-4, Claude, Gemini) to train the R1 successor DeepSeek V4 – a process called “model distillation.”

This is the first direct IP accusation between Tier-1 labs across national borders. Distillation is technically legal – it relies only on model outputs obtained through inference, not stolen weights – but OpenAI argues that systematically using billions of API calls to train a competing model amounts to unfair competition.
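The mechanics are straightforward to sketch. In classic knowledge distillation, a student model is trained to minimize the divergence between its output distribution and the teacher’s temperature-softened distribution; API-based distillation works from sampled outputs or returned logprobs rather than raw teacher logits. A minimal illustration of the loss term, with hypothetical toy logits (all numbers here are made up for demonstration):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    In API-based distillation the teacher distribution would be
    estimated from sampled outputs or logprobs returned by the API,
    since raw logits are not exposed.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]  # hypothetical teacher logits for 3 tokens
student = [1.5, 1.2, 0.3]  # hypothetical student logits
loss = distillation_loss(teacher, student)  # small positive value
```

The loss is zero when student and teacher agree exactly, which is why large volumes of teacher output make the student converge toward the teacher’s behavior.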

The timing is strategic. According to Bloomberg, DeepSeek V4 is expected next week during Lunar New Year [5] – exactly like a year ago when DeepSeek-R1 caused a stir. OpenAI is proactively trying to build political pressure before the launch occurs.

The geopolitical dimension is new. Previous AI competition operated at the benchmark level – now it’s becoming a matter for lawmakers. OpenAI frames DeepSeek not just as a competitor but as a state-backed actor systematically exploiting US IP. This is classic geopolitics: Technical competition becomes state competition.

For the industry, this means: Model distillation will become a regulatory issue. If US labs successfully argue that Chinese distillation is unfair, API restrictions could follow – which in turn forces Chinese labs toward more autarky (their own infrastructure). The AI race is fragmenting regionally.


Healthcare AI Crisis: Chatbot Misuse Becomes #1 Patient Safety Risk – Just as FDA Loosens Requirements

ECRI, a leading nonprofit patient safety organization, has declared AI chatbot misuse the #1 health tech hazard for 2026 [4]. The problem: ChatGPT, Gemini, and Microsoft Copilot are being deployed in healthcare settings without clinical oversight – doctors and nurses use them for diagnoses, medication recommendations, and patient counseling.

The timing is problematic. In January 2026, the FDA loosened device requirements for Clinical Decision Support Tools. Result: AI tools can now be deployed in clinics without FDA vetting. At exactly this moment, ECRI warns of the risks.

The core problems according to ECRI:

  • Hallucinations: ChatGPT invents medical “facts” that sound plausible
  • No clinical validation: GPT-4 is not a medical device – it was never tested for healthcare use
  • Lack of liability: If a patient is harmed by AI misadvice – who is responsible?

This isn’t a theoretical risk. Healthcare professionals already report cases where colleagues adopted ChatGPT output verbatim – without fact-checking. The FDA loosening exacerbates the problem: Tools previously considered “experimental” are now deemed “acceptable.”

At the same time, there are positive examples of AI in healthcare: Radiology assistants helping doctors with diagnoses, or administrative copilots reducing documentation burden. The difference lies in validation and supervision – clinically tested tools with human oversight work, general-purpose chatbots without oversight are risky.

This shows the central trade-off: Not slowing innovation vs. ensuring patient safety. The FDA tries to balance both, but the current approach – loose requirements for AI tools – may have come too early. Healthcare is the first safety-critical use case where general-purpose AI is being mass-deployed – without domain-specific safeguards.


Quick Hits

  • OpenAI Tests Ads in ChatGPT Free Tier [6] – Despite billion-dollar fundings, all labs need sustainable revenue streams. ChatGPT Free gets advertising, partners pay premium rates for placement.

  • OpenAI Retired GPT-4o, GPT-4.1, and o4-mini from ChatGPT [7] – Aggressive model lifecycle: GPT-4o removed from ChatGPT after less than a year. API remains, but ChatGPT pushes users to GPT-5.x series. Signal: OpenAI wants to force rapid adoption of newest models.

  • Software Stocks Fall Due to AI Agent Disruption Fears [8] – Stocks across sectors exposed to AI disruption – software, financials, office real estate, trucking, logistics – fall sharply. Investors fear AI agents will replace traditional enterprise software. Same week Anthropic receives $30B.


Tool of the Week: Mistral Voxtral Transcribe 2 – On-Device Speech-to-Text for 1/5 of Cloud Costs

Mistral AI has launched Voxtral Transcribe 2 – an open-weights speech-to-text model that’s more practical and cheaper than all cloud alternatives [9].

Two Variants:

  • Voxtral Mini: 13 languages, speaker diarization, $0.003/min (vs. GPT-4o mini: ~$0.015/min)
  • Voxtral Realtime: Sub-200ms latency for live transcription
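At the quoted per-minute rates, the cost gap compounds quickly at production volume. A back-of-the-envelope comparison – only the two rates come from the announcement; the monthly volume is a hypothetical example workload:

```python
# Rates as quoted in the article ($ per minute of audio).
VOXTRAL_MINI_RATE = 0.003
GPT4O_MINI_RATE = 0.015  # approximate cloud comparison rate

# Hypothetical workload: 100,000 minutes (~1,667 hours) per month.
minutes_per_month = 100_000

voxtral_cost = minutes_per_month * VOXTRAL_MINI_RATE  # ≈ $300
cloud_cost = minutes_per_month * GPT4O_MINI_RATE      # ≈ $1,500
savings = cloud_cost - voxtral_cost                   # ≈ $1,200/month
```

And this is API pricing only – with open weights, on-device deployment removes the per-minute cost entirely after the initial hardware investment.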

What Makes It Special:

  • Open-Weights: Apache 2.0 License – can run on-device (privacy!)
  • Performance: Outperforms GPT-4o mini at 1/5 the cost
  • Practical: Not just a benchmark win – real cost savings for production use

Why Relevant: This is the counter-trend to cloud-only AI. While OpenAI and Anthropic burn billions for larger models, Mistral shows: Open source can lead with practical tools. On-device processing is privacy-first (GDPR-compliant without cloud), cheaper (no API costs after deployment), and low-latency (no network round-trip).

Voxtral Transcribe 2 isn’t a research project – it’s a production-ready tool that companies can deploy today. This is the AI story lost beneath the funding hype: Practical open-source tools solving real problems.


Fail of the Week: FDA Loosens Clinical AI Requirements Just as ECRI Warns of Chatbot Misuse

In January 2026, the FDA loosened device requirements for Clinical Decision Support Tools. In the same month, ECRI warns of AI chatbot misuse as the #1 health tech hazard [4].

Timing Paradox:

  • FDA says: “AI tools need less vetting”
  • ECRI says: “AI tools without vetting are the greatest patient safety threat”
  • Result: ChatGPT/Gemini deployed in clinics without FDA check

This shows the disconnect between regulation and safety reality. Policy-makers want to “not slow innovation,” while patient safety organizations warn of the risks. The result is tension: AI tools can be used in healthcare without comprehensive validation – with potential for liability and oversight problems.

The central trade-off: Too strict regulation slows innovation and prevents positive use cases (diagnostic assistants, administrative copilots). Too loose regulation enables risky deployments without clinical validation. The FDA tries to find balance – but the current approach may be too permissive for safety-critical settings.


Number of the Week: $380 Billion

Anthropic’s post-money valuation after $30B Series G [1].

Why Relevant:

  • Second-largest private tech valuation after OpenAI
  • More than the market capitalizations of BMW, Airbus, and Siemens combined
  • For a company primarily selling a chat model

The Contrast: The same week Anthropic is valued at $380B, software stocks fall on fears of AI disruption [8]. This illustrates the capital redistribution: away from traditional software, toward foundation model labs.

$380B isn’t a valuation of profit – it’s the price for a seat in the Tier-1 AI lab oligopoly.


Reading List

📖 Gemini Deep Think: Mathematical Discovery – Google DeepMind explains how Aletheia Research Agent solved open Erdős problems | 8 min

📖 OpenAI vs DeepSeek: Model Distillation Accusations – Bloomberg reports on the first geopolitical IP accusation between AI labs | 6 min

📖 ECRI Health Tech Hazard Report 2026 – Why AI chatbots are the biggest patient safety risk | 5 min


Next Week: Chinese AI Launches & DeepSeek V4

Next week begins Lunar New Year – and with it the coordinated offensive of Chinese AI labs. According to the South China Morning Post (SCMP), a Hong Kong-based newspaper, several major launches are expected [5]:

  • DeepSeek V4: Successor to R1, context window 128K → 1M tokens
  • Alibaba Qwen 3.5: Flagship update
  • ByteDance & Zhipu GLM-5: Further major launches

This is no coincidence. A year after DeepSeek’s first breakthrough, Lunar New Year is becoming the “AI launch window” – similar to Apple’s September events. It shows the professionalism and coordination of the Chinese AI sector.

We’re tracking the launches and reporting next week.


Behind the AI: Metrics for This Issue

  • Stories analyzed: 20 (from 180+ RSS sources)
  • Final selection: 5 top stories + 3 quick hits
  • Period: 2026-02-08 to 2026-02-15
  • Diversity: sources from 6 countries (incl. USA, UK, China, France), 8 categories
  • Sources: 9 primary sources, all <14 days old

Story Selection Criteria: ✅ Tier-1 Labs (Anthropic, Google DeepMind, OpenAI) ✅ Open Source (Mistral) ✅ Geopolitics (USA vs. China) ✅ Regulation (Healthcare AI + FDA) ✅ Business Impact (Funding, Stock Market)


AI Weekly is produced by BKS-Lab.

Subscribe to newsletter: bks-lab.com/newsletter

Contact: ai@bks-lab.com


Legal Notices:

  • This newsletter is for informational purposes only
  • No financial, investment, legal, or health advice
  • All information without guarantee, as of: 2026-02-15
  • Sources are indicated by hyperlinks

Sources:

[1] Anthropic raises $30 billion in Series G (2026-02-12)

[2] Accelerating mathematical and scientific discovery with Gemini Deep Think (2026-02-11)

[3] OpenAI Accuses DeepSeek of Distilling US Models to Gain an Edge (2026-02-12)

[4] Misuse of AI Chatbots in Health Care Tops 2026 Health Tech Hazard Report (2026-02-12)

[5] China’s AI arms race sees sector brace for major flagship model launch week (2026-02-12)

[6] Testing ads in ChatGPT (2026-02-12)

[7] Retiring GPT-4o and older models (2026-02-13)

[8] AI disruption fears slam new corners of the market (2026-02-12)

[9] Mistral Voxtral Transcribe 2 (2026-02-04)


Created with AI assistance | All facts verified through primary sources