AI Weekly #09/2026: Trump's Ban — When Government Decides Which AI Is Allowed
March 1, 2026 | Week 9
TL;DR
- Anthropic Ban: Trump ordered all federal agencies to stop using Claude — because Anthropic refused to support autonomous weapons systems and mass surveillance. OpenAI signed the Pentagon contract immediately afterwards [1]
- Block Mass Layoffs: Jack Dorsey cuts 40% of the workforce (~4,000 jobs) and openly cites AI as the reason — economists are debating whether this is genuine AI-driven job displacement or “AI-washing” [3]
- Microsoft Copilot Tasks: AI moves from answering to acting — emails, apartment searches, flight bookings, job applications — available as Research Preview since February 26 [6]
- IBM X-Force: Attackers now need only 72 minutes from first system access to full compromise, with AI-powered vulnerability exploitation as a key driver [7]
Story of the Week: Trump Blacklists Anthropic — OpenAI Takes Over Pentagon Contract
For the first time in the history of the AI industry, a leading company has been barred from the federal market by government decree [1]. President Trump ordered all US federal agencies to stop using Anthropic technology — a direct consequence of Anthropic’s refusal to make Claude available for “all lawful purposes,” including autonomous weapons systems and mass surveillance of US citizens.
Defense Secretary Hegseth was precise and politically effective in his language: according to media reports, he called Anthropic a “supply chain risk” to the US military [1]. This classification is not mere rhetoric — it has concrete consequences for all federal agencies using Claude through third-party providers like Palantir, and it sends a signal to international government customers. Any company deemed a supply chain risk in the US government market faces a significantly harder path in sensitive public procurement worldwide.
From a government perspective, however, such an assessment is not without foundation: states have legitimate security interests in critical infrastructure and military technology — and the right to independently set the terms for deploying AI systems in sensitive areas. The question the industry must ask itself is not whether governments may exercise this control, but by what criteria and processes they do so.
The industry’s response was nonetheless remarkable. Employees from Google and OpenAI — direct competitors of Anthropic — signed an open letter in solidarity with the banned company [2]. A cross-industry signal that is hard to overlook: when employees of competing companies jointly position themselves against government interference, the battle lines shift. It is no longer just about market share, but about the fundamental question of who decides the ethical boundaries of AI systems — private labs or governments.
OpenAI signed the Pentagon contract shortly after the Anthropic ban [1]. That is the real paradox of this story. OpenAI, publicly perceived as a counterpart to Anthropic’s “safety-first” approach, now receives precisely the contract Anthropic lost through its refusal. Whether OpenAI simply better matched the Pentagon’s requirements or whether it benefited from the situation cannot be definitively assessed from the outside. The market rewards availability in the short term — whether OpenAI thereby risks the long-term trust of developers and enterprise customers who preferred Anthropic for its safety reputation remains to be seen.
Anthropic’s counterattack is legal: the company called the ban “legally untenable” and threatened lawsuits [1]. This opens a new front in the AI governance discourse. When a private company uses its terms of service as a defensive line against government pressure, the constitutional question arises: can a government mandate for what purposes a private AI model must be made available? The answer will likely work its way through the courts for months or years — and the entire industry is watching.
What remains is a precedent with global significance: the first time a government has not regulated a leading AI company, but banned it from its own market — because it refused to abandon its ethical principles [1]. Whether this goes down in history as protecting legitimate state security interests or as a dangerous normalization of political AI control depends on who shapes this narrative in the coming weeks.
More Top Stories
Block Lays Off 40% — Jack Dorsey Says Openly: It’s the AI
Block, the fintech company behind Square and CashApp, cut around 4,000 positions this week — from 10,000 to 6,000 employees [3]. What distinguishes this case from other mass layoffs: CEO Jack Dorsey reportedly left no room for interpretation: increased AI productivity makes these positions redundant. He is said to have predicted that “most companies will take this step in the near future as well” [3]. No talk of market conditions, restructuring, or strategic realignment. Just AI.
The response from economists is divided [4]. One side sees Block as the opening act of systemic AI-driven job displacement — finally a company that openly says what others obscure. The other side warns of “AI-washing”: layoffs that were already planned for financial or strategic reasons and subsequently justified with AI productivity gains. A Forrester analysis this week lends substance to the first camp: 40% of all surveyed employers plan AI-related headcount reductions in the next 18 months [4] — a different metric than Block’s own layoff rate, but a complementary data point on the breadth of the trend.
The real signal is not the number — a 40% headcount reduction is painful, but not unusual in restructurings. The signal is the openness. With his communication, Dorsey normalizes a discourse that until now was wrapped in PR language. When CEOs begin marketing AI-driven personnel reductions as inevitable business logic — not as extraordinary measures — the societal expectation horizon shifts. Every company that cuts jobs next can now cite this precedent [4].
Microsoft Copilot Tasks: The To-Do List That Completes Itself
Microsoft launched “Copilot Tasks” as a Research Preview on February 26, 2026 [6] — and the accompanying framing could hardly be more ambitious: “AI that doesn’t just talk to you, but works for you.” The idea: users describe tasks in natural language, and Copilot plans and executes them autonomously. A to-do list that works through itself.
The breadth of demonstrated use cases is impressive: email management, document creation, apartment searching, flight coordination, price monitoring, job applications [6]. What distinguishes this from earlier agent announcements is the safety concept: Copilot asks for human confirmation before executing actions with financial or communicative weight — spending money, sending messages. Microsoft’s own formulation: not autopilot, but copilot [6]. This is no coincidence. After months of public discussion about uncontrolled AI agents, Microsoft deliberately chooses a more conservative trust model.
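How such a confirmation gate can work is easy to sketch. The following Python snippet is a minimal illustration of the general pattern, not Microsoft’s implementation: the Action type and the example plan are invented for this sketch. The core idea is simply that any action with financial or communicative weight blocks until a human explicitly approves it.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    # Actions that spend money or send messages require explicit sign-off.
    requires_confirmation: bool

def execute_plan(actions: list[Action]) -> None:
    """Run an agent's planned actions, pausing for human confirmation
    on anything with financial or communicative weight."""
    for action in actions:
        if action.requires_confirmation:
            answer = input(f"Confirm: {action.description}? [y/N] ")
            if answer.strip().lower() != "y":
                print(f"Skipped: {action.description}")
                continue
        print(f"Executing: {action.description}")

# Hypothetical plan: research runs unattended, the purchase does not.
execute_plan([
    Action("Compare flight prices for March 15", requires_confirmation=False),
    Action("Book the cheapest flight (spends money)", requires_confirmation=True),
])
```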
The strategic significance lies in the target audience: Copilot Tasks is not designed for developers or enterprise power users — it is a consumer product for millions of Windows and Microsoft 365 users [6]. This brings agent-based AI into the mainstream, long before most people know what an “AI agent” even is. That is Google’s Gemini-for-Android strategy mirrored at the Microsoft level (see Quick Hits): both tech giants are betting that AI agents are the next interface paradigm — and want to shape the habits of hundreds of millions of users before a third party does.
Quick Hits
- AI Cyberattacks +44% [7] — IBM X-Force Threat Index 2026: attacks on publicly accessible systems rose 44%, driven by AI-powered vulnerability detection. Basic security gaps remain the biggest problem — AI just makes them dramatically faster to exploit. Attack speed has structurally changed: what used to take hours now happens in minutes.
- DeepSeek V4 Incoming — Nasdaq on Alert [8] — Analysts warn of market turbulence similar to the DeepSeek-V3 shock of 2025, when US AI stocks fell sharply. Another powerful, affordable Chinese model could once again call the AI investment hype into question — and renew doubts about whether western frontier labs can hold their price points.
- Gemini on Android: AI Orders Food and Detects Scams [9] — Google expands Gemini on Android with autonomous app navigation through Uber, DoorDash, Instacart, and Starbucks, plus real-time fraud detection for calls and text messages. Initially available on Pixel 10 and Samsung Galaxy S26 in the US and Korea. AI agents are arriving on smartphones — for everyone.
Tool of the Week: Wolfram Language — Computation-Augmented Generation (CAG)
This week, Stephen Wolfram addresses the fundamental weakness of LLMs: “LLMs don’t — and can’t — do everything.” [10] They can write convincingly, but not calculate precisely. His solution is Computation-Augmented Generation (CAG) — a structural complement to the familiar RAG concept. Instead of static documents, dynamically computed content from the Wolfram Language is injected directly into the LLM content stream. The result: an LLM that can draw on 40 years of precise mathematical computation — on demand, transparently, verifiably.
Wolfram presents three concrete integration methods [10]: The MCP Service enables direct invocation in MCP-compatible systems like Claude or ChatGPT. The Agent One API is a drop-in replacement for standard LLM APIs, where Wolfram computation is transparently integrated — without API consumers needing to change their architecture. The CAG Component APIs offer granular control for custom integrations and proprietary AI pipelines.
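What the drop-in idea means in practice can be sketched in a few lines. The snippet below is an illustration under assumptions, not documented Wolfram API usage: it reuses an OpenAI-style client and merely swaps the endpoint, which is what a “drop-in replacement for standard LLM APIs” would imply. The base URL and model name are hypothetical placeholders.

```python
# A minimal sketch of the "drop-in replacement" idea: point an existing
# OpenAI-style client at a CAG-enabled endpoint instead of the stock API.
# The base_url and model name below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example.wolfram-cag.invalid/v1",  # hypothetical endpoint
    api_key="YOUR_KEY",
)

response = client.chat.completions.create(
    model="agent-one",  # hypothetical model identifier
    messages=[{
        "role": "user",
        # A numerically precise question an LLM alone is likely to fumble;
        # under CAG, the computation would be delegated to Wolfram Language.
        "content": "What is the 500th prime, and is 2^61 - 1 prime?",
    }],
)
print(response.choices[0].message.content)
```

The point of the pattern: because the client, request shape, and response shape stay identical, API consumers keep their architecture and only the endpoint changes — which is exactly the appeal Wolfram claims for Agent One.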
Why this matters: CAG does not solve the hallucination problem — but it gives LLMs a secure, verifiable foundation for all tasks that require precise computation: mathematics, statistics, physics simulations, database queries [10]. Wolfram articulates the potential: “The tighter the integration between LLMs and Wolfram’s foundation tool, the more powerful the combination becomes.” For teams building reliable AI systems for numerically critical applications, CAG is the missing bridge between LLM intuition and mathematical certainty — and one of the few architectural advances this week that goes beyond product announcements.
Fail of the Week: 24,000 Fake Accounts, 16 Million Conversations — How Three Chinese Companies Allegedly Plundered Claude
Anthropic is leveling serious allegations against three Chinese AI companies: DeepSeek, Minimax, and Moonshot AI are said to have systematically created 24,000 fraudulent accounts and retrieved over 16 million conversations with Claude [5] — with the sole goal of training their own models through “distillation.” Distillation means: the model learns from the outputs of another model, without needing direct access to its weights or training data.
Note: The allegations are based on a CNN report from February 24, 2026 [5], which could not be fully verified due to a paywall. These are Anthropic’s accusations — no public statement from the named companies is known at this time.
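For readers unfamiliar with the technique: in its generic form, API-based distillation is nothing more exotic than harvesting prompt/response pairs from a teacher model and using them as supervised fine-tuning data for a student. The sketch below illustrates only this general pattern — the stubbed query_teacher function and the file name are invented for the example and describe no specific pipeline from the report.

```python
# Generic sketch of output-based distillation: collect (prompt, response)
# pairs from a teacher model and store them as fine-tuning examples for a
# student model. query_teacher is a stub for any chat-style API call.
import json

def query_teacher(prompt: str) -> str:
    """Stub standing in for a teacher-model API call."""
    return f"<teacher answer to: {prompt}>"

prompts = [
    "Explain the difference between TCP and UDP.",
    "Summarize the causes of the 2008 financial crisis.",
]

# Each harvested pair becomes one training example: the student is later
# fine-tuned to map prompt -> teacher response, without ever seeing the
# teacher's weights or training data.
with open("distillation_dataset.jsonl", "w") as f:
    for prompt in prompts:
        example = {"prompt": prompt, "completion": query_teacher(prompt)}
        f.write(json.dumps(example) + "\n")
```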
The incident is multi-dimensional [5]. First: according to reports, Anthropic had no way to detect this process in real time. 24,000 fake accounts, 16 million requests — and apparently no automated detection that would have stopped the pattern in time. Should this be confirmed, it would not be an isolated failure, but a structural security problem for every API provider. Second: the timing is particularly unfavorable for Anthropic — coinciding with the government ban, in the middle of the public discussion about the company as a reliable AI partner.
The actual problem extends far beyond Anthropic [5]: every company selling access to a powerful model is potentially the same attack target. OpenAI, Google, Mistral — all offer APIs, all carry the same structural risk. Industrially organized distillation on this scale would be a warning signal for the entire industry. The question the industry must now answer: how do you detect when an API is no longer being used as a usage interface, but is being abused as a training pipeline — and what are the technical and legal consequences when you detect it too late?
Number of the Week: 72 Minutes
Source: IBM X-Force Threat Index 2026 [7]
That is how long attackers need on average today from first system access to full compromise. In 2023, it was still several hours — often days. AI-powered automation has structurally shifted attack speed.
72 minutes leaves no time for manual incident response [7]. A security team receiving an alert typically needs 30 to 90 minutes just for triage, validation, and initial escalation. Organizations that don’t also deploy AI in their defense — for automatic alert filtering, anomaly detection, and initial response — are structurally disadvantaged against attackers who have been using the same technology for months.
The X-Force conclusion is sobering: fundamental security gaps remain the biggest problem — not the attackers’ AI [7]. Unpatched systems, weak authentication, unsecured APIs — AI only accelerates what was always dangerous. The equation doesn’t change because of AI. But the time window for response is shrinking dramatically — and that changes everything.
Reading List
📖 Trump blacklists Anthropic from federal use, OpenAI wins Pentagon contract — CNBC with all details on the government ban, Hegseth’s supply chain risk classification, and OpenAI’s immediate contract takeover — the article that defines this week | 8 min
📖 Making Wolfram Tech Available as a Foundation Tool for LLM Systems — Stephen Wolfram’s own deep dive into CAG, all three integration methods, and the architectural logic behind them — required reading for anyone building reliable AI systems for numerical applications | 15 min
📖 Are Dorsey’s giant job cuts the start of an AI jobs apocalypse? — CNBC assembles economists on both sides of the debate: genuine structural change or AI-washing? With Forrester data and forecasts for 2026 | 7 min
Next Week: Legal Escalation and DeepSeek V4
The coming days will bring several ongoing developments into sharper focus:
- Anthropic vs. Government: The threatened legal steps against the federal ban are taking shape — initial lawsuits or out-of-court settlements are possible. Legal assessments of the decree’s constitutionality will dominate the discussion.
- DeepSeek V4 Release: Analysts expect the release in the coming weeks. The question is not whether, but how severe the market drop in US AI stocks will be — and whether the model can actually compete with western frontier models.
- Microsoft Copilot Tasks User Experience: First real reports from the Research Preview will show whether the “to-do list that completes itself” promise holds up in practice — or whether the safety confirmations interrupt the flow too heavily.
- Block Imitators: Whether other CEOs adopt Dorsey’s open AI job-cut communication will be an early indicator of whether this openness becomes the new norm or remains an outlier.
Behind the AI: Metrics for This Issue
- Stories reviewed: 15 (from 10 verified primary sources; remaining 5 stories from supplementary staging research without individual source numbers)
- Final selection: 1 Story of the Week + 2 Top Stories + 3 Quick Hits + 1 Tool + 1 Fail + 1 Number of the Week
- Time period: 2026-02-23 to 2026-03-01
- Primary sources: 10 (CNBC, CNN, Engadget, Microsoft, IBM, TheAIInsider, Wolfram)
- WebFetch status: Microsoft & Wolfram fully loaded; CNBC/Engadget/CNN paywall-blocked — key statements from verified staging data (01-sources.md)
Story selection criteria:
- ✅ AI Governance & Policy (Anthropic government ban — historic precedent)
- ✅ AI & Work (Block mass layoffs — structural change debate)
- ✅ Agentic AI in consumer space (Microsoft Copilot Tasks)
- ✅ Security (IBM X-Force Threat Index, Anthropic API abuse through distillation)
- ✅ Tool Innovation (Wolfram CAG — architectural advance beyond product hype)
Footer
AI Weekly is produced by BKS-Lab.
Subscribe to the newsletter: bks-lab.com/newsletter
Contact: ai@bks-lab.com
Image rights: Hero image is an AI-generated illustration. Created with a licensed AI image platform in accordance with BKS-Lab media policy. For licensing inquiries: ai@bks-lab.com
Sources:
[1] Trump blacklists Anthropic from federal use, OpenAI wins Pentagon contract (CNBC, 2026-02-27)
[2] Google and OpenAI employees sign open letter in solidarity with Anthropic (Engadget, 2026-02-27)
[3] Block layoffs AI Jack Dorsey (CNN, 2026-02-26)
[4] Are Dorsey’s giant job cuts the start of an AI jobs apocalypse? (CNBC, 2026-02-27)
[5] Anthropic Chinese AI distillation (CNN, 2026-02-24)
[6] Copilot Tasks: From Answers to Actions (Microsoft, 2026-02-26)
[7] IBM 2026 X-Force Threat Index: AI-Driven Attacks Are Escalating (IBM, 2026-02-25)
[8] DeepSeek to release new AI model — a rough period for Nasdaq stocks could follow (CNBC, 2026-02-23)
[9] Google expands Gemini AI on Android with task automation, scam detection (TheAIInsider, 2026-02-26)
[10] Making Wolfram Tech Available as a Foundation Tool for LLM Systems (Wolfram, 2026-02-23)
AI-assisted | All facts supported by primary sources