AI Leaves the Chat Window
16 Apr 2026, 08:39 · by hunaruhub
AI news this week did not feel like a parade of flashy demos. It felt more serious than that. The main story was that AI is becoming infrastructure: something that sits inside work, security, software, procurement and decision-making. Anthropic pushed forward on secure autonomy, OpenAI stressed enterprise and agentic workflows, Meta deepened its compute strategy, Google made AI more project-based and stateful, and the Bank of England said it is actively testing AI-related financial risks.
A clear pattern is emerging across the sector. OpenAI says enterprise now makes up more than 40% of its revenue, Codex has reached 3 million weekly active users, and its APIs process more than 15 billion tokens per minute. That is not the language of a novelty product. It is the language of deployed workflow infrastructure.
At the same time, the hardware and governance layers are becoming impossible to ignore. Meta announced an expanded partnership with Broadcom to co-develop multiple generations of custom AI silicon, with an initial commitment exceeding 1 GW. Meanwhile, the Bank of England said it is running scenario analysis and simulations to test how AI could affect financial stability, especially if AI agents amplify herd behaviour in stressed markets. The message is simple: useful AI is no longer just about prompts. It is about chips, risk controls, memory, integration and oversight.
This week in AI
- Anthropic released Claude Opus 4.7 on 16 April, positioning it as a stronger model for coding, vision and long-running multi-step work, with a 1 million token context window.
- Anthropic also launched Project Glasswing on 7 April, using Claude Mythos Preview to help defend critical software with partners including AWS, Apple, Google, Microsoft, NVIDIA and JPMorganChase.
- Meta announced an expanded Broadcom partnership on 14 April to co-develop next-generation MTIA chips, part of a custom-silicon strategy that begins with more than 1 GW of deployment.
- OpenAI said enterprise now accounts for more than 40% of revenue, with Codex at 3 million weekly active users and GPT-5.4 driving record engagement in agentic workflows.
- Google introduced notebooks in Gemini on 8 April, syncing project context across Gemini and NotebookLM so users can organise files, chats and instructions around longer tasks rather than single prompts.
- A Thomson Reuters report published this month found that 40% of professionals say their organisations now use generative AI, up from 22% last year. Only 15% say their organisations use agentic AI, though another 53% are planning or considering it.
What these developments mean
The first key idea is agents. A chatbot gives an answer. An agent handles a job: it plans, calls tools, checks state, revises, and keeps going until the task is done. That is why so many announcements now emphasise coding, workflow execution and long-running tasks rather than simple Q&A.
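The plan-act-check loop described above can be sketched in a few lines. This is a toy illustration, not any vendor's agent API: the goal, planner and single "tool" are hypothetical stand-ins chosen to show the shape of the loop.

```python
# Minimal sketch of an agent loop: plan the next tool call, act,
# check state against the goal, and repeat until the task is done.
# The goal, planner and tools here are toy stand-ins.

def run_agent(goal, tools, max_steps=10):
    state = {"total": 0, "steps": []}
    for _ in range(max_steps):
        # Check: has the goal been reached? If so, stop.
        remaining = goal - state["total"]
        if remaining <= 0:
            break
        # Plan: a toy planner that adds at most 3 per step.
        call = ("add", min(remaining, 3))
        # Act: invoke the chosen tool and record what happened.
        state["total"] = tools[call[0]](state["total"], call[1])
        state["steps"].append(call)
    return state

tools = {"add": lambda total, n: total + n}
result = run_agent(7, tools)
# Reaches the goal of 7 over three tool calls (3 + 3 + 1).
```

The point of the loop is the check step: a chatbot would emit one answer and stop, while the agent keeps verifying state and issuing further tool calls until the job is actually done.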
The second key idea is persistent context. Google’s notebooks in Gemini matter because real work is not one prompt at a time. Real work has files, earlier decisions, constraints, deadlines and revisions. If an AI system forgets all of that between interactions, it will feel clever but not reliable.
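The difference persistent context makes can be shown with a small sketch. This assumes nothing about how Gemini's notebooks work internally; it just demonstrates the general pattern of a project store that survives between sessions, so each new request starts from accumulated files, decisions and constraints rather than a blank slate. All names are illustrative.

```python
# Sketch of persistent project context: a notebook saved to disk so a
# fresh session still sees earlier files, decisions and constraints.

import json
import os
import tempfile

class ProjectNotebook:
    def __init__(self, path):
        self.path = path
        self.data = {"files": [], "decisions": [], "constraints": []}
        if os.path.exists(path):
            # Reload context recorded by earlier sessions.
            with open(path) as f:
                self.data = json.load(f)

    def record(self, kind, item):
        self.data[kind].append(item)
        with open(self.path, "w") as f:
            json.dump(self.data, f)  # persist immediately

    def context_prompt(self):
        # Prepend stored context to every new request.
        return "\n".join(
            f"{kind}: {item}"
            for kind, items in self.data.items()
            for item in items
        )

path = os.path.join(tempfile.mkdtemp(), "notebook.json")
nb = ProjectNotebook(path)
nb.record("constraints", "launch deadline is Friday")
nb2 = ProjectNotebook(path)  # a new session still sees the constraint
```

Without the reload step, `nb2` would start empty and the system would feel clever but forgetful, which is exactly the failure mode described above.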
The third key idea is risk-aware deployment. Anthropic’s Glasswing and the Bank of England’s testing both point to the same lesson: as models get better, the cost of naïve deployment rises. Stronger systems need stronger controls.
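One concrete form "stronger controls" can take is a policy layer between an agent and its tools: an allowlist of permitted actions, with human sign-off required before high-risk ones execute. The tool names and policy below are purely illustrative and do not describe Glasswing or any real system.

```python
# Hedged sketch of a control layer for agent tool use: allowlist the
# tools an agent may touch, and gate high-risk actions behind human
# approval instead of autonomous execution. Names are illustrative.

HIGH_RISK = {"transfer_funds", "delete_records"}
ALLOWED = {"read_report", "draft_email", "transfer_funds"}

def authorize(tool, approved_by_human=False):
    if tool not in ALLOWED:
        return "denied"          # not on the allowlist at all
    if tool in HIGH_RISK and not approved_by_human:
        return "needs_approval"  # escalate rather than act autonomously
    return "allowed"
```

The design choice worth noting is that the default for a high-risk tool is escalation, not refusal: the agent stays useful, but the irreversible step requires oversight, which is the trade-off both Glasswing and the Bank of England's testing are circling.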