AI in the Enterprise
News, analysis, and insights for IT leaders navigating the risks and rewards of AI
June 02, 2025
What ROI? AI misfires spur CEOs to rethink adoption
FOMO still drives AI investments, but with only 25% of projects meeting expectations, chief executives may be shifting from "fail fast" to a slower, more intentional approach.
Network bloat: AI-driven data movements cause cloud overspend
A significant share of enterprise cloud network spending is wasted on preventable mistakes and manual processes, and the recent spike in AI deployments is compounding the problem.
Poisoned models in fake Alibaba SDKs show challenges of securing AI supply chains
Fake Alibaba Labs AI SDKs hosted on PyPI included PyTorch models with infostealer code inside. With tooling for detecting malicious code inside ML models still lacking, expect the technique to spread.
The tough task of making AI code production-ready
With AI introducing errors and security vulnerabilities as it writes code, humans still have a vital role in testing and evaluation. New AI-based review software aims to help solve the problem.
AI agents mean the death of the web
What Google and Microsoft see ahead is something completely different, not just on the surface but fundamentally. And it's not going to end well for the web's ability to connect, engage, and form community.
Oracle to spend $40B on Nvidia chips for OpenAI data center in Texas
Move signals OpenAI's break from Microsoft exclusivity as enterprise AI infrastructure costs surge to unprecedented levels.
Most LLMs don't pass the security sniff test
CISOs are advised to apply the same evaluation discipline to AI as they do to any other app in the enterprise.
When AI fails, who is to blame?
I don't even know why people are asking this question. Of course the user is to blame. Here's why.
AI and economic pressures reshape tech jobs amid layoffs
Tech layoffs have continued as AI adoption and economic pressures drive a major shift toward new roles and skills in the workforce.
How "dark LLMs" produce harmful outputs, despite guardrails
Study shows how easy it is to persuade most AI chatbots to generate harmful or illegal information, despite vendor guardrails.