Hundreds of Big Technology readers have joined our premium tier for exclusive content, access to our events, and to support independent tech journalism like today’s story. Try it for 20% off in year 1, or just $8 per month.

Apple + OpenAI Math: Notebook From a Week in Silicon Valley

Thoughts and observations from a week inside (and around) Silicon Valley's tech campuses.
The scene at Apple’s WWDC this week was, in a way, emblematic of the times. The tech giant’s AI announcements were massively hyped ahead of the show. The products themselves were interesting, but not the “next iPhone” some expected. Still, the market loved it, adding hundreds of billions to Apple’s market cap in just a few days. A good part of this AI moment is built on anticipation: There’s a belief that models will keep improving, products will get better, and people and companies will buy in. We don’t know for sure where it’s all leading, but it’s heading somewhere, and that seems to be good enough. The demos should work eventually.

I spent the week in Silicon Valley visiting sources and tech companies — starting at Apple on Monday and ending at NVIDIA on Thursday — to get a sense of where we are on the continuum, who’s poised to lead, and how power is shifting. Much of what I learned will land in future stories and Big Technology Podcast episodes. So stay tuned. But this week, I’ll share what stands out in my notebook:

Room for AI models to improve

It doesn’t appear generative AI will hit the resource wall anytime soon — at least according to those closest to the work. AI research houses are focused on constraints like compute, data, and energy. But they also see room to improve the current set of models by getting better at selecting the right data, fine-tuning the models, and building new capabilities like reasoning. Meanwhile, incoming compute improvements should lead to more powerful and efficient training and inference. The next 18 months will be interesting.

Expectations might still be unrealistic

Still, the popular conversation around AI tends to portray human-level artificial intelligence as right around the corner. It’s not. The next generation of models will be impressive, but the release of ChatGPT (launched on a version of OpenAI’s GPT-3), followed soon after by the release of GPT-4, made the pace of AI development seem faster to many than it actually is. Training and fine-tuning these models takes a long time. So, while GPT-5 and its peers will be hyped and hotly anticipated, the push toward reasoning and AI agents may be more tangible in the short term than gains from sheer model size.

OpenAI might be a placeholder in Apple Intelligence

Sam Altman is a master dealmaker, but what if he’s just keeping the seat warm in the new ‘AI iPhone’ for Google? On Wednesday, Bloomberg’s Mark Gurman reported that Apple is not paying OpenAI for the use of ChatGPT in its next generation of iPhones. Apple has also been negotiating with Google for 4-5 months for a similar placement, he told me. The key question then becomes: who gets the default position? Google pays Apple $20 billion per year to be its products’ default search engine and, if it can figure out the economics, a similar (or smaller) deal could see Gemini supplant ChatGPT as Apple’s AI default in time. (Gurman talks more about this on Big Technology Podcast today.)

NVIDIA’s key ratio

I spent a wild day inside NVIDIA on Thursday, speaking with company leaders from morning till evening about the technology powering this moment. There will be plenty more to come on NVIDIA in the next few months here, but here’s one fun fact: NVIDIA has more software engineers than hardware engineers. There’s so much more to NVIDIA’s dominance than chips, starting with the fact that its software is core to training AI models, and the company’s headcount reflects it.
Apple tries small language models

The real surprise at WWDC was that Apple used many of its own models to power Apple Intelligence, not OpenAI’s. In fact, ChatGPT was mostly a plugin in the company’s demos. To make its AI experience work, Apple built a series of small language models that reside on device. These more focused, less compute-intensive AI models are good at specific tasks like proofreading and run in concert with each other (The Verge has a good writeup). To those watching, this demonstrated A) Apple can indeed make real progress on AI model development and B) these smaller models can be useful for bringing big ideas to life, even at the largest companies. Now, we’ll wait to see if those demos work in real life.

Free Masterclass: Automate Your Workflow Using AI Tools (sponsored)

This incredible 3-hour Masterclass on AI & ChatGPT (worth $399) will make you a master of 25+ AI tools, hacks, and prompting techniques to save hours per week and do more with your time. This masterclass will teach you how to:

Do AI-driven data analysis to make quick business decisions
Make stunning PPTs and write content for email, social, and more
Build AI assistants and custom bots
Solve complex problems, research 10x faster, and make your life simpler

Register now & save your seat! (valid for the next 24 hours)

Advertise on Big Technology?

Reach 170,000+ plugged-in tech readers with your company’s latest campaign, product, or thought leadership. To learn more, write [email protected] or reply to this email.

What Else I’m Reading, Etc.

Microsoft delays Recall [The Verge]
Elon gets his money [CNBC]
BeReal sold for half a million Euros [Business Insider]
Why can’t AI solve the Spelling Bee? [Engadget]
How to handle criticism in the online age [The Atlantic]
My CNBC appearance from WWDC [YouTube]

Quote Of The Week

“I don’t think we should take away from this that we shouldn’t take risks. We should take them thoughtfully. We should act with urgency. When we find new problems, we should do the extensive testing but we won’t always find everything and that just means that we respond.”
Google search head Liz Reid in an employee all-hands last week.

Number of The Week

$3.4 Billion
OpenAI annualized revenue, per internal communication from Sam Altman.

This Week on Big Technology Podcast: Apple Fails to Overreact to the AI Revolution — With M.G. Siegler

M.G. Siegler of Spyglass is back to recap Apple's big AI-themed WWDC event and look ahead to AI's broader potential moving forward. Tune in for an in-depth analysis of Apple's new AI features and what they say about the strengths and limitations of the current AI models. We cover whether the new features will lead to an iPhone upgrade cycle, the stock market's reaction, Elon Musk getting angry about the event, why OpenAI played a smaller role than many anticipated, Apple's potential robotics future, and where Apple stands after the big reveal. Hit play for a timely conversation that goes beyond the hype to examine the real-world implications of Apple's foray into AI. You can listen on Apple, Spotify, or wherever you get your podcasts.

Thanks again for reading. Please share Big Technology if you like it! And hit that Like Button to send good vibes as I head back across the country to Big Technology HQ.

My book Always Day One digs into the tech giants’ inner workings, focusing on automation and culture. I’d be thrilled if you’d give it a read. You can find it here.

Questions? News tips?
Email me by responding to this email, or by writing [email protected]. Or find me on Signal at 516-695-8680.

Thank you for reading Big Technology! Paid subscribers get our weekly column, breaking news insights from a panel of experts, monthly stories from Amazon vet Kristi Coulter, and plenty more. Please consider signing up here.
© 2024 Alex Kantrowitz