On Apple Intelligence

June 11, 2024

Apple has finally revealed the AI features it intends to ship in the next versions of iOS, iPadOS, and macOS. The features are fascinating, and if they work as advertised, they will be game-changers, clearly boosting productivity for users. However, we won't know until we get our hands on the new devices. It's wise to remain a bit skeptical of beautiful demos, especially when it comes to AI. The reality is often less shiny, as recently demonstrated by the downfall of the Humane AI Pin and the Rabbit R1.

Essentially, what Apple plans to do is add AI features to almost all of its productivity apps. Imagine AI rewriting your emails directly in your mail app. Consider AI editing your images, or an AI assistant that takes your instructions in natural language and translates them into actions, such as 'send the latest draft email to Franck' or 'increase the brightness of the image I took on Monday near the Eiffel Tower.' These use cases are intriguing and would definitely change the way we interact with our devices, making us more productive. But, as always, the devil is in the details. No one knows how this will work in practice, especially given the tendency of AI models to not do what we expect, or to do it poorly (e.g., hallucinations, bias). Human-in-the-loop workflows are likely to be the most successful, at least initially. We will see.
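To make this concrete, here is a minimal sketch, in Swift with made-up types, of how such an assistant might turn a natural-language command into a structured action while keeping a human in the loop. The `AssistantAction` schema and the `parse` stub are my own assumptions for illustration, not anything Apple has described; a real system would call a model where the stub is.

```swift
import Foundation

// Hypothetical structured action an assistant model might produce after
// parsing a natural-language command. The real schema is unknown.
enum AssistantAction {
    case sendEmail(draftID: String, recipient: String)
    case adjustImageBrightness(imageID: String, delta: Double)
}

// Stand-in for the model call: a real system would ask an on-device or
// cloud model to map the utterance to an action (or fail). Hard-coded here.
func parse(_ utterance: String) -> AssistantAction? {
    if utterance.lowercased().contains("send the latest draft") {
        return .sendEmail(draftID: "latest", recipient: "Franck")
    }
    return nil
}

// Human-in-the-loop: describe the proposed action so the user can approve
// it before anything irreversible happens.
func describe(_ action: AssistantAction) -> String {
    switch action {
    case let .sendEmail(draftID, recipient):
        return "Send draft '\(draftID)' to \(recipient)?"
    case let .adjustImageBrightness(imageID, delta):
        return "Increase brightness of image '\(imageID)' by \(delta)?"
    }
}

let command = "Send the latest draft email to Franck"
if let action = parse(command) {
    print(describe(action))   // e.g. "Send draft 'latest' to Franck?"
    // execute(action) would run only after the user explicitly approves.
} else {
    print("Sorry, I couldn't understand that request.")
}
```

The confirmation step is the human-in-the-loop part: the assistant proposes, the user approves, and only then does anything actually happen.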

Google made similar announcements a few months ago, and Microsoft is also developing comparable features. It seems that the major tech companies are all converging toward the same vision of the future: AI integrated into everything, for everyone. This prospect is both a bit scary and exciting. The future is now, and it is driven by AI. Privacy concerns are inevitable, especially since the most advanced models are too large to run on edge devices. They will have to operate in the cloud, which means your data will need to be sent there. How will people react to this? Will they be willing to trade privacy for productivity? Will assurances from companies like Apple or OpenAI that they won't use private data to train models be sufficient? We will see.

What are the key takeaways from this? First, the trend is to incorporate AI into every aspect of technology, making AI more often a feature than a standalone product. This means that the big winners will primarily be NVIDIA for the chips, OpenAI for the models, and other GPU and model providers, along with the major tech companies for the applications. This does not imply that some AI-powered SaaS companies won't also succeed, whether in the short or long term. It simply means there will be fewer independent winners outside the infrastructure layer, and incumbents will likely leverage AI more effectively because they already possess the necessary data and distribution channels. Competition will become increasingly fierce for AI SaaS startups, whether VC-funded or bootstrapped.

In such an environment, building a moat and having a differentiated product become crucial for survival. You don't want to mimic everyone else. You need unique insights that others lack, and you must act on them quickly. Once you secure your position, you should swiftly build your moat. In the age of AI, the moat lies in the data and the model. Many companies vie for partnerships, but these are only valuable long-term if they yield data (consider OpenAI's partnerships with media companies). Otherwise, a partnership is merely a short-term marketing tactic that won't withstand the test of time.

The AI game is fundamentally an infrastructure game. If you can't be NVIDIA, aim to be OpenAI. If you can't be OpenAI, strive to be a major tech incumbent. If you can't be a big tech incumbent, be a nimble startup with a differentiated product and a data or model moat (consider fine-tuning in some cases). Begin with something novel, then solidify your market presence.

Second, privacy issues will be significant. People are already wary of big tech companies, and I expect them to be even more cautious about an AI that has access to everything on their devices, although I could be wrong. After all, consider what people routinely share on social media without a second thought. However, if I am right—and for humanity's sake, I hope I am—then edge AI will be the next big thing.

The problem is, how do you monetize models that can run on edge devices? The current approach is an API that runs in the cloud. But if the model runs on the edge, how do you charge for it? You can't charge per API call. Once people have access to the model (i.e., they have the weights), they can share it or put it anywhere they want. Could it be part of the OS? If so, the only companies able to benefit would be device makers (like Apple, Samsung, etc.). In that scenario, AI would merely be a quality-of-life improvement, and I can't yet see how other developers would be able to benefit from it. Perhaps, if AI is deployed on the edge as part of the OS, applications could be built on top of these models, much as a developer can leverage a device's memory or camera today. Will we receive these productivity improvements 'for free,' or will they translate into higher prices for the devices we buy? We will see.
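To illustrate what that could look like, here is a purely hypothetical Swift sketch of an OS-provided, on-device model API that third-party apps could call, much as they request camera access today. None of these names (`OnDeviceTextModel`, `SystemModelProvider`, `requestModel`, `complete`) exist in any real SDK; they are assumptions used only to make the idea tangible.

```swift
import Foundation

// Purely hypothetical: none of these types exist in any real SDK. The idea
// is that the OS ships a small on-device model and exposes it to apps the
// way it exposes the camera, instead of every app bundling its own weights.
protocol OnDeviceTextModel {
    // Runs entirely on the device; no network call, so no data leaves it.
    func complete(prompt: String, maxTokens: Int) async throws -> String
}

// An app would request a handle from the system rather than shipping a model,
// analogous to requesting camera access today.
enum SystemModelProvider {
    static func requestModel() async throws -> OnDeviceTextModel {
        // The OS could gate this behind a permission prompt and usage limits.
        fatalError("Placeholder: would be supplied by the (hypothetical) OS framework")
    }
}

// Example third-party usage: summarize a note without it ever leaving the device.
func summarize(_ note: String) async throws -> String {
    let model = try await SystemModelProvider.requestModel()
    return try await model.complete(prompt: "Summarize: \(note)", maxTokens: 100)
}
```

In that setup the weights never leave the OS vendor's control, which sidesteps the per-API-call monetization problem: the model becomes a platform capability, paid for (if at all) through the price of the device.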

The tension is between data and privacy. The most successful startups will be the ones that manage this conundrum well. They will need to provide the best possible service while respecting user privacy. This is a difficult balance to strike, but it is the only way forward. The future is AI, and the future is now. It is up to us to make it a good one.