Apple may appear behind the curve when it comes to generative AI and large language models (LLMs), but as time moves on it's becoming clear that not every model is equal, and not every deployment works out well. These models seem far more effective when confined to specific domains, such as image manipulation in Photoshop or tech support resources in Jamf Pro. So, ignoring Siri, how could Apple make a big difference with tech like this?
Developers, developers, developers
Developers generate huge quantities of code, but what could they do if Xcode became smart enough to handle basic code-writing tasks and monitor existing code for errors? Such a model could be trained on Apple's own internal codebases and learn from conversations on the company's developer forums.
This would be an incredibly powerful tool, one that could make developers' lives a lot easier by handling the mundane tasks so they can focus on the more challenging ones.
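To make that concrete, here's the sort of routine boilerplate such an assistant might write from a one-line prompt. This is a sketch only; the prompt, endpoint, and types are invented for illustration, and nothing like this has been announced:

```swift
import Foundation

// Prompt a hypothetical Xcode assistant might receive:
// "Fetch a list of users from our API and decode the JSON."
// Everything below is the kind of routine boilerplate such a
// model could write; the endpoint and types are placeholders.

struct User: Codable {
    let id: Int
    let name: String
}

func fetchUsers() async throws -> [User] {
    let url = URL(string: "https://api.example.com/users")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode([User].self, from: data)
}
```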
Podcasters, musicians, video editors
Adobe announced huge improvements to Firefly at Adobe MAX this week, and creatives are already using generative AI to build and optimize design assets.
Apple could follow suit. Think smart volume balancing when making podcasts and music in GarageBand or Logic Pro, or tools for color consistency and the removal or addition of objects in iMovie and Final Cut Pro.
Think how LLMs could help guide users to do their work better while making it super-easy to access relevant tech support resources.
Applied to mobile devices, think how highly complex multi-step operations could become as easy as a combination of spoken instructions and Shortcuts when editing audio on an iPad or iPhone. Complex, multi-step tasks like these are meat and drink to genAI, which could empower mobile creativity with just a few thumb swipes.
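How might the plumbing for that look? Apple already ships the App Intents framework, which exposes in-app actions to Siri and Shortcuts. Here's a minimal sketch of one audio-editing step wrapped as an intent; the intent and its trimSilence() helper are hypothetical, though the framework calls are real:

```swift
import AppIntents

// A sketch of how one step of a multi-step audio edit could be
// exposed to Siri and Shortcuts. A genAI model could chain many
// such steps together from a single spoken request.
struct TrimSilenceIntent: AppIntent {
    static var title: LocalizedStringResource = "Trim Silence from Recording"

    @Parameter(title: "Recording Name")
    var recordingName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        try await trimSilence(in: recordingName)  // hypothetical editing step
        return .result(dialog: "Trimmed silence from \(recordingName).")
    }
}

// Placeholder for the actual audio-processing work.
func trimSilence(in name: String) async throws { /* ... */ }
```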
Fitness double plus
Apple CEO Tim Cook has often said his company's biggest contribution to humanity will one day be seen to have been in health. So, can generative AI help users achieve better health outcomes? I think it's possible, and one area in which these technologies could boost results is Fitness+.
If you're not familiar with it, Fitness+ is an Apple subscription service that gives customers access to a rich library of fitness routines. That library can be hard to navigate, and beginners can find it challenging to pick up the exercise routines.
LLMs could draw on that rich library to deliver powerful, personalized health guidance.
For example, you might request: "Write me a daily 20-minute workout suitable for someone of my age and fitness level that will improve my respiratory health and upper body strength." The LLM would (privately and securely) search the Activity data on your device, its own library of fitness knowledge, and the Fitness+ video catalog to splice together a highly personalized workout you could begin right away. Sound far-fetched? People are already thinking about it. And don't get me started on genAI being used to pre-identify signs of various forms of illness.
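The on-device data-gathering half of that idea is already feasible with HealthKit. Here's a rough sketch that reads recent workouts before handing them off; the local model that would assemble the routine is the invented part, the HealthKit calls are real:

```swift
import HealthKit

// Read the user's recent workout samples -- the raw material a
// (hypothetical) on-device model would draw on when assembling
// a personalized routine.
let store = HKHealthStore()
let workoutType = HKObjectType.workoutType()

func recentWorkouts() async throws -> [HKWorkout] {
    try await store.requestAuthorization(toShare: [], read: [workoutType])
    return try await withCheckedThrowingContinuation { continuation in
        let query = HKSampleQuery(sampleType: workoutType,
                                  predicate: nil,
                                  limit: 20,
                                  sortDescriptors: nil) { _, samples, error in
            if let error { continuation.resume(throwing: error) }
            else { continuation.resume(returning: samples as? [HKWorkout] ?? []) }
        }
        store.execute(query)
    }
}
```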
Turning on Pages
Pages is a great environment for document production. Like many such applications, it ships with a small number of template documents. But compare it to the rich selection of templates and the generative AI-augmented talents of Adobe Express, and it's pretty clear the application could do more with a little LLM inside.
For example, you might ask it to "Take the assets in my 'Client Success 24' folder and turn them into an A4 poster design featuring a nest of six images and the logo, with space for all the supplied text, including headlines and subheadings," and boom! You've got a design to tweak. Or a Keynote presentation to share.
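The first step such a request implies, gathering the assets from that folder, is trivial today; it's the layout generation that needs the model. A sketch, with the folder path and the PosterGenerator call invented purely for illustration:

```swift
import Foundation

// Collect the image assets a (hypothetical) layout model would
// arrange. The folder path is a stand-in for the user's own.
let folder = URL(fileURLWithPath: "/Users/me/Documents/Client Success 24")
let assets = try FileManager.default
    .contentsOfDirectory(at: folder, includingPropertiesForKeys: nil)
    .filter { ["png", "jpg", "heic"].contains($0.pathExtension.lowercased()) }

// Hypothetical call into an on-device generative layout model:
// let poster = try await PosterGenerator.layout(assets: assets, format: .a4)
```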
Pie in the sky
One big limitation of generative AI is that it consumes vast resources to run. Take water: Google consumed 5.6 billion gallons of water last year, and Microsoft's water consumption spiked 34% this year in support of its AI tools.
The consequences of resource consumption at this level are hard to accept against a background of environmental collapse. It makes sense, then, to build smaller models confined to narrower domains; these are less resource-intensive and can, we hope, run on the device itself rather than demanding vast amounts of cloud processing power.
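Developers can already nudge inference on-device with Core ML. A minimal sketch, assuming a small bundled model; the model name is a placeholder, but the configuration API is real:

```swift
import CoreML

// Prefer on-device inference: keep the work on the CPU and
// Neural Engine rather than sending it to the cloud.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine

// "SmallDomainModel" stands in for any compiled .mlmodelc an
// app might bundle.
let modelURL = Bundle.main.url(forResource: "SmallDomainModel",
                               withExtension: "mlmodelc")!
let model = try MLModel(contentsOf: modelURL, configuration: config)
```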
With that in mind, I think it's reasonable to expect LLMs to be rolled into numerous apps in the future as small models built to make life in those apps better.
And I think this is the kind of approach Apple will take to deploying the technologies we know it is already building. Siri can wait. The biggest improvements are incremental benefits that work together to deliver profound change.
Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.