
Everything we know about Apple Intelligence – Computerworld



There are many more AI tools, from recognition of addresses and dates in emails for import into Calendar, to VoiceOver, Door Detection, and even the Measure app on iPhones. What’s changed is that while Apple’s deliberate focus had been on machine-learning applications, the emergence of genAI unleashed a new era in which the contextual understanding available to LLMs uncovered a variety of new possibilities.

The omnipresence of various kinds of AI across the company’s systems shows the extent to which the dreams of Stanford researchers in the 1960s are becoming real today.

An alternative history of Apple Intelligence

Apple Intelligence might appear to have been on a slow train coming, but the company has, in fact, been working with AI for decades.

What exactly is AI?

AI is a set of technologies that enable computers and machines to simulate human intelligence and problem-solving capabilities. The idea is that the hardware becomes smart enough to pick up new tricks based on its experience, and carries the tools needed to engage in such learning.

To trace the trail of modern AI, think back to 1963, when computer scientist and LISP inventor John McCarthy launched the Stanford Artificial Intelligence Laboratory (SAIL). His teams engaged in important research in robotics, machine-vision intelligence, and more.

SAIL was one of three important entities that helped define modern computing. Apple enthusiasts will likely have heard of the other two: Xerox’s Palo Alto Research Center (PARC), which developed the Alto that inspired Steve Jobs and the Macintosh, and Douglas Engelbart’s Augmentation Research Center. The latter is where the mouse concept was defined and subsequently licensed to Apple. 

Important early Apple luminaries who came from SAIL included Alan Kay and Macintosh user interface developer Larry Tesler — and some SAIL alumni still work at the company.

“Apple has been a leader in AI research and development for decades,” pioneering computer scientist and author Jerry Kaplan told me. “Siri and face recognition are just two of many examples of how they have put this investment to work.”

Back to the Newton…

Existing Apple Intelligence solutions include things we probably take for granted, going back to the handwriting recognition and natural language support in the Newton of the early 1990s. That device leaned into research emanating from SAIL — Tesler led the team, after all. Apple’s early digital personal assistant first appeared in a 1987 concept video and was called Knowledge Navigator.

Sadly, the technology couldn’t support the kind of human-like interaction we expect from ChatGPT, and (eventually) Apple Intelligence. The world needed better and faster hardware, reliable internet infrastructure, and a vast mountain of research exploring AI algorithms, none of which existed at that time.

But by 2010, the company’s iPhone was ascendant, Macs had abandoned the PowerPC architecture to embrace Intel, and the iPad (which cannibalized the netbook market) had been released. Apple had become a mobile devices company. The time was right to deliver that Knowledge Navigator. 

When Apple bought Siri

In April 2010, Apple acquired Siri for $200 million. Siri itself is a spinoff from SAIL, and, just like the internet, the research behind it emanated from a US Defense Advanced Research Projects Agency (DARPA) project. The speech technology came from Nuance; Apple bought Siri just before the assistant would have been made available on Android and BlackBerry devices. Apple shelved those plans and put the intelligent assistant inside the iPhone 4S (dubbed by many the “iPhone for Steve,” given Steve Jobs’ death around the time it was released).

Highly regarded at first, Siri didn’t stand the test of time. AI research diverged, with neural networks, machine intelligence, and other forms of AI all following increasingly different paths. (Apple’s reluctance to embrace cloud-based services — due to concerns about user privacy and security — arguably held innovation back.)

Apple shifted Siri to a neural network-based AI system in 2014; it used on-device machine learning models such as deep neural networks (DNN), n-grams and other techniques, giving Apple’s automated assistant a bit more contextual intelligence. Apple Vice President Eddy Cue called the resulting improvement in accuracy “so significant that you do the test again to make sure that somebody didn’t drop a decimal place.”

But times changed fast.

Did Apple miss a trick?

In 2017, Google researchers published a landmark research paper, “Attention Is All You Need.” It proposed a new deep-learning architecture, the transformer, that became the foundation for the development of genAI. (One of the paper’s eight authors, Łukasz Kaiser, now works at OpenAI.)

One oversimplified way to understand the architecture is this: it helps machines identify and use complex connections between data, which makes their output far better and more contextually relevant. This is what makes genAI responses coherent and “human-like,” and it’s what makes the new breed of smart machines smart.
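For readers who want a peek under the hood, the core operation of that architecture is scaled dot-product attention: every token’s query is compared against every other token’s key, and the resulting weights mix the values together. The NumPy sketch below is purely illustrative — a textbook version of the mechanism, not Apple’s or Google’s actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """softmax(QK^T / sqrt(d_k)) V — each query attends to every key,
    producing a context-weighted blend of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
    weights = softmax(scores)         # each row sums to 1
    return weights @ V, weights

# Toy example: 3 tokens, each with a 4-dimensional embedding.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of `w` is a probability distribution over the input tokens, which is why the output for each token is a contextual mix of all the others — the property that gives transformers their contextual understanding.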

The concept has accelerated AI research. “I’ve never seen AI move so fast as it has in the last couple of years,” Tom Gruber, one of Siri’s co-founders, said at the Project Voice conference in 2023.

Yet when ChatGPT arrived — kicking off the current genAI gold rush — Apple seemingly had no response. 

The (put it to) work ethic

Apple CEO Tim Cook likes to stress that AI is already in wide use across the company’s products. “It’s literally everywhere on our products and of course we’re also researching generative AI as well, so we have a lot going on,” he said.

He’s not wrong. You don’t need to scratch deeply to identify multiple interactions in which Apple products simulate human intelligence. Think about crash detection, predictive text, caller ID based on a number not in your contact book but in an email, or even shortcuts to frequently opened apps on your iPhone. All of these machine learning tools are also a form of AI. 

Apple’s Core ML frameworks give developers powerful machine learning tools they can use to power up their own products. Those frameworks build on the insights Adobe co-founder John Warnock had when he figured out how to automate the rendering of scenes, and we will see those technologies widely used in the future of visionOS.

All of this is AI, albeit focused (“narrow”) uses of it. It’s more machine intelligence than sentient machines. But in each AI application it delivers, Apple creates useful tools that don’t undermine user privacy or security.

The secrecy thing

Part of the problem for Apple is that so little is known about its work. That’s deliberate. “In contrast to many other companies, most notably Google, Apple tends not to encourage their researchers to publish potentially valuable proprietary work publicly,” Kaplan said.

But AI researchers like to work with others, and Apple’s need for secrecy acts as a disincentive for those in AI research. “I think the main impact is that it reduces their attractiveness as an employer for AI researchers,” Kaplan said. “What top performer wants to work at a job where they can’t publicize their work and enhance their professional reputation?” 

It also means the AI experts Apple does recruit subsequently leave for more collaborative freedom. For example, Apple acquired search technology firm Laserlike in 2018, and within four years, all three of that company’s founders had quit. And Apple’s director of machine learning, Ian Goodfellow (another SAIL alumnus), left the company in 2022. I imagine the staff churn makes life tough for former Google Chief of Search and AI John Giannandrea, who is now Apple’s senior vice president of machine learning and AI strategy.

That cultural difference between Apple’s traditional approach and the preference for open collaboration and research in the AI dev community might have caused other problems. The Wall Street Journal reported that at some point Giannandrea and software engineering chief Craig Federighi were competing for resources, to the detriment of the AI team.

Despite setbacks, the company has now assembled a large group of highly regarded AI pros, including Samy Bengio, who leads company research in deep learning. Apple has also loosened up a great deal, publishing research papers and open source AI software and machine learning models to foster collaboration across the industry.

What next?

History is always in the rear view mirror, but if you squint just a little bit, it can also show you tomorrow. Speaking at the Project Voice conference in 2023, Siri co-founder Adam Cheyer said: “ChatGPT style AI…conversational systems…will become part of the fabric of our lives and over the next 10 years we will optimize it and become accustomed to it. Then a new invention will emerge and that will become AI.”

At least one report indicates Apple sees this evolution of intelligent machinery as foundational to innovation. While that means more tools, and more advances in user interfaces, each of those steps leads inevitably toward AI-savvy products such as AR glasses, robotics, health tech — even brain implants.

For Apple users, the next step — Apple Intelligence — arrives this fall.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.
