Apple Is Finally Bringing Artificial Intelligence to iPhone, iPad, and Mac

Stay tuned for Lifehacker’s ongoing coverage of WWDC 2024.

Apple is finally ready for its artificial intelligence moment, after years of speculation and four generations of devices with largely untapped neural engines inside. At its WWDC 2024 conference, the company officially announced Apple Intelligence (yes, the “AI” abbreviation is intentional), which is scheduled to launch in beta for iPhone, iPad, and Mac this fall.

It will be interesting to see how Apple handles such nascent technology. As with the Vision Pro, the company generally prefers to sit out trends until it can release its own refined, streamlined take. Meanwhile, AI still gets things wrong a lot, as Google discovered with its AI Overviews feature earlier this month.

Still, Apple is pushing full speed ahead with artificial intelligence in Messages, Mail, notifications, writing, images, and, most interestingly, Siri. The company promises it can uphold its reputation by paying more attention to privacy and on-device data handling than its competitors.

Details on exactly how Apple’s AI works are sparse, but overall the company promises to do more than Google, Rabbit, or virtually any of its competitors. Let’s dig in.

AI in Siri

Perhaps the most sweeping Apple Intelligence change is to Siri, which has been completely revamped, new logo included. This moment has been a long time coming: since Siri introduced the world to digital assistants in 2011, it has been overtaken by competitors like Google Assistant and Alexa in many ways. Now Apple is doubling down, rebuilding Siri around artificial intelligence just as Google prepares to replace Google Assistant with Gemini. The result? A potentially much more natural AI assistant than anything on Android.

Right now on Android, replacing Assistant with Gemini essentially gets you a shortcut to a web app. Unlike its supposedly dumber predecessor, Gemini can’t set reminders, adjust phone settings, or open apps, meaning the promise of more functionality actually comes with less.

This shouldn’t be the case with the new Siri, which will retain all of its existing features while gaining a new contextual awareness. Now when you open Siri, it will look at what’s on your screen and be able to offer help based on it. For example, you might be looking at the Wikipedia page for Mount Rushmore and ask, “What’s the weather like here?” to have Siri tell you the forecast for your trip.

Contextual awareness also isn’t limited to what’s in front of you in the moment. Apple says Siri will be able to search your libraries and apps to perform “hundreds of new actions,” even in third-party apps. Let’s say you save this article to your reading list right now. Once Apple Intelligence arrives on your iPhone, you can ask Siri to “pull the Lifehacker article about WWDC from my reading list” to access it again.

Or, more personally, let’s say you’re texting a friend about a podcast. With the new Siri, you can simply ask, “Play the podcast Dave recommended this weekend,” and Siri will understand what you’re talking about and play it.

The implications here are significant, both for utility and privacy. In general, the promised contextual features include:

  • Contextual answers to questions

  • Contextual search for photos and videos (for example, you can ask Siri to show all your photos of a red shirt)

  • The ability to perform contextual actions for you, such as adding an on-screen address to a contact card or applying automatic enhancements to photos

But Siri also aims to bring the Apple Genius into your home, as it comes preloaded with tutorials on how to use your iPhone, iPad, or Mac. Just ask Siri “how do I turn on dark mode” or “how do I schedule an email,” and it will pull from its tutorials and surface the answer in an on-screen notification rather than sending you to a help page. (We’ll still be here for all your tech advice.)

One of Siri’s more traditional prompt-based features is the ability to create custom video montages using artificial intelligence. Right now, Apple’s memory collages are simply generated automatically in the background, algorithmically stitching together photos the OS thinks are related and setting them to background music the software thinks fits. Soon, you’ll be able to give Siri specific instructions, citing contacts, an activity or location, or a style of music. Siri will then contextually generate a fitting montage, with music pulled from Apple Music.

There are also the typical AI chatbot features, such as the ability to ask questions. Oddly, Apple hasn’t specified whether Siri will be able to answer questions directly (at least ones not related to Apple devices), but the company has a backup: through Siri, you’ll be able to put your questions to ChatGPT.

Because Apple’s privacy policies differ from ChatGPT’s (more on that later), Siri will ask your permission to use ChatGPT each time you invoke it. The assistant will then submit your question for you, no account required. Like DuckDuckGo, Apple will also hide your IP address when querying ChatGPT on your behalf, and the company promises that OpenAI won’t log your requests. ChatGPT subscribers can also link their accounts to Siri to access paid features, although Apple warns that free users will face ChatGPT’s usual usage limits.

Siri’s AI features will be usable on iPhone, iPad, and Mac, making for a more natural approach to an AI assistant than Google’s decision to start over with Gemini. If it still seems limited compared to what other LLM chatbots can do, that’s because Apple Intelligence is much more than Siri.

Apple Intelligence takes aim at the Google Pixel

Much of Apple’s AI presentation this year seemed aimed squarely at the Pixel, specifically its AI features. Until now, the Pixel’s transcription and Magic Editor tools have been great Google exclusives, but Apple Intelligence finally lets Apple fight its biggest competitor in the same arena.

First, iOS, iPadOS, and macOS devices are getting their own Magic Eraser and live transcription capabilities. In the Photos app, users can tap the new Clean Up icon, then circle or tap objects they want removed from an image. Photos will delete the offending object and use generative AI to fill in where it used to be. It’s not quite on the level of Magic Editor, which lets you move selected objects around, but it puts Google firmly on notice.

Similarly, the Notes app will be able to record, transcribe, and summarize audio, which is a boon for journalists like me. Colleagues of mine have chosen the Pixel solely for its transcription features, and now I’ll finally be able to keep up with my iPhone. Even better: Notes will also be able to record phone calls in real time.

That, of course, is a legal minefield, so actual availability will likely differ from state to state and country to country, since recording laws vary depending on where you are. For now, Apple says the Phone app will warn you when a recording begins.

But beyond features that mirror Google’s flagship, Apple is also building its own unique advantages. Here, the company is simplifying how you manage your notifications and mail.

The standout features here are Priority Messages and Priority Notifications. With Priority Messages, Apple’s AI attempts to find the “most urgent emails” and place them at the top of your inbox. Priority Notifications take a similar approach, but with lock-screen notifications from texts and apps.

In both cases, you’ll be able to ask the AI to write a brief summary of an email or notification rather than previewing its contents, helping you triage at a glance. In Mail, you’ll actually be able to get summaries across your entire inbox.

Apple positions this as a great way to stay on top of important information, like boarding passes. Additionally, in Mail you’ll be able to use Smart Reply to have the AI quickly draft a response based on the context of your email. You’ll also be able to get a summary of an entire conversation, not just the first email.

With these updates, Apple is finally taking on Google’s software, hoping to dethrone the Pixel as the “smartest smartphone.” But these innovations aren’t without risk. Take the Reduce Interruptions mode, which will use AI to show “only the notifications that might need immediate attention, like a text about an early pickup from daycare.” Relying on Apple Intelligence to decide what to show you puts a lot of faith in an untested model, although it’s a promising sign that Apple is confident enough to ship such a feature at launch.

Apple can help you write and generate images

Speaking of risk, it’s time to talk about AI’s bread and butter: image and text generation.

Even as Google is busy advising people to use “squat plugs,” Apple apparently feels confident enough in its models to trust them to help you get creative. Enter Rewrite, Image Playground, and Genmoji. In compatible first- and even third-party apps, these will let you generate content using both Apple’s own models and, in some cases, ChatGPT.

Rewrite is the most familiar of these. Here, Apple promises system-level AI help with text “nearly everywhere” you write, including Notes, Safari, Pages, and more via a developer SDK. From a style menu invoked by right-clicking selected text, users will be able to give Apple Intelligence a custom prompt or pick from several preset tones, and the AI will rewrite the text accordingly.

Don’t want AI changing your text? It will also be able to proofread it to point out errors, summarize it (useful if you’re reading rather than writing), or reformat it into a table or list.

This is similar to Chrome’s new ability to rewrite text on right-click, but with far more options, and it’s supposedly available in many more apps. It’s also more accessible than Copilot, which lives in a separate menu siloed off from the rest of Windows.

You’ll also be able to generate text from scratch, although Apple will use ChatGPT for that.

Image Playground and Genmoji are where things get even more novel. Instead of going to a dedicated website like DALL-E or Gemini, Apple devices will now generate images directly in the operating system.

Image Playground, available as a standalone app, built into Messages, or integrated into other compatible apps via an SDK, looks like a typical AI image generator, but it draws on the same kind of contextual analysis as Siri. For example, you could give it a prompt, ask Image Playground to include someone from your contacts, and get back an image featuring a cartoon likeness of that person.

Again, Apple is placing a lot of trust in its AI here. Let’s say I send someone an image created with Image Playground, and the way I’m depicted isn’t necessarily flattering: yikes.

Still, it seems there may be limits to this experience. Apple’s marketing language is a little vague about what the restrictions are, but even though a prompt box is visible in the example, Apple keeps telling us we’ll have to “choose from a range of concepts,” including “themes, costumes, accessories, and places.” It’s possible that Apple won’t allow users to create controversial images, an issue Bing and Meta have struggled with in the past.

But let’s say you don’t need a full, detailed image anyway. Apple is also introducing Genmoji, which are similar to Meta’s AI stickers. Here, you can give Apple’s AI a prompt and get back custom emoji, designed in a style similar to the official Unicode options. Again, these can include cartoon likenesses of people from your contacts, and, like emoji, they can be added to messages or shared as stickers. And again, we don’t know the limits of what Apple will allow here.

We’ll have to wait until Apple’s AI image tools actually arrive to judge how well they compete with existing options, but perhaps the most interesting part here is the ability to generate images natively within existing apps. Although Apple promises this will extend beyond Notes, that’s where the company showed its one example: it selected a sketch in Notes and generated a full-fledged piece of artwork from it. In another case, the AI generated a completely new image based on the surrounding text in Notes.

That convenience, especially given that AI is still scattered across dozens of sites and services, will likely be a big advantage here.

Apple promises private, on-device artificial intelligence

Apple didn’t say much about the training material for its AI, but the company leaned hard on privacy.

Recently, moves by Meta and Adobe have raised concerns about AI’s access to user data. Apple wants to head off such concerns about its own AI from the start.

According to Apple, any data its AI accesses is never stored and is used only to serve your requests. Additionally, Apple is making its server code available for inspection by “independent experts.” At the same time, the company wants to minimize how often it needs to reach the cloud at all. Enter the A17 Pro chip (found in the iPhone 15 Pro and Pro Max) and the M-series chips (used in iPads and Macs since 2020). Devices with these chips have neural engines that, Apple says, will let them run “many” requests on-device, so your information never leaves your phone.

How exactly the split between on-device and cloud tasks will be handled is still an open question, but Apple says Apple Intelligence itself will determine which requests your device is powerful enough to handle on its own and which need server help before deciding where to send them.

While this is still just a promise, it would be a huge win for Apple, since competing features like Magic Editor and Gemini still require a constant internet connection.

When can I try Apple Intelligence?

Apple hasn’t given specific launch dates for Apple Intelligence, instead offering the audience two windows to expect.

First, the company said Apple Intelligence will be “available to try in U.S. English this summer,” although given what it said next, that will likely be a limited demo. That’s because the full Apple Intelligence beta is scheduled for this fall, meaning it will most likely arrive after the full releases of iOS 18, iPadOS 18, and macOS 15 via an update.

Perhaps the biggest hurdle Apple has to clear with its AI, beyond delivering on its security promises and making sure content generation doesn’t cause offense, is availability. While the promise of mostly on-device AI is great for privacy, and even for situations with limited internet access, it comes with a caveat: Apple’s AI announcement only mentions it coming to the iPhone 15 Pro, iPhone 15 Pro Max, and iPads or Macs with an M1 chip or later. Similarly, to start, both Siri and the device language must be set to English (U.S.).
