Is Apple Intelligence Just Making Up Words Now?

As powerful as LLMs are, they all share one common flaw: hallucinations. For reasons not fully understood, AI models have a tendency to make things up out of the blue. An answer might be accurate, with well-cited sources and relevant information; then suddenly, the AI makes a false claim or misinterprets a sarcastic forum comment as fact. (This is how Google's AI Overviews ended up recommending adding glue to pizza.) Some models hallucinate less than others, but none are immune. This is why, when using a chatbot, you'll see a warning on your screen that the AI may make mistakes.

Apple Intelligence, Apple's artificial intelligence platform, is no exception. When the company first introduced its AI, it pitched notification summaries as one of the perks. However, Apple quickly backed down when the feature began incorrectly summarizing news alerts. In one instance, for example, Apple Intelligence reduced a BBC headline to a false report that UnitedHealthcare shooting suspect Luigi Mangione had shot himself. The company later reinstated the feature with some additional guardrails, such as displaying news summaries in italics.

Apple Intelligence may be making up new words.

On Thursday, I came across this post on the r/iOS subreddit that adds an interesting twist to the discussion of AI hallucinations. The post asks, "Anyone else experiencing fake words in their AI alerts?" and includes a screenshot of a notification summary from the Acme Weather app. The first sentence reads, "Inbixtent light rain for an hour." Ah, rain. At least it's only for an hour. Wait, "inbixtent"?


While the word "inbixtent" sounds plausible, it's entirely made up. The author of the post didn't share the full contents of the original notification, so we can't know exactly what Apple Intelligence was summarizing. What we do know is that the author has seen the word "inbixtent" three times, and they're not alone. While some commenters poke fun at the author's choice of weather app, others confirm that they, too, have seen Apple Intelligence invent fake words in their notification summaries. One commenter said they saw "flemulating" in one summary and "tranqued" in a Mail summary; another reported seeing "stricively" twice in place of "strictly."


I haven't found any other examples of this phenomenon online, and I don't use notification summaries on my iPhone, so I haven't run into the issue myself. I can't say for sure how widespread it is, or whether it's limited to a specific iOS version, a specific device, or certain apps. One commenter does have a theory, though: they believe that when Apple Intelligence's built-in AI model can't shorten the original phrase on its own, it invents a word instead. As they put it, the AI "uses a word that evokes a certain mood," like "inbixtent." They say this happens most often with summaries from the Weather app.

Does Apple Intelligence make up words in your notification summaries?

Again, it's impossible to say for sure whether this affects a large number of Apple users or just a small subset. The fact that I've only found one post about it, with two commenters sharing similar experiences, leads me to believe it's the latter, but I'd be happy to hear from anyone who has run into the same thing. If you use Apple Intelligence notification summaries, let me know if you've spotted any of these made-up words. I may need to turn the feature on myself to keep an eye out.
