Apple Just Released Details About Its New AI Model MM1

We’ve already learned that Apple may be using Google’s Gemini to implement some new AI features in iOS 18, but that hasn’t stopped the tech giant from working on its own AI models. In a new research paper, Apple has revealed more details about how it is approaching the development of its new MM1 artificial intelligence model.

Apple says it plans to use a diverse dataset that includes interleaved image-text documents, image-caption pairs, and text-only data to train and develop MM1. This mix, Apple says, should allow MM1 to set a new standard in generating image captions, answering visual questions, and responding to natural-language prompts, with the apparent goal of reaching the highest level of accuracy possible.
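For illustration only, here is a minimal sketch of what sampling from such a mixed pre-training dataset could look like. The three source categories come from Apple's description above, but the weights, function names, and overall structure are hypothetical placeholders, not Apple's published recipe.

```python
import random

# Hypothetical sketch of a mixed pre-training data recipe.
# The three categories match the sources Apple describes; the
# weights below are illustrative only, not Apple's actual configuration.
MIXTURE_WEIGHTS = {
    "interleaved_image_text": 0.45,  # documents with images embedded in running text
    "image_caption_pairs": 0.45,     # (image, caption) pairs
    "text_only": 0.10,               # plain text, to preserve language ability
}

def sample_source(weights: dict[str, float]) -> str:
    """Pick the data source for the next training batch, proportional to its weight."""
    sources = list(weights)
    return random.choices(sources, weights=[weights[s] for s in sources], k=1)[0]

if __name__ == "__main__":
    # Over many draws, the counts roughly follow the configured mixture.
    counts = {name: 0 for name in MIXTURE_WEIGHTS}
    for _ in range(10_000):
        counts[sample_source(MIXTURE_WEIGHTS)] += 1
    print(counts)
```

The idea such a recipe captures is that caption pairs teach image-text grounding, interleaved documents teach multi-image, in-context behavior, and a slice of text-only data keeps the underlying language model strong.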

This research approach lets Apple experiment with multiple types of training data as well as model architecture choices, which should give the model a stronger ability to understand and generate language based on both linguistic and visual cues.

Apple clearly hopes that by combining the training methods used for other AI models with its own, it can improve pre-training results and reach competitive performance, helping it catch up with companies that are already deep into AI development, such as Google and OpenAI.

Apple has never been a stranger to forging its own path. The company continues to find new ways to solve the same problems as other companies, both in its hardware and in its software. Whether you think this is a good thing is up to you to decide, but the point is that Apple has always approached its attempts to build reliable, competitive AI differently, and based on the information presented in this paper, the company has found a unique way to do it.

Of course, this paper is just our first real look at what Apple is doing to expand its artificial intelligence capabilities. It will be interesting to see where things go next.
