Opera Is the First Browser With Built-In Support for Local AI LLMs

Large language models (or LLMs) trained on massive amounts of text are the super-smart engines behind generative AI chatbots like ChatGPT and Google’s Gemini, and Opera just became the first web browser to provide local LLM integration.

You may have already read about running LLMs locally: the AI models are stored on your computer, so nothing needs to be sent to the cloud. It takes a reasonably capable hardware setup to run, but it's better from a privacy standpoint – no one can eavesdrop on your prompts or use your conversations to train the AI.

We’ve already seen Opera introduce various AI features. This now extends to local LLMs, with over 150 models to choose from.

Local LLMs in Opera

Before you begin exploring local LLMs in Opera, there are a few considerations to keep in mind. Firstly, the feature is still experimental, so you may run into a few bugs. Secondly, you’ll need some free storage space – some LLMs are less than 2GB in size, while others on the list are over 40GB.

A larger LLM generally gives you better answers, but it also takes longer to load and run. To some extent, a model’s performance will depend on the hardware you’re running it on, so if you’re using an older machine, you may have to wait a few minutes to get something back (and again, this is still a beta test).

Opera already has a chatbot, Aria, and local LLMs are now available alongside it.

These local LLMs are a mix of models released by well-known companies (Google, Meta, Intel, Microsoft) and models created by independent researchers and developers. They are free to install and use, partly because the LLM runs on your own computer, so there are no running costs for the teams that developed them.

Note that some of these models are designed for specific tasks, such as programming, and may not give you the general knowledge you'd expect from ChatGPT, Copilot, or Gemini. Each model comes with a description; read it before installing so you know what you're getting.

Trying it out for yourself

At the time of writing, this feature is only available in early test builds of Opera, ahead of a wider rollout. If you want to give it a try, you’ll need to download and set up the developer version of Opera One. Once that’s done, open the sidebar on the left by clicking the Aria button (the small A symbol) and follow the instructions to set up the built-in AI bot (you’ll need to create or sign in to a free Opera account).

When Aria is ready to go, you should see a Select Local AI Model box at the top. Click on it, then select Go to Settings, and you’ll see a list of available LLMs along with some information about them. Select any LLM to see a list of versions (with their file sizes) and download buttons that let you install them locally.

There are already over 150 LLMs to choose from.

If you want, you can set up multiple LLMs in Opera – just pick the one you want to use in each chat via the drop-down menu at the top of the Aria window. If you don’t select a local LLM, the default cloud-based Aria chatbot is used instead. You can always start a new chat by clicking the big + (plus) button in the top right corner of the chat window.

Use these local LLMs as you would anything running in the cloud: have them generate text on any topic and in any style, ask questions about life, the universe, and everything else, and get advice on whatever you like. Since these models don’t have internet access, though, they won’t be able to look up anything new or current.
