Here’s Why You Should Never Use AI to Generate Passwords.

I’ve repeated the same advice about personal online security many times: create a strong password for each account, never reuse passwords, and enable two-factor authentication whenever possible. Together, these three steps go a long way toward keeping you secure. But how you create those passwords matters just as much as making each one strong and unique. So please, don’t use artificial intelligence to generate passwords.

If you’re a fan of chatbots like ChatGPT, Claude, or Gemini, asking one to generate passwords for you might seem natural. They handle plenty of other tasks well, so it’s tempting to assume something so high-tech yet accessible could produce strong passwords for your accounts. But large language models (LLMs) aren’t good at everything, and generating strong passwords is one of their shortcomings.

Passwords generated by artificial intelligence are insecure.

As Malwarebytes Labs reports, researchers recently examined passwords generated by artificial intelligence and assessed their security. The bottom line? The results are disappointing. The researchers tested password generation in ChatGPT, Claude, and Gemini and found the passwords were “highly predictable” and “not entirely random.” Claude performed especially poorly: across 50 prompts, the bot produced only 23 unique passwords, and repeated one password 10 times. The Register reports that researchers have found similar flaws in AI systems such as GPT-5.2, Gemini 3 Flash, Gemini 3 Pro, and even Nano Banana Pro. (Gemini 3 Pro even warned that its passwords should not be used for “sensitive accounts.”)


At first glance, the passwords these chatbots produce look fine. They appear unbreakable because they mix numbers, letters, and special characters, and password strength meters may even rate them as strong. But AI password generation is inherently flawed, whether because of repeated results or recognizable patterns. The researchers measured the “entropy” of these passwords, a gauge of unpredictability, using both “character statistics” and “logarithmic probabilities.” If that sounds technical, the key takeaway is this: the two methods measured entropy of just 27 and 20 bits, respectively, while character statistics tests look for 98 bits and logarithmic probability estimates look for 120. You don’t need to be an expert in password entropy to see how large that gap is.
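To get an intuition for why repetition destroys entropy, here’s a toy sketch (not the researchers’ actual methodology) that estimates Shannon entropy from how often each distinct password appears in a batch of generated outputs. A generator that repeats itself scores far lower than one whose outputs are all distinct:

```python
import math
from collections import Counter

def shannon_entropy_bits(samples):
    """Estimate the entropy (in bits) of a generator from the observed
    frequency of each distinct output. Repeated outputs drive this down."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A bot that repeats passwords (like Claude in the study) yields low entropy:
repetitive = ["Xk9#mP2q"] * 10 + ["Tr4$vB8n"] * 5 + ["Qw7!zL3d"] * 5
# A generator producing 20 distinct outputs scores the maximum for 20 samples:
unique = [f"password-{i}" for i in range(20)]

print(shannon_entropy_bits(repetitive))  # 1.5 bits
print(shannon_entropy_bits(unique))      # log2(20) ≈ 4.32 bits
```

The passwords and counts here are invented for illustration; the point is only that predictability, not appearance, is what entropy measures.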

Hackers can exploit these limitations. Attackers can run the same prompts as the researchers (or, presumably, end users) and collect the results into a wordbank of likely passwords. If chatbots repeat passwords, it’s reasonable to assume that many people end up using the same generated passwords, or ones that follow the same patterns. Hackers can then simply try those passwords in their attacks, and if you used an LLM to generate yours, it may be on the list. It’s hard to quantify this risk precisely, but true security requires each password to be completely unique. Using a password that hackers may already have in a wordbank is an unnecessary risk.

It may seem surprising that a chatbot is bad at generating random passwords, but it makes sense given how LLMs work. An LLM is trained to predict the next token, or chunk of data, that should appear in a sequence. When asked for a password, the model picks the characters it considers most likely to come next, which is the opposite of randomness. And if its training data contains passwords, it may reproduce them in its response. The password it generates makes sense in its “mind” because that’s what it was trained on. It’s not built for randomness.

Creating a strong password is not difficult.

Traditional password generators, unlike LLMs, are designed for true randomness: they convert bits from a cryptographically secure random source into characters. The results aren’t based on training data and don’t follow patterns, so the likelihood that someone else has the same password as you (or that hackers have it stored in a wordbank) is vanishingly small. There are many options, and most password managers come with strong password generators built in.
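This is roughly what a password manager does under the hood. A minimal sketch in Python, using the standard library’s `secrets` module (which draws from the operating system’s cryptographically secure random source):

```python
import secrets
import string

def generate_password(length=20):
    """Draw each character independently and uniformly from a
    cryptographically secure source, so every output is equally likely."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

With a 94-character alphabet, each character contributes about 6.55 bits of entropy, so a 20-character password lands around 131 bits, comfortably above the thresholds the researchers were looking for.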


But you don’t even need any of these programs to create a strong password. Pick two or three uncommon words, mix in a few substitutions and symbols, and voilà: you have a random, unique, and strong password. For example, you could take the words “shall,” “murk,” and “tumble” and combine them into “sH@_llMurktUmbl_e.” (Don’t use this exact one, since it’s now published and no longer unique.)
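If you do want a program to pick the words for you, the same `secrets` module works for passphrases too. A hedged sketch, using a tiny made-up wordlist for illustration (a real one, like the EFF’s diceware list of 7,776 words, gives far more entropy per word):

```python
import secrets

# Hypothetical short wordlist for illustration only; use a large,
# published wordlist in practice.
WORDS = ["shall", "murk", "tumble", "pivot", "lantern", "quartz", "ember", "drift"]

def passphrase(n_words=3, sep="-"):
    """Pick words with secrets.choice so the selection is unpredictable
    (unlike an LLM, which favors the 'most likely' next word)."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())
```

Each word drawn from a 7,776-word list adds about 12.9 bits of entropy, so a few words plus some character substitutions gets you to a strong password quickly.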

Passkeys can be even more secure than passwords.

If you want to further strengthen your personal security, use passkeys whenever possible. Passkeys combine the convenience of passwords with the security of two-factor authentication: with a passkey, your device is your password. You log in with its built-in authentication (face scan, fingerprint, or PIN), so there’s no password to create at all. Without one of your trusted devices, hackers can’t get into your account.

Not all accounts support passkeys, so this isn’t a one-size-fits-all solution yet. You’ll likely still need passwords for some of your accounts, which means following proper security practices to keep everything in order. But replacing some passwords with passkeys improves both security and convenience, and spares you the security issues that come with asking ChatGPT to generate passwords for you.
