Please Don’t Trust Artificial Intelligence to Recognize Mushrooms

Eating wild mushrooms is a dangerous hobby. If you’re an experienced forager, you’ll know what’s growing in your area, where to find it, and how to be absolutely sure you’ve found an edible species and not a poisonous one. If not, you may end up eating mushrooms with names like “death cap” and “destroying angel.”

Becoming an expert at identifying mushrooms takes years of experience and a keen eye for detail. There are no simple rules for telling good from bad; poisonous species often look very similar to popular, tasty edible ones. What you should know is that this kind of confusion is possible, and that as a beginner you are the person most likely to make the mistake. Join your local mycological (fungi-studying) society and you can start learning from the experts.

You might be thinking there’s a shortcut: can’t you just download an app? If iNaturalist (for example) can tell you that the tree with white flowers in your neighbor’s yard is a dogwood, it can tell you what kind of mushroom you found in the woods, right? It can’t.

Artificial Intelligence Mushroom Apps Could Literally Kill You

In an in-depth report for Public Citizen, wild mushroom enthusiast Rick Claypool lays out all the ways AI-powered identification apps and AI-generated field guides can kill you or make you sick if you trust them.

He cites the example of Google Lens, which identified a mushroom nicknamed “the vomiter” as a different mushroom considered a “choice edible.” (The person who posted the photo got very sick but survived.) In an even scarier 2022 incident, an Ohio man used an app to confirm that some mushrooms he found were edible (the app said yes) and ended up in the hospital fighting for his life. (An experimental treatment may have helped him survive; 40% of people who eat the toxic Amanita mushrooms he is believed to have eaten end up either dying or needing a liver transplant.)

As Claypool points out, real, live mushroom experts don’t look at a picture and say, “Yeah, that’s edible.” They’ll ask to see details of the top and bottom of the mushroom, they’ll want to know exactly where and when it was found, and they may recommend further steps for identification, such as making a spore print. They’ll also be able to tell you how confident they are in their conclusion. Claypool notes, “An application that responds to an identification attempt with a vague or evasive response may be perceived as faulty rather than cautious.”

He also notes that identifying the species is not the only step in determining whether a mushroom is safe to eat: “The first mushrooms that novice foragers find are often past the freshness required for safe consumption as food. Foragers compete with mold, insects, slugs, and everything else in the wild that feeds on fungi. If you don’t know the signs, whether a mushroom is infested with insects or maggots may not be obvious until it is cut open.”

AI Is Not “Intelligent” and Never Has Been

The term “artificial intelligence” is a buzzword, a nickname, a fantasy. It’s not a description of what these applications are or of what they do. The term was coined by scientists dreaming about what might someday be possible, and it was then popularized by science fiction. The creators of tools like ChatGPT adopted it because it sounds exciting and futuristic.

Never forget that much of the AI hype is just marketing by big tech companies hoping to get money from other tech companies before the bubble bursts. It will all die down once people realize that AI doesn’t actually do anything useful for those who care about results, but it will take time for the tech bros to figure that out.

Claypool’s article lays out several ways AI can supposedly help with identifying mushrooms, along with the potentially deadly drawbacks of each:

  • Photo identification with mushroom apps: Even a human can’t accurately identify every mushroom from a photograph.

  • AI-generated field guides: These have been found to contain incorrect information. (It hasn’t been conclusively proven that the guidebooks in question were written by AI, but they appear to be.)

  • AI-generated images: When Claypool tested image-generation tools, they regularly drew the features of edible and toxic mushrooms incorrectly and labeled them incorrectly as well.

  • AI picture descriptions: Foragers rely on the specific terminology guidebooks use to describe the characteristics of mushrooms. When Claypool asked an AI tool to describe a photo of a toxic mushroom, it said the mushroom had “free gills” when in fact its gills were attached, and it misidentified other identifying features too.

  • AI-summarized search results: Google happily provided a recipe for cooking toxic mushrooms, claiming that boiling can remove the toxins. (It can’t.)

The AI tools Claypool tested also tossed out misinformation here and there, implying that toxic mushrooms are brightly colored and that brightly colored mushrooms are toxic (neither is generally true).

The bottom line for you and me? AI doesn’t really “know” anything. These algorithms are best thought of as predictive: ask one a question and it predicts what an expert’s answer to that question might look like. Sometimes those predictions are good, and sometimes they are absolutely terrible.
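To make that “predictive” idea concrete, here’s a deliberately tiny, made-up sketch in Python. It is nothing like the real apps (which are vastly more complex), and every name and phrase in it is invented for illustration; the point is that the program only knows which words tended to follow which in its “training” text, so its answer reflects those patterns rather than any fact about the mushroom in front of you.

```python
# A toy "predictor" -- NOT any real AI product, just an illustration.
# It only knows which words followed which in its training text; it has
# no concept of mushrooms or toxicity, it just continues the pattern.
import random
from collections import defaultdict

training_text = (
    "this mushroom is a choice edible . "
    "this mushroom is a choice edible . "
    "this mushroom is deadly poisonous . "
)

# Build a table: word -> list of words that followed it in training.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def predict(prompt, length=4):
    """Continue the prompt by repeatedly picking a word that followed
    the previous word in training -- pattern-matching, not knowledge."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

# The answer depends only on word statistics in the training text,
# never on the actual mushroom you photographed.
print(predict("this mushroom is"))
```

Two out of three times, this toy will answer “choice edible” simply because that phrase showed up more often in its training text, which is exactly the kind of confidence you do not want standing between you and a destroying angel.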

Just like the robot Bender from Futurama, who cooked terrible food for his human friends because he didn’t understand the concept of taste, AI creates text or images that at first glance look like what you asked for, without understanding the concepts involved. AI doesn’t know which mushrooms are edible and which are toxic. It doesn’t even know what a mushroom is. It just spits out words and images it thinks will make you happy, and it doesn’t care whether you live or die.
