It should be clear by now that AI is capable of giving very, very bad advice. Sometimes the advice it gives is just plain stupid. Other times it is actively dangerous.
404 Media reports on an incident from the latter category, in which a popular Facebook group dedicated to mushroom foraging was invaded by an AI agent, which then offered suggestions on how to cook a dangerous mushroom. The agent in question, called “FungiFriend,” ended up in the chat of the Northeast Mushroom Identification & Discussion Facebook group, which has about 13,000 members, and proceeded to hand out some truly terrible advice.
In what seems like it must have been a test of the AI agent’s knowledge, a member of the group asked it “how do you cook Sarcosphaera coronaria” – a type of mushroom that hyperaccumulates arsenic and has led to at least one death, 404 writes. When asked about the dangerous mushroom, FungiFriend informed members that it is “edible but rare,” then added that “cooking methods mentioned by some enthusiasts include sautéing it in butter, adding it to soups or stews, and pickling.”
404 writer Jason Koebler says he was alerted to the incident by Rick Claypool, the research director for the consumer safety group Public Citizen. Claypool, a dedicated mushroom forager, has previously written about the dangerous intersection between AI agents and his hobby, noting that using automation to distinguish between edible and poisonous mushrooms is “a risky activity that requires real skills that current AI systems cannot reliably emulate.” Claypool claims that Facebook encouraged mobile users to add the AI agent to the group chat.
This is reminiscent of a separate incident from last year, in which an AI-based meal-preparation app encouraged users to make sandwiches with mosquito repellent, as well as another recipe involving chlorine gas. In yet another well-documented case, an AI agent encouraged users to eat rocks. Suffice it to say, cooking may be one area that doesn’t really need AI integration.
Our own experimentation with AI platforms, such as the AI summaries Google recently launched, has shown that these algorithmically guided agents often have no idea what they are talking about (Google’s program, for example, once tried to convince me that dogs play sports, and told me that the best way to make pizza was to coat it with glue). For whatever reason, corporate America continues to rush the integration of AI into customer service applications across the Internet, despite the obvious risk of feeding the public a lot of bad advice. The attitude seems to be: it doesn’t matter if the information is wrong, as long as we don’t have to hire a real human to do this work.