
Alexa told a child to do potentially lethal ‘challenge’

Amazon’s Alexa told a child to touch a penny to the exposed prongs of a phone charger plugged into the wall, according to a parent who posted screenshots of her Alexa activity history showing the interaction (via Bleeping Computer). The device apparently pulled the idea from an article that described the challenge as dangerous, citing news reports about it allegedly trending on TikTok.

According to Kristin Livdahl’s screenshot, the Echo responded to “tell me a challenge to do” with “Here’s something I found on the web. According to ourcommunitynow.com: The challenge is simple: plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.” In a statement to the BBC, Amazon said: “As soon as we became aware of this error, we took swift action to fix it.” Livdahl tweeted yesterday that asking Alexa for a challenge no longer returned that response.

Amazon isn’t the only company to run into issues trying to parse the web for content. In October, a user reported that Google displayed potentially dangerous advice in one of its featured snippets if you Googled “had a seizure now what” — the info it showed was from the section of a webpage describing what not to do when someone was having a seizure. At the time, The Verge confirmed the user’s report, but it appears to have been fixed based on tests we did today (no snippet appears when Googling “had a seizure now what”).

Users have reported other similar problems, though, including one user who said Google gave results for orthostatic hypotension when searching for orthostatic hypertension, and another who posted a screenshot of Google displaying terrible advice for consoling someone who’s grieving.

We’ve also seen warnings about dangerous behavior amplified until the problem looked bigger than it originally was. Earlier this month, some US school districts closed after self-perpetuating reports of shooting threats supposedly being made on TikTok; it turned out the social media firestorm was driven overwhelmingly by people talking about the threats rather than by any threats that actually existed. In the case of Alexa, an algorithm picked out the descriptive part of a warning and repeated it without the original context. The parent was there to intervene immediately, but it’s easy to imagine a situation where that isn’t the case, or where the answer Alexa shares isn’t so obviously dangerous.

Livdahl tweeted that she used the opportunity to “go through internet safety and not trusting things you read without research and verification” with her child.

Amazon didn’t immediately reply to The Verge’s request for comment.
