A video of a sidewalk delivery robot crossing yellow caution tape and rolling through a Los Angeles crime scene went viral this week, garnering more than 650,000 views on Twitter and sparking debate about whether the technology is ready for prime time.
The robot’s error was caused by humans, at least in this case.
The video was posted to Twitter by William Goode, who runs Film The Police LA, an LA-based police watchdog account. It shows the bot paused at a street corner around 10:00 a.m. near a suspected shooting at Hollywood High School. The bot waited until a confused bystander lifted the caution tape, then continued on its way through the crime scene.
Uber spinout Serve Robotics told TechCrunch that the robot’s self-driving system didn’t decide to enter the crime scene. That choice was made by a remote human operator.
The company’s delivery robots have so-called Level 4 autonomy, meaning they can drive themselves in certain situations without a human needing to take over. Serve has been making deliveries with its robots for Uber Eats since May.
Serve Robotics has a policy that requires a human operator to remotely monitor and assist the bot at every intersection. A human operator will also take over remotely if the bot encounters an obstacle, such as a construction zone or a fallen tree, and can’t figure out how to get around it within 30 seconds.
In this case, the bot, which had just completed a delivery, was approached at the intersection and taken over by a human operator, per the company’s internal operating policy. At first, the human operator stopped at the yellow caution tape. But when a bystander lifted the tape and appeared to wave the bot through, the human operator decided to proceed, Serve Robotics CEO Ali Kashani told TechCrunch.
“The robot would never have crossed (on its own),” Kashani said. “There are just a lot of systems to ensure it would never cross until a human gives the go-ahead.”
The lapse in judgment here is that someone actually decided to keep crossing, he added.
Whatever the reason, Kashani said, it should not have happened. Serve has pulled data from the incident and is developing new protocols for both humans and the AI to prevent it from happening again, he said.
A few obvious steps are making sure employees follow standard operating procedures (SOPs), which include proper training and new rules for what to do if an individual tries to wave the robot through a barricade.
But Kashani says there are also ways software can help prevent this from happening again.
Software could be used to help people make better decisions or to avoid such areas altogether, he said. For example, the company could partner with local law enforcement to send the robot real-time updates on police incidents so it routes around those areas. Another option is to give the software the ability to identify law enforcement activity and then alert the human decision-makers and remind them of local laws.
These lessons will be critical as the robots advance and expand their operational domains.
“The funny thing is that the robot did the right thing; it stopped,” Kashani said. “So this really goes back to giving people enough context to make good decisions until we’re sure we don’t need people to make those decisions.”
Serve Robotics’ bots haven’t reached that point yet. However, Kashani told TechCrunch that the robots are becoming more autonomous and typically operate on their own, with two exceptions: intersections and blockages.
What happened this week runs counter to how many people view AI, Kashani said.
“I think the narrative in general is that humans are really good at edge cases and then the AI makes mistakes, or maybe it’s not ready for the real world,” Kashani said. “Ironically, we’re learning kind of the opposite: humans make a lot of mistakes, and we need to rely more on AI.”