Video of a sidewalk delivery robot crossing yellow warning tape and driving through a crime scene in Los Angeles went viral this week, amassing more than 650,000 views on Twitter and sparking a debate over whether the technology is ready for prime time.
It turns out that the robot error, at least in this case, was caused by humans.
The video of the event was taken and posted on Twitter by William Gude, the owner of Filming the LA Police, a Los Angeles-based police surveillance account. Gude was in the area of a suspected school shooting at Hollywood High School around 10 a.m. when he captured the bot on video as it hovered at the corner, looking confused, until someone lifted the tape, allowing the bot to continue on its way through the crime scene.
Uber spinout Serve Robotics told TechCrunch that the robot’s self-driving system didn’t decide to enter the crime scene. It was the choice of a human operator who was remotely operating the bot.
The company’s delivery robots have so-called Level 4 autonomy, meaning they can drive themselves in certain conditions without needing a human to take over. Serve has been piloting its robots with Uber Eats in the area since May.
Serve Robotics has a policy that requires a human operator to remotely monitor and assist its bots at every intersection. The human operator will also take over remote control if the bot encounters an obstacle, such as a construction zone or a fallen tree, and cannot figure out how to get around it within 30 seconds.
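The takeover policy described above amounts to a simple decision rule. As a minimal sketch (the names and structure here are illustrative, not Serve's actual software), it could look like this:

```python
from dataclasses import dataclass

OBSTACLE_TIMEOUT_S = 30  # per the policy: 30 seconds stuck triggers takeover


@dataclass
class BotState:
    at_intersection: bool    # robot is approaching or within an intersection
    blocked_seconds: float   # time spent unable to route around an obstacle


def needs_human_takeover(state: BotState) -> bool:
    """Return True when control should pass to a remote human operator."""
    if state.at_intersection:
        return True  # every intersection is monitored and assisted by a human
    if state.blocked_seconds >= OBSTACLE_TIMEOUT_S:
        return True  # e.g. a construction zone or a fallen tree
    return False
```

Under this rule, the robot in the video behaved as designed: it reached an intersection, control passed to a human, and the mistake that followed was the human's.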
In this case, the robot, which had just completed a delivery, approached the intersection and a human operator took over, in accordance with the company’s internal operating policy. Initially, the human operator stopped the robot in front of the yellow warning tape. But when bystanders lifted the tape and seemingly “passed it around,” the human operator decided to continue, Serve Robotics CEO Ali Kashani told TechCrunch.
“The robot would never have crossed (on its own),” Kashani said. “There’s just a lot of systems to make sure it never crosses over until a human agrees.”
The error in judgment, he added, was someone’s decision to keep the robot crossing.
Whatever the reason, Kashani said it shouldn’t have happened. Serve has extracted data from the incident and is working on a new set of human and AI protocols to prevent this in the future, he added.
A few obvious steps will be to ensure that employees follow the standard operating procedure (or SOP), which includes proper training and developing new rules on what to do if someone tries to smuggle the robot through a barricade.
But Kashani said there are also ways to use software to prevent this from happening again.
Software can be used to help people make better decisions or to avoid an area altogether, he said. For example, the company can work with local law enforcement to send up-to-date information to the robot about police incidents so it can avoid those areas. Another option is to give the software the ability to identify law enforcement and then alert human decision makers and remind them of local laws.
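The first idea, feeding police-incident locations to the robot so it can steer clear, is essentially a geofencing check. A minimal sketch, assuming a hypothetical feed of incidents as (latitude, longitude, radius) entries rather than any real law-enforcement API:

```python
import math

# Hypothetical feed of active police incidents: (lat, lon, radius in meters).
# The data and structure are illustrative, not Serve's actual system.
ACTIVE_INCIDENTS = [
    (34.0980, -118.3390, 150.0),  # e.g. a cordon near Hollywood High School
]


def _distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters (equirectangular; fine at city scale)."""
    m_per_deg = 111_320.0
    dx = (lon2 - lon1) * m_per_deg * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * m_per_deg
    return math.hypot(dx, dy)


def inside_incident_zone(lat, lon):
    """True if a position falls within any reported incident radius."""
    return any(
        _distance_m(lat, lon, ilat, ilon) <= radius
        for ilat, ilon, radius in ACTIVE_INCIDENTS
    )
```

A planner could call `inside_incident_zone` on waypoints before routing, or use it to surface an alert to the remote operator, the second option Kashani describes.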
These lessons will be essential as robots advance and expand their areas of operation.
“The funny thing is that the robot did the right thing; it stopped,” Kashani said. “So it really comes down to giving people enough context to make good decisions until we’re confident enough that we don’t need people to make those decisions.”
Serve’s robots have not yet reached that point. However, Kashani told TechCrunch that the robots are becoming more and more independent and generally operate on their own, with two exceptions: intersections and unexpected blockages.
The scenario that unfolded this week goes against how many people see AI, Kashani said.
“I think the narrative in general is basically people are really good at edge cases, and then the AI makes mistakes, or maybe not ready for the real world,” Kashani said. “Oddly enough, we’re learning a bit of the opposite, which is that we’re seeing people make a lot of mistakes and we need to rely more on AI.”