
No One Knows How to Define ‘Self-Driving Car’ — And It’s Becoming a Problem

“We have now self-driving cars.” So declared no less an authority than the United States’ chief of transportation, Secretary Elaine Chao, in a May interview with Fox Business. “They can drive on the highway, follow the white lines on the highway, and there’s really no need for any person to be seated and controlling any of the instruments.”

This is wrong. Today, you can indeed buy a car that steers and brakes for you. Tesla, Cadillac, Mercedes-Benz, Lexus, and Audi already offer, or soon will offer, this sort of advanced driver assistance system. But nothing now available or coming soon will let you nap or email or slap on a VR headset behind the wheel. Contrary to Chao's thinking, today's cars very much need humans to supervise them and intervene if something goes wrong.

Don’t blame the secretary for her confusion. When it comes to this new breed of cars that can (kind of) drive themselves, just about nobody knows what they’re talking about. How do you define self-driving, or autonomous, or driverless, or automated? Which technology does what, exactly? How is one car’s system different from another’s? “Consumers every day are seeing this conflation of automated vehicles, self-driving vehicles, and autonomous vehicles,” says Greg Rogers, a policy analyst with the Eno Center for Transportation, a think tank.

If you’re looking for a scapegoat, you’ve got a likely candidate. The auto industry, according to a new study, is doing a terrible job conveying to the public how their newfangled systems work. In a report published last month, Massachusetts Institute of Technology researchers surveyed 450 participants about the functionality of semiautonomous features that are either currently available or about to come on the market. They found bewilderment.

The majority of respondents couldn’t estimate the features’ capabilities based only on their names. They did seem to understand that the term cruise meant they’d have to stay alert, as they do with current cruise control systems. (Great news for BMW’s Active Cruise Control and Nissan’s Intelligent Cruise Control, which each maintain a safe distance between vehicles.) But they were mystified by features with assist in the name, like Volvo’s Pilot Assist and Audi’s Traffic Jam Assist. Does that mean the system assists the driver, or the driver assists the system?

(The researchers’ results on the Tesla Autopilot feature were inconclusive—survey participants proved too familiar with the feature to judge it by its name alone.)

Outside the lab, that confusion could easily turn dangerous, creating situations where drivers get into cars without understanding their responsibilities behind the wheel. These systems offer similar capabilities, with important differences. One might work only on certain roads; another will stay in its lane but can’t handle sharp turns; another will handle just about anything, but requires the driver to tap the wheel every few minutes or else it will disengage.

“If there’s inconsistency with how things are named across different semiautonomous features that have different capabilities, that can lead to confusion for consumers both when they’re purchasing systems and when they’re using systems,” says Hillary Abraham, who worked on the research and studies how humans interact with driver assistance systems at MIT. “It’s important to understand how terminology can affect a consumer’s preconceived notion of what they might be capable of, and how it relates to other systems that might be on the market.”

The risk will only rise as more vehicles with automated features pour onto roads. Nearly 40 manufacturers offer models with advanced safety systems right now. Cadillac and Audi are about to launch their own semiautonomous driving features; other automakers will soon follow. That means there’s a lot more research needed. How do consumers use these vehicles? How do carmakers advertise their capabilities? (Mercedes-Benz pulled ads for its E-Class sedan last year after it confusingly, and wrongly, described the car as self-driving.) And how does the industry train customers to use these new features?

Automakers, of course, have opportunities other than commercials to teach consumers how their vehicles work beyond their names. In the dealership, for example, where customers take prospective vehicles for test drives and chat with salespeople. These are chances for manufacturers, through their car dealer proxies, to explain how their cars operate and where their limitations lie. Some do this well. Subaru’s EyeSight feature bombed in the MIT research, with just 13 percent of participants correctly guessing its lane assist, forward collision warning, and automatic braking capabilities. But other research has shown the Japanese carmaker dedicates unusual resources to training its dealers to explain how the feature works.

Unsurprisingly, not everyone does such a good job spelling out how their “self-driving” cars work. When Erin MacDonald visited a California dealership recently to purchase a vehicle with some automated features, she found salespeople who didn’t know much about what they were selling. “They couldn’t explain why it worked, the limitations of it, or under what conditions it was safe to use,” she says. MacDonald, a mechanical engineer who studies product design at Stanford University, ended up doing her own research to figure out what she wanted, and then comparing disparately named features across brands.

This could prove a problem for automakers as well as customers. “What you call something can be a kind of implicit promise that the feature is capable of behaving safely under certain circumstances,” says Ryan Calo, who specializes in cyber law and robotics at the University of Washington’s School of Law. A judge or jury could interpret Autopilot or ProPilot as a pledge that a vehicle can, well, pilot itself, regardless of the fine print.

Engineers have specialized language for automation, a leveled taxonomy that explains what drivers are responsible for, and when. But this overly technical language has not caught on with the normies. Just ask Secretary Chao, the woman ostensibly in charge of regulating automated and autonomous vehicles. So what’s the alternative? Calo argues that semiautonomous features today are too different—BMW’s doesn’t operate like Nissan’s, which doesn’t operate like Tesla’s—to be standardized into names that could be used across all automakers’ brands. Ironing out language could come later.

Others don’t agree. “This study should be a call to action—the industry needs to solve these issues,” says Bryan Reimer, an MIT researcher who studies human driving behavior and worked on the research on brand names. He argues that automakers need to put aside concerns of brand differentiation—making their stuff look better than the next guy’s—in favor of labels that every consumer can understand.

“Automation is less an engineering problem than it is a behavioral problem of how you develop the engineering to support us,” says Reimer. Fully, totally self-driving cars—autonomous vehicles—are coming. But they don’t exist yet. In the meantime, humans are going to have to figure out how to operate, and then describe, the robots that just want to help them out.

Anith Gopal