Improving trust in automated technology

Trust and assurance – from consumers, the public and governments – will be critical issues for the AI and autonomous technology space in the coming year. However, gaining that trust will require fundamental innovations in the way autonomous systems are tested and evaluated, said Shawn Kimmel, managing director of EY-Parthenon Quantitative Strategies and Solutions at Ernst & Young LLP. Thankfully, the industry now has access to new techniques and methods that promise to transform the field.

The new autonomous era

Automation has historically been positioned as a replacement for “boring, dirty and dangerous” jobs, and it still is – whether that means working in underground mines, maintaining coastal infrastructure, or performing tasks in medical facilities during the pandemic. Keeping people out of harm’s way is a worthy goal in sectors as diverse as energy, commodities and healthcare.

But autonomous technologies are moving beyond those applications to improving efficiency and comfort in everyday spaces and environments, says Kimmel, thanks to innovations in computer vision, artificial intelligence, robotics, materials and data. Warehouse robots have evolved from fixed-track machines that shuttle materials from A to B into intelligent systems that can move freely through space, detect obstacles, reroute based on stock levels, and handle delicate items. In operating rooms, robots assist with microsurgical procedures by eliminating human hand tremor. Startups in the autonomous vehicle sector are developing applications and services in areas such as mapping, data management and sensors. Robotaxis are operating commercially in cities from San Francisco and Los Angeles to Chongqing.

As autonomous technology enters more contexts, from public roads to medical clinics, safety and reliability will become both more important and more difficult to ensure. Self-driving vehicles and unmanned aerial vehicles have already been involved in collisions and injuries. “Hybrid” environments, in which human and autonomous agents operate side by side, create new safety challenges of their own.

The expansion of autonomous technology into new domains has broadened the range of stakeholders, from device manufacturers to software startups. This “system of systems” environment complicates testing, safety assurance, and validation. Longer supply chains, with more data flows and connectivity, introduce or exacerbate security and cyber risks.

As the behavior of autonomous systems becomes more complex and the number of stakeholders grows, a common framework and terminology for safety models and interoperability testing becomes necessary. “Traditional systems engineering techniques are stretched to their limits when it comes to autonomy,” says Kimmel. “As autonomous systems take on more complex and safety-critical tasks, a much larger set of requirements needs to be tested.” That need, in turn, drives a push for efficiency, to avoid ballooning test costs.

That requires innovations such as metrics that can predict safety performance and anticipate unexpected “black swan” events, Kimmel argues, rather than relying on conventional measures like mean time between failures. It also means finding ways to identify the most relevant and impactful test cases. Industry must increase the sophistication of its testing techniques without making the process overly complex, expensive and inefficient. Achieving that goal requires managing the set of unknowns – reducing the effectively infinite test and safety “state space” to a finite set of testable scenarios.
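The idea of reducing an effectively infinite state space to a finite, testable set can be illustrated with a toy sketch. Everything below – the scenario parameters, the risk heuristic, and the test budget – is a hypothetical example for illustration, not a method described by Kimmel or EY:

```python
import itertools

# Hypothetical scenario parameters for an autonomous-driving test space.
# A real state space is effectively infinite; this small grid stands in for one.
speeds = [10, 30, 50, 70]           # ego vehicle speed, mph (assumed values)
weather = ["clear", "rain", "fog"]
actors = [0, 1, 5, 20]              # number of other road users present

def risk_score(speed, wx, n_actors):
    """Toy heuristic: weight scenarios by an assumed likelihood of harm."""
    wx_factor = {"clear": 1.0, "rain": 1.5, "fog": 2.0}[wx]
    return speed * wx_factor * (1 + n_actors)

# Enumerate the candidate space, then keep only the highest-risk scenarios
# as the finite test suite, under a fixed testing budget.
space = list(itertools.product(speeds, weather, actors))
ranked = sorted(space, key=lambda s: risk_score(*s), reverse=True)
test_suite = ranked[:10]  # budget: 10 scenarios out of 48 candidates

print(len(space), "candidate scenarios ->", len(test_suite), "selected")
print(test_suite[0])  # the highest-risk scenario under this heuristic
```

In practice the candidate space would be sampled rather than enumerated, and the scoring function would come from simulation data or field incidents rather than a hand-written formula, but the shape of the problem – rank, prune, test – is the same.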

Testing, testing

The toolkit for autonomous-system safety, testing and assurance will continue to evolve. Digital twins have become a growing asset in the autonomous vehicle space. Virtual and hybrid “in-the-loop” test environments allow system-level testing that spans components built by multiple organizations across the supply chain, and reduce the cost and complexity of real-world testing through digital augmentation.
