Developing a multidisciplinary strategy with embedded responsible AI

Because AI models can change over time, accountability and oversight must be ongoing. Indeed, much of the appeal of deep learning, as opposed to conventional data tools, rests on its flexibility to manipulate and transform data. But that flexibility can lead to problems such as model drift, in which a model’s performance, such as its predictive accuracy, declines or begins to show flaws and biases the longer it operates in the wild. Annotation techniques and human-in-the-loop control systems allow data scientists and product owners not only to create high-quality AI models from the start, but also to monitor models after deployment and ensure they do not degrade below acceptable quality.
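As a concrete illustration of what such a post-deployment control can look like, the minimal sketch below (Python, using scikit-learn’s accuracy_score) scores a deployed model on a freshly human-annotated batch and flags it for review when accuracy falls too far below its deployment baseline. The thresholds and the alert_reviewers stub are illustrative assumptions, not a description of JPMorgan Chase’s actual tooling.

```python
# Minimal sketch of a post-deployment drift check (illustrative, not any
# firm's actual control system): compare live accuracy on freshly
# human-annotated samples against the accuracy recorded at deployment,
# and route the model to human reviewers when the gap exceeds a tolerance.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92   # accuracy measured at deployment time (assumed value)
DRIFT_TOLERANCE = 0.05     # acceptable drop before humans step in (assumed value)

def alert_reviewers(live_accuracy: float) -> None:
    # Placeholder for a real human-in-the-loop workflow (ticket, page, retraining queue).
    print(f"Model flagged for review: live accuracy {live_accuracy:.3f}")

def check_for_drift(model, recent_examples, recent_labels) -> float:
    """Score the model on a fresh, human-labeled batch and flag drift."""
    predictions = model.predict(recent_examples)
    live_accuracy = accuracy_score(recent_labels, predictions)
    if live_accuracy < BASELINE_ACCURACY - DRIFT_TOLERANCE:
        alert_reviewers(live_accuracy)
    return live_accuracy
```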

“We’re not just focusing on model training or making sure our training models aren’t biased; we’re also focusing on all the dimensions involved in the machine learning development lifecycle,” says Cukor. “It’s challenging, but this is the future of AI,” he says, adding that he wants to see that level of discipline.

Responsible AI prioritization

There is a clear business consensus that RAI is not only a good thing to do but a necessary one. In PwC’s 2022 AI Business Survey, 98% of respondents said they have at least some plans to improve AI governance, including monitoring and reporting on AI model performance and ensuring that AI decisions can be interpreted and easily explained.

Despite these aspirations, some companies have struggled to implement RAI. A PwC poll found that fewer than half of respondents have planned concrete RAI measures. Another survey, by MIT Sloan Management Review and Boston Consulting Group, found that most organizations that view RAI as a means of addressing technology risks, including risks related to security, discrimination, fairness, and privacy, admit to failing to prioritize it: 56% say RAI is a top priority, but only 25% have a fully mature program in place. Challenges can stem from organizational complexity and culture, lack of consensus on ethical practices or tools, insufficient capacity or staff training, regulatory uncertainty, and poor integration with existing risk and data practices.

Despite these significant operational challenges, Cukor says RAI is not optional. “For many, investing in the safeguards and practices that enable responsible innovation simply makes business sense. JPMorgan Chase is committed to enabling our clients to innovate responsibly, which means carefully balancing the challenges around issues such as resourcing, resilience, privacy, power, transparency, and business impact.” Investing early in the right governance and risk-management practices across all stages of the data and AI lifecycle, he argues, allows the company to accelerate innovation and ultimately serves as a competitive advantage for the organization.

For RAI initiatives to succeed, RAI must be embedded in the organization’s culture rather than bolted on as a technical checkbox. Implementing these cultural changes requires the right skills and mindset. A survey by MIT Sloan Management Review and Boston Consulting Group found that 54% of respondents struggle to find RAI expertise and talent, while 53% report a lack of training or knowledge among current employees.

Finding talent is easier said than done. RAI is a nascent field, and experts note the distinctly interdisciplinary nature of the work, which draws on contributions from sociologists, data scientists, philosophers, designers, policymakers, and lawyers, to name just a few.

“Given this unique context and the newness of our field, it’s rare to find individuals with the trifecta: technical skills in AI/ML, ethical expertise, and financial domain expertise,” Cukor says. “That’s why RAI needs to be a multidisciplinary practice with collaboration at its core. You need to hire experts from different domains to get the right mix of skills and perspectives, so you can have serious conversations and surface issues that others might overlook.”

This article is for informational purposes only and is not intended as legal, tax, financial, investment, accounting, or regulatory advice. The opinions expressed herein are the personal views of the individual(s) and do not necessarily represent the views of JPMorgan Chase & Co. JPMorgan Chase & Co. is not responsible for the accuracy of any statements, linked resources, reported findings, or quotations.

This content was produced by Insights, the custom content arm of MIT Technology Review. It is not written by the MIT Technology Review editorial staff.
