A new report documents the business benefits of ‘responsible AI’


As companies embrace artificial intelligence to drive business strategy, the topic of responsible AI implementation is gaining attention.

A new global research study defines responsible AI as “a framework with principles, policies, tools and processes to ensure that AI systems are designed and built to serve individuals and society well, while still achieving dynamic business impact.”

The study, conducted by MIT Sloan Management Review and Boston Consulting Group, found that while AI initiatives are on the rise, responsible AI is lagging behind.

The majority of organizations surveyed view responsible AI mainly as a way to mitigate technology risks – issues of security, discrimination, fairness, and privacy – and admit they have not made RAI a priority. That gap increases the chance of failure and exposes companies to regulatory, financial, and customer-satisfaction risks.

The MIT Sloan/BCG report, which included interviews with C-level executives and AI experts along with survey results, found significant gaps between companies’ interest in RAI and implementation practices within the organization.

The survey, conducted in the spring of 2022, analyzed responses from 1,093 participants representing organizations in 96 countries and 22 industries, each reporting at least $100 million in annual revenue.

A majority of survey respondents (84%) believe RAI should be a top management priority, but just over half (56%) say RAI has achieved that status, and only a quarter say they have a fully mature RAI program.

More than half of respondents (52%) said their organizations practice RAI to some degree, but 79% said their implementation was limited in scale and scope.

Why are companies struggling to walk the talk when it comes to RAI? Part of the problem is confusion over the term itself, which overlaps with ethical AI and is still evolving – a barrier cited by 36% of survey respondents.

Other factors contributing to limited RAI implementation fall under general organizational challenges:

  • 54% of survey respondents say they struggle to find RAI expertise and talent.
  • 53% cite a lack of training or knowledge among employees.
  • 43% report limited prioritization and attention from senior leaders.
  • A lack of funding (43%) and a lack of awareness about RAI initiatives (42%) also hinder the maturity of RAI programs.

As AI becomes more prevalent in business, companies face growing pressure to close these gaps and execute successfully on RAI, the report said.

“As we navigate the complexity and unknowns of an AI-powered future, developing a clear ethical framework is not optional—it’s imperative,” said Riyanka Roy Chouhury, a CodeX fellow at Stanford Law School’s Center for Computational Law and one of the AI experts interviewed for the report.

Getting RAI right

Companies with the most mature RAI programs — 16% of respondents to the MIT Sloan/BCG survey, which the report calls “RAI leaders” — have a lot in common. They see RAI as an organizational issue rather than a technical one, and they invest time and resources in creating comprehensive RAI programs.

These companies take a more strategic approach to RAI, guided by a multi-stakeholder view of responsibility that aligns with corporate values and the interests of society at large.

Taking a leadership role in RAI translates into measurable business benefits such as better products and services, improved long-term profitability, and even improved recruitment and retention. 41% of RAI leaders found measurable business benefits compared to 14% of companies that did not invest in RAI.

RAI leaders are also better equipped for the increasingly regulated climate around AI – more than half (51%) feel prepared to meet the requirements of emerging AI regulations, compared with less than a third of organizations with nascent RAI initiatives, the survey found.

Companies with mature RAI programs adhere to some common best practices. Among them:

Make RAI part of the executive agenda. RAI is not just a “check the box” exercise but part of the organization’s top management agenda. For example, 77% of RAI leader organizations invest material resources (training, talent, budget) in RAI efforts, compared with 39% of respondents overall.

A clear message from the top that implementing AI responsibly is an organizational priority carries far more weight than leaving RAI decisions to product managers or software developers alone.

“Without leadership support, practitioners may lack the motivation, time and resources needed to prioritize RAI,” said Steven Voslow, a digital policy specialist at UNICEF Global Awareness and Policy and one of the experts consulted for the MIT Sloan/BCG report.

In fact, nearly half (47%) of RAI leaders say they involve the CEO in their RAI efforts, twice the rate of their peers.

Take a broader view. Beyond strong leadership involvement, mature RAI programs include a wide range of participants in these efforts — an average of 5.8 roles at leader companies, compared with 3.9 at non-leader organizations, the study found.

The majority of leading companies (73%) approach RAI as part of their corporate social responsibility efforts, treating the wider community as a key stakeholder. For these companies, the values and principles that define their approach to responsible behavior apply to their entire portfolio of technologies and systems – including processes like RAI.

“Many of the core ideas behind responsibility, such as anti-bias, transparency and fairness, are aligned with the basic principles of corporate social responsibility,” said Nitzan Mekel Bobrov, AI officer at eBay and one of the experts interviewed for the survey. “So it should already be natural for an organization to collaborate on its AI efforts.”

Start earlier, not after the fact. Research shows that it takes an average of three years to begin realizing business benefits from RAI. Companies should therefore start RAI initiatives as soon as possible, building the necessary expertise and providing training along the way. The AI experts interviewed for the survey suggest reaching RAI maturity ahead of AI maturity to prevent failures and significantly reduce the ethical and business risks that come with scaling AI efforts.

Considering the high stakes around artificial intelligence, RAI should be prioritized as an organizational mission, not just a technological issue. Companies that can connect RAI with their mission to be responsible corporate citizens will do well.

Read the report


