Responsible AI has become critical to business


Investors, take note: your due diligence checklist is missing a critical component that can make or break your portfolio’s performance – responsible AI. Beyond screening and monitoring companies for future financial returns, growth potential and ESG criteria, private equity (PE) and venture capital (VC) investors are now starting to ask tough questions about how organizations use AI.

Given the rapid expansion of AI in recent years – around 75 percent of businesses are incorporating AI into their core strategies – it is no wonder the technology is top of mind for PE and VC investors. In 2020, AI accounted for 20 percent, or US$75 billion, of international VC investments. McKinsey & Company reports that AI could increase global productivity by approximately 1.2 percent per year, adding a total of US$13 trillion to the global economy by 2030.

AI now powers everything from online searches to medical advances to workplace productivity. But, like most technologies, it can cause problems. Opaque algorithms can threaten cybersecurity and conceal bias, and a lack of transparency can erode public trust. Take BlenderBot 3, which Meta launched in August 2022: the AI chatbot made anti-Semitic comments and factually inaccurate statements about the US presidential election, and even served offensive jokes to users.

In fact, in the European Consumer Organisation’s latest survey on AI, more than half of Europeans said they believed companies use AI to manipulate consumer decisions, while 60 percent of respondents in certain countries thought AI could lead to greater misuse of personal data.

So how can companies use AI responsibly, and how can cross-border organizations develop best practices for ethical AI governance? Below are some of our recommendations, drawn from the latest annual report of the Ethical AI Management Group, a collective of AI experts, entrepreneurs and investors dedicated to sharing practical insights and promoting responsible AI governance.

Best practices from the ESG movement

PE and VC investors can draw lessons from ESG – short for environmental, social and governance – to help their portfolio companies design and deploy AI that generates value without causing harm.

ESG is already mainstream in the PE realm and is slowly but surely making its mark on VC. We have seen the creation of international industry bodies such as VentureESG and ESG_VC, which advance sustainability in early-stage investing.

Gone are the days when it was enough for companies to deliver financial returns. Investors now regularly demand information about how fund portfolios align with the United Nations Sustainable Development Goals. Since 2018, significant steps have been taken to create comparable, global metrics for assessing ESG performance. For example, the International Sustainability Standards Board was launched at the United Nations Climate Change Conference in 2021 to develop international disclosure standards.

In addition to investing in carbon-capture technologies and developing environmentally friendly solutions, companies are being pushed to be more accountable for their social impact, including labour rights and the fair allocation of equity ownership. Investors are also growing more concerned about ESG: according to a 2022 report by Bain & Company and the Institutional Limited Partners Association, 90 percent of limited partners would walk away from an investment opportunity if it presented an ESG risk.

Simply put, investors can no longer afford to ignore their impact on the environment and the communities in which they operate. ESG has become a necessity rather than an add-on. The same can now be said of responsible AI.

A business case for responsible AI

There are clear parallels between the responsible AI and ESG movements: for one thing, both are simply good for business. As Manoj Saxena of the Responsible Artificial Intelligence Institute recently put it, “responsible AI is profitable.”

Many organizations are answering the call to prove this point, ensuring that AI is designed, deployed and governed by processes that protect against negative impacts. In 2019, the OECD established its AI Principles to promote the use of AI that is innovative, trustworthy and respectful of human rights and democratic values. Meanwhile, multi-sector partnerships, including the World Economic Forum’s Global AI Action Alliance and the Global Partnership on Artificial Intelligence, have set up working groups and initiatives to translate these principles into best practices, certification programmes and actionable tools.

VC firms focused on funding innovative and ethical AI companies, such as BGV, have also emerged. We believe early-stage investors have a responsibility to help build ethical AI startups, and that they can do so through better due diligence, capital allocation and portfolio management decisions.

The term “responsible AI” speaks to a bottom-line reality of business: investors have a duty to ensure that the companies they back are trustworthy and accountable. They should create value rather than destroy it, by looking carefully at their impact on society and not only at reputational risk.

Here are three reasons why investors should embrace and prioritize responsible AI:

  1. AI still needs guardrails

For a taste of what happens when companies seem to lose control of their own creations, look no further than social media, where digital platforms have become vehicles for everything from spreading fake news and privacy violations to cyberbullying and grooming.

With AI, there is still a window of opportunity to establish rules and principles for its ethical use. But once the genie is out of the bottle, it cannot be put back, and the consequences could be enormous.

  2. Regulatory pressure has real teeth

Governments around the world are tightening digital regulations covering online safety, cybersecurity, data privacy and AI. In particular, the European Union has passed the Digital Services Act and the Digital Markets Act (DMA), which aim to establish a safe online space where the fundamental rights of all users are protected.

The DMA in particular targets large platforms known as “gatekeepers” (think search engines, social media and online marketplaces), demanding transparency around advertising, data privacy, and illegal or harmful content. Taking effect as early as 2023, the DMA provides for fines of up to 6 percent of annual turnover for non-compliance, rising to 20 percent for repeated offences. In the worst-case scenario, regulators can even force the break-up of a company.

In a recent study of C-suite views on AI regulation and readiness, 95 percent of respondents across 17 geographies believed at least one part of their business would be affected by the EU regulations, and 77 percent identified regulation as a company-wide priority. Regulators in the Americas and Asia are watching developments in Europe closely and will likely follow suit over time.

  3. Market opportunities

It is estimated that 80 percent of organizations will dedicate at least 10 percent of their AI budgets to regulatory compliance by 2024, with 45 percent pledging to allocate at least 20 percent. This regulatory push creates a huge market opportunity for PE and VC investors to fund startups that make life easier for corporations already under pressure.

Investors wondering about the overall size of the AI market have reason to be optimistic. In 2021, the global AI economy was estimated at approximately US$59.7 billion, a figure predicted to reach some US$422 billion by 2028. The European Union estimates that the AI Act will drive further growth by increasing customer trust and adoption, making it easier for AI vendors to develop new and attractive products. Investors who prioritize responsible AI are well positioned to capture these benefits.

Worth the effort

The call for investors to integrate responsible AI into their investment practices can feel like a tall order. It requires specialized skills, new processes and continuous monitoring of portfolio company performance. Many fund managers, let alone limited partners, do not yet have the capacity to accomplish this.

But the impending regulation of AI, and the market opportunities it presents, will change how PE and VC firms operate. Some will exit, shifting capital to less regulated sectors. Others will fortify themselves against reputational risk by building internal capabilities and adding screening tools for AI risks. Still others will treat responsible AI as mission-critical.

Awareness is the biggest agent of change, and it grows as startups, enterprises, investors and policymakers adopt ethical best practices. Those who start early and engage actively in the shaping of these laws will reap the benefits of fuelling economic and sustainable growth.

This is an adaptation of an article published in the Ethical AI Management Group’s 2022 Annual Report.


