Tech companies face increasing pressure from asset managers over potential misuse of AI

Leading institutional investors are exerting greater pressure on technology companies to take responsibility for the potential misuse of artificial intelligence (AI), as concerns about liability for human rights issues linked to the software grow.

The Collective Impact Coalition for Digital Inclusion, a group of 32 financial institutions representing $6.9tn in assets under management, including Aviva Investors, Fidelity International, and HSBC Asset Management, is at the forefront of this push to encourage technology companies to commit to ethical AI.

Aviva Investors has recently held meetings with tech companies, including chipmakers, to urge them to strengthen protections against human rights risks associated with AI, such as surveillance, discrimination, unauthorized facial recognition, and mass layoffs.

Louise Piffaut, head of environmental, social, and governance equity integration at Aviva Investors, stated that meetings on this topic have intensified due to concerns about generative AI, like ChatGPT. Should engagement fail, Aviva Investors is willing to take actions such as voting against management at annual general meetings, raising concerns with regulators, or selling shares.

Piffaut added, “It’s easy for companies to evade accountability by saying it’s not their fault if their product is misused. That’s where the conversation becomes more challenging.”

In a recent note, investment bank Jefferies suggested that AI could soon surpass climate change as a significant concern for responsible investors.

The increased activity from the coalition comes two months after Nicolai Tangen, CEO of Norway’s $1.4tn oil fund, announced plans to establish guidelines for the ethical use of AI by the 9,000 companies in which it invests, along with a call for stricter regulation of the rapidly growing sector.

Aviva Investors, which manages over £226bn, has a stake in Taiwan Semiconductor Manufacturing Company, the world’s largest contract chipmaker, which is experiencing a surge in demand for advanced chips used to train large AI models like the one powering ChatGPT.

The asset manager also holds stakes in hardware and software firms Tencent Holdings, Samsung Electronics, MediaTek, and Nvidia, as well as tech companies developing generative AI tools, such as Alphabet and Microsoft.

In addition, Aviva Investors is engaging with consumer, media, and industrial companies to ensure their commitment to retraining employees rather than laying them off if their jobs are at risk due to AI-driven efficiencies.

Jenn-Hui Tan, head of stewardship and sustainable investing at Fidelity International, commented that concerns about social issues, such as privacy, algorithmic bias, and job security, have evolved into fundamental concerns for the future of democracy and humanity.

Tan stated that Fidelity International has been meeting with hardware, software, and internet companies to address these concerns and will consider divestment if sufficient progress is not made.

Legal & General Investment Management, the largest UK asset manager, publishes documents setting out its expectations of companies on issues such as deforestation and arms supplies, and is working on a similar document concerning artificial intelligence.

Kieron Boyle, CEO of the UK-government-funded Impact Investing Institute, observed that an increasing number of impact investors are concerned that AI could diminish entry-level opportunities for women and ethnic minorities, thereby setting back workforce diversity.

Richard Gardiner, EU public policy lead at the World Benchmarking Alliance, which launched the collective impact coalition, suggested that investors pushing tech companies to address their entire supply chain wish to proactively mitigate ethical and regulatory risks. Gardiner speculated that investors like Aviva may be concerned that failing to act could one day render them liable for human rights violations committed by the companies they invest in.

Gardiner added, “If you create a bullet that does nothing when in your hand, but it shoots someone when put in someone else’s hand, to what extent are you responsible for the product’s use? Investors want assurance that standards are in place to protect themselves.”

According to the World Benchmarking Alliance, only 44 of the 200 tech companies it assessed in March had published a framework for ethical artificial intelligence.

A few companies, such as Sony, Vodafone, and Deutsche Telekom, have implemented best practices. Sony has enforced ethics guidelines for AI across its employee base, Vodafone provides a right of redress for customers who feel unfairly treated by AI decisions, and Deutsche Telekom has a “kill switch” to deactivate AI systems at any time.

While industries like mining have long been expected to address human rights issues within their supply chains, regulators are now pushing for technology companies and financiers to assume the same responsibility.

The EU is currently negotiating the corporate due diligence directive, which is expected to require chipmakers and other companies to consider human rights risks across their value chains.

The OECD has also updated its voluntary guidelines for multinationals to include recommendations for tech companies to prevent harm to the environment and society through their AI-related products.
