The Responsible AI Gap



It’s that time of year again: we are close to releasing our Force for Good Forecast (FFGF), our team's signature annual report. We publish this report to help corporate leaders navigate and lead on the year's most notable social and environmental advocacy trends.

We are putting the final touches on this year’s report, but we want to give a sneak peek at what’s to come by showcasing one of our 2026 trends: The Responsible AI Gap. Come back soon for the full report!

The Responsible AI Gap

A growing number of civil society stakeholders and campaigners are watching how companies are deploying Artificial Intelligence (AI). According to Just Capital’s December 2025 report, “corporate leaders’ enthusiasm (93%) for AI outpaces the general public (58%) and investors (80%).” This gap highlights the challenge companies face in navigating a fast-moving AI landscape while managing unintended impacts on people and societies. The proliferation of terms used to describe Responsible AI—including corporate digital responsibility (CDR), AI risk management, trustworthy AI, ethical AI, corporate AI responsibility (CAIR), human-centered AI, and rights-respecting AI—signals a broad, increasingly diverse set of stakeholder concerns.

Unsurprisingly, funders are taking action to promote responsible AI, so we anticipate increased resources flowing to civil society in the coming years. In February 2025, Schmidt Sciences launched a $10 million AI safety program to ensure that AI systems are “safe, trustworthy, and beneficial to society.” In late 2025, Humanity AI, a $500 million coalition initiative founded by ten philanthropies, was established to ensure “people have a stake” in AI’s future. Other funders, including the Patrick J. McGovern Foundation, Coefficient Giving, the AI Safety Fund, the Heising-Simons Foundation, and Jaan Tallinn, have also advanced responsible AI initiatives. Meanwhile, OpenAI’s conversion to a for-profit structure in late 2025 leaves its philanthropic arm, the OpenAI Foundation, with control of at least $130 billion. However, the organization’s philanthropic priorities remain unclear and its influence hard to predict.

One key stakeholder concern with AI systems is their potential to produce discriminatory and even abusive outcomes. Key voices like the Algorithmic Justice League, the Distributed AI Research Institute (DAIR), the Electronic Frontier Foundation (co-founded by the late John Perry Barlow, a former Future 500 board member), and Amnesty International are developing increasingly nuanced perspectives, such as distinguishing between individual AI harms (hiring discrimination, differential pricing, and increased surveillance) and collective social harms (loss of opportunity, economic disadvantage, and social stigmatization). They are calling on companies to adopt ethical design practices and greater transparency in AI use, and to implement safeguards “against privacy, safety, and due process problems.” Recent investigations by the Tech Transparency Project show that serious harms can proliferate despite formal corporate policies with explicit prohibitions.

AI’s environmental impacts, particularly its water use, are well documented and gaining prominence as AI infrastructure scales rapidly. Stakeholders ranging from the UN Environment Assembly to Greenpeace encourage companies to disclose the “direct environmental consequences” of AI across its lifecycle, including impacts on local communities, and to adopt more sustainable AI practices, such as powering infrastructure with renewable energy. While some companies have begun committing to initiatives such as water replenishment, we anticipate stakeholders will push for broader corporate commitments.

To foster global approaches, civil society groups and international bodies are calling for stronger corporate governance and accountability to guide AI’s societal impacts. Examples include:

  • Human Rights Watch has emphasized the importance of ensuring “meaningful human control,” non-discrimination, and the right to privacy and remedy. 

  • The UN Working Group on Business and Human Rights calls for AI system procurement and deployment that aligns with the UN Guiding Principles on Business and Human Rights (UNGPs) as well as “meaningful stakeholder engagement…especially with those most at risk of harm.” 

  • The Office of the United Nations High Commissioner for Human Rights (OHCHR) has also created practical guidance on applying the UNGPs to AI. (Note: While this guidance is oriented to the tech sector, other sectors may find useful insights.)

Despite rising awareness of AI risks, most companies are just beginning to consider responsible AI. The World Economic Forum’s Responsible AI Playbook, which surveyed 1,500 companies, found that most remain at an early stage of responsible AI maturity, with fewer than 1% reaching “fully operationalized responsible AI, taking a systemic, anticipatory approach that actively engages external stakeholders and risks across the wider value chain and ecosystem.” This research reveals a notable mismatch between recognized AI risks and corporate capacity to manage them, highlighting the need for companies to prioritize implementing robust AI frameworks.

Companies that fail to quickly adopt robust AI frameworks face a heightened risk of whistleblower incidents and increased civil society scrutiny. To help companies get started, WEF’s Responsible AI Playbook, Amnesty International’s Algorithmic Accountability toolkit, and the Thomson Reuters Foundation’s AI Company Data Initiative are useful starting points. Ultimately, we recommend companies carefully choose the approach that best demonstrates how they intend to manage AI risks in alignment with their board’s and investors’ expectations for responsibility, as we anticipate civil society will mobilize to hold companies accountable.

Bottom line:

AI is advancing faster than corporate governance, and stakeholders are actively shaping expectations. In this evolving space, companies must balance rapid innovation with preventing harm to workers, communities, the environment, and society. Companies that prioritize speed alone risk losing trust, compliance, and resilience, while those investing in transparent, rights-respecting AI governance will be better positioned to manage risk, stand up to scrutiny, and sustain long-term value as regulation intensifies. 

Authors: Wynn Kwan and the Future 500 Team


Future 500 is a non-profit consultancy that builds trust between companies, advocates, investors, and philanthropists to advance business as a force for good. We specialize in stakeholder engagement, sustainability strategy, and responsible communication. From stakeholder mapping to materiality assessments, partnership development to activist engagement, target setting to CSR reporting strategy, we empower our partners with the skills and relationships needed to systemically tackle today's most pressing environmental, social, and governance (ESG) challenges.

Want to learn more? Reach out any time.

 
