(Human) agents of impact are shaping the algorithm for ‘good AI’
Slowly, then all at once. The adage sums up the feeling of late that artificial intelligence, under development for years, is suddenly on the verge of disrupting every facet of life. That is putting new urgency behind efforts to ensure that AI models are designed in ways that benefit humanity, preserve agency and contribute to the public good.
“We need a mindset shift so that we’re thinking more about technology as tools, and not as our masters,” said Katy Knight of Siegel Family Endowment. “There’s a very strong imperative from Big Tech to make us believe that the tech itself is the end. The reality is that we have the agency to choose what we adopt and what we don’t adopt.”
ImpactAlpha’s Agents of Impact Call brought together Knight, Chris Jurgens of Omidyar Network, Paul Fehlinger of Project Liberty Institute and Mohamed Nanabhay of Mozilla Ventures to sketch an investment thesis around responsible and ethical AI and kick off ImpactAlpha’s Shaping the Algorithm beat, in partnership with Siegel Family Endowment.
A growing number of impact investors are getting comfortable with investing in the infrastructure of AI — AI governance models, “orchestration” layers, and assurance tech — in addition to AI-enabled applications. “If we don’t have that core AI stack safe and secure, then all the things we want to do for AI for good on top of it aren’t going to work,” said Jurgens.
Omidyar is working with partners to build some of the frameworks investors need to assess AI – what Jurgens described as akin to the Task Force on Climate-related Financial Disclosures, but to identify, disclose and engage on material AI risks. It’s also convening foundations, family offices, pension funds and other asset owners on the issue. “There’s a really strong cohort of investors now that are ready to lead on this,” Jurgens said.
Omidyar itself is prepping a vehicle, Project Alexandria, that will invest in AI assurance and safety tech, such as algorithm auditing and deepfake and other forms of fraud detection. “We need to treat that as an impact vertical, just as we would look at applications in health or climate as impact,” he said.
Fehlinger of the Project Liberty Institute, a nonprofit focused on building a people-centered digital future, has also been urging investors to invest in AI infrastructure, as opposed to just AI-enabled applications. “There’s a lot of need for innovation at the infrastructure level, the protocol level of the AI economy,” he said.
Techno-pragmatism
Nanabhay of the open source nonprofit Mozilla zoomed in from India, where the India AI Impact Summit was taking place — and clashing notions of the AI future were on display.
There was the usual excitement and optimism around AI as a productivity booster that can usher in new breakthroughs, Nanabhay reported. But there was also “a palpable sense of fear in the air as people think about some of the risks, think about job losses that are coming.”

Mozilla, an early thinker on building trustworthy AI systems, stood up a $35 million venture capital fund three years ago to invest in startups developing safe and inclusive AI. It has made some 55 investments to date.
A self-described techno-pragmatist, Knight staked out a middle ground between the doomers and the zoomers. She stressed the need to build bridges between the AI for Good crowd and the Big Tech world that is raising vast sums of money in its race to dominate AI. “It can’t be competition. We will lose,” she said. “The scale of capital is just not there to serve every public good need. So how are we going to be savvy about making things greater than the sum of their parts, making something out of the work that we’ve been invested in across the social impact spaces we have cared about for a long time with AI and for AI in this moment, and not reinvent them?”
For impact investors who have pursued good jobs, shared prosperity and a healthy planet, she said, “it’s time for us to take these threads and really weave them together and think more deeply about the moment of opportunity.”
Lively chat
Participants from foundations, venture funds, family offices, nonprofits and startups shared their takes in an active webinar chat sidebar.
Katie Hallaran of Accion Ventures said that her team works with “early-stage inclusive fintech companies, helping them integrate AI in responsible ways.” Sorenson Impact Foundation’s Ibrahim Rashid reported that he has “taught a monthly workshop for 9+ months teaching investors how to build AI Agents to help with sourcing, diligence, reporting, and fund operations.”
A major thread in the chat centered on capacity gaps between the tech and impact sectors. Steven Clift put it bluntly: “The gap between the tech and impact world that Chris mentions is huge. In short, the impact world does not have the talent in-house to shape the direction of AI.” Nish Acharya of Equal Innovation added: “We are scouting AI startups for foundations and global NGOs around the world. Lots going on, but not as much as we would hope in this area.”
Another point of frustration: funding constraints for public-interest AI. Arclet’s Adrienne Ammerman described “how challenging it is to get funding for this kind of work” in sustainable, AI-enabled public health communications. Susan Bratton said that “to build a safe, effective, explainable, traceable, auditable platform… takes time,” adding that “the 5-7 year time frame isn’t necessarily right for that. [We] need longer money.” Astrid Scholz argued that the space “would benefit from more revenue based finance” to keep companies’ missions intact.
Equity and inclusion emerged as another focal point. Tanuja Prasad raised concerns about systemic bias, warning that AI’s probabilistic logic is “reinforcing the middle,” meaning “innovation and creativity are always in the minority.” Brett Kettle, who works in ecological modeling, agreed, and added that “so much of what is important are critical edge cases.”