

How We Balance AI and Social Impact 

Artificial Intelligence (AI) has quickly become a part of our daily lives, particularly at work. For those of us working in the social impact sector, tools like ChatGPT, Microsoft Copilot, and AI-powered notetakers promise to save time, reduce administrative burdens, and even spark creativity when we’re stuck.

At Balanced Good, as at other organizations in the sector, our work is inherently human-centred. Our sector exists to address social and environmental needs, and its solutions must come from the people and communities most impacted, not from ChatGPT (sorry, Chat!).

But as we navigate this new technological environment, one principle must guide us: AI can’t replace human judgement. Our work involves nuance, empathy, and an understanding of context that no algorithm can replicate. That is why we must always keep the human in the loop (HITL). HITL is a process in which human judgement, decision-making, and oversight are intentionally and actively integrated into automated and AI-powered workflows.

Our AI Use Framework is based on the HITL model to guide us as we adapt to an AI-powered world. It outlines not only when and how AI can be used in our work, but also the limits of its use. The guidelines aim to protect privacy, uphold ethical standards, and ensure that AI serves as a tool for good, not a source of unintended harm.   

Before we dive into our Framework, here are a few AI concerns that inspired us to write it.  

Privacy and Consent  

Working in this sector often means handling sensitive information about vulnerable people and communities. Feeding that information into AI tools can compromise confidentiality and trust. That’s why our framework requires informed, explicit consent before AI is used in projects or meetings and forbids entering client data into generative AI tools.

Bias and Equity 

AI tools are not neutral; they are shaped by the same biases that drive real-world inequities. Systemic bias is embedded in the data that AI systems are trained on, and if we don’t review AI outputs critically, we risk reinforcing the very harmful narratives we set out to challenge.

The Danger of “Almost Right” 

AI often produces information that sounds accurate but is factually wrong, and it can fabricate details that appear in neither its prompts nor its source data. In a sector where decisions have real-world impacts, this makes fact-checking and critical oversight essential. Whether summarizing a policy brief or generating statistics, human review is required to ensure accuracy and credibility.

The Hidden Environmental Cost 

It’s easy to forget that AI runs on vast computing power. Every prompt consumes energy and water, contributing to carbon emissions and straining local water supplies. For a sector committed to environmental sustainability, this footprint must be weighed when deciding whether AI is truly necessary or whether a simple search or human effort is enough.

The Risk of Losing Our Human Edge 

The greatest danger of AI is over-reliance. AI is already having measurable effects on how we think, and in a sector that thrives on empathy, critical thinking, cultural understanding, and relationships, that is a serious risk. If we lean too heavily on AI, we risk weakening the very qualities that make our work transformative and impactful.

Our Promise to Clients 

Our AI Use Framework enables us to use AI in ways that serve our mission. It is designed to guide us in integrating AI tools ethically, safely, and effectively while respecting the clients and communities we serve.  

  • Transparency: We will inform clients whenever we use AI tools, explain their limitations, and maintain open dialogue about their use. Client comfort levels will guide the extent of AI integration.  
  • Privacy: We will only use AI tools with adequate privacy standards, policies, and practices. We will never enter client data or identifiable information into AI tools.  
  • Consent: We will only use AI tools with informed, explicit consent from our clients.  
  • Appropriate Purpose: We will not use AI in projects or meetings involving confidential matters, including HR discussions, board meetings, or conflict resolution sessions. We will only use AI tools in a project when they support our work without compromising confidentiality or ethical standards.
  • Accuracy: We will treat AI outputs as rough drafts, and always review, edit, and verify the quality and accuracy of the content.  
  • Bias Awareness and Equity: We will check for bias and review AI-generated content through a critical lens.
  • Accountability: We will remain responsible for all work supported by AI, safeguard client data, and remain open to feedback.  
  • Ecological Considerations: We will be mindful of AI’s energy, water, and resource use and avoid unnecessary prompting.  

AI can support our work, but it cannot replace human judgement, empathy, or context. By using it responsibly, we ensure technology enhances, not replaces, the human-centred approach that drives our work. 

Does your nonprofit workplace have an AI policy to guide your team's engagement with AI tools? Even if your practice is not to adopt AI, it’s important to have policies that provide clarity for your team. If you are looking for guidance on developing appropriate workplace policies for your team, send us an email.