We’ll be honest—the first time we started using AI, we felt uneasy.
We worried about losing the human touch—about what it meant for the stories we help bring to life. But that discomfort quickly became an invitation—to reimagine how technology can serve a social justice purpose, not silence it.
The reality is: AI is already here, quietly shaping the ways stories are told, data is shared, and communities are heard (or silenced). The question isn’t whether your organization should use AI—it’s how to do it ethically.
Integrating AI into your organization’s communications isn’t just about efficiency or productivity. It’s about responsibility. It’s about ensuring that the stories you tell with the help of machines still honor the people behind them. And here’s the good news: It’s not just possible. It’s happening.
Disability-centered innovators and design justice practitioners are already showing us how to design inclusively, intentionally, and with care.
So let’s explore how your organization can follow their lead.
Step 1: Start With Your Values—Not the Technology
Before you jump into tools like ChatGPT, Claude, or Canva’s Magic Studio, take a step back and ask: Why are we using AI in the first place?
Too often, organizations start with the tech and try to retrofit their values afterward. But as Sasha Costanza-Chock reminds us in Design Justice, “all design choices are political.” Every tool you use—including AI—encodes values. If your mission is rooted in justice, equity, and the inclusion of community voices, your use of AI should reflect that.
That means being intentional from the start.
Hold a team conversation about your organization’s values as they relate to storytelling. Maybe you value truth, dignity, accessibility or representation. Once those are named, evaluate how AI tools can support—not replace—those principles.
For example, if you’re an advocacy organization that uplifts immigrant voices, your communications strategy might include an AI translation tool. But instead of relying solely on the algorithm, you might have bilingual staff review and contextualize translations, ensuring that cultural nuance isn’t lost in the process.
AI can enhance your work, but it should never override your humanity.
Step 2: Choose Inclusive Design Over Efficiency
One of the most exciting shifts in ethical AI is happening in the disability justice movement.
Disability-centered organizations have long understood what tech companies are only now learning—that design isn’t inclusive unless it starts with those most impacted by exclusion.
Leaders in disability-centered AI ethics emphasize accessibility as a baseline, not a bonus. They’re asking questions like: Who benefits from this technology? Who might be harmed? Who gets left out?
These questions should guide your organization’s adoption of AI.
When evaluating tools for communications—from automated video captions to content generators—test them through a justice-centered lens:
- Does this tool represent a diverse range of voices, accents, and dialects?
- Can people with disabilities easily engage with the content it produces?
- Is there transparency about where the data comes from?
In practice, this might look like choosing an AI transcription service that prioritizes accessibility standards or avoiding platforms that train their models on stolen creative work.
Inclusive design slows you down just enough to make sure you’re not replicating the same inequities your mission seeks to address.
Step 3: Create Guardrails for How You’ll Use AI
Every organization needs a shared understanding of what “ethical use” actually looks like. Otherwise, well-meaning staff may experiment with tools in ways that unintentionally compromise privacy or authenticity.
Start simple: Write down a short AI use policy. It doesn’t have to be long or full of legal jargon. A few clear guidelines can go a long way:
- Transparency: Always disclose when AI-generated content is being used, especially in donor communications, reports, or storytelling campaigns.
- Data Privacy: Never input confidential or identifying information about clients, patients or participants into AI tools.
- Verification: Treat AI outputs as first drafts, not final truths. Always verify facts, statistics and quotes.
- Representation: Avoid using AI-generated images or videos that could distort reality or misrepresent community members.
These guardrails empower your team to experiment responsibly. Think of them as your ethical compass—keeping you creative, but grounded.
Step 4: Train Your Team to Think Critically About AI
AI isn’t something you can “set and forget.” Like any tool, it’s only as good as the people using it. That’s why education and reflection are key.
Host a learning session with your staff, interns or volunteers. Explore questions like:
- What’s exciting or concerning about using AI in our communications?
- How can we use it to amplify marginalized voices, not automate over them?
- Which communities might be most impacted by our AI choices?
You might even use the Design Justice Network Principles, which Sasha Costanza-Chock explores in Design Justice, as a discussion guide. They invite teams to center the voices of those most affected by design decisions—a principle that maps perfectly onto AI adoption.
Incorporate lived experience wherever possible. If your organization serves people with disabilities, immigrants, or Black and Brown communities, bring those voices into the room when shaping your approach to AI. Ethical integration isn’t just about technical policy—it’s about participatory practice.
Step 5: Lead With Storytelling That Reflects Humanity
At the end of the day, communications is about connection.
AI can help us reach more people, faster. But only human stories create lasting impact. So instead of thinking of AI as a storyteller, think of it as a story supporter. Use it to free up time for deeper conversations, not replace them.
Here are a few ways justice-motivated leaders are already doing this well:
- Automating accessibility: Nonprofits like the Disability Rights Education & Defense Fund are using AI to make their communications more accessible—captioning videos, translating materials, and summarizing reports in plain language.
- Amplifying all voices: In Kentucky, a coalition of local organizations used Sensemaker, an AI-powered storytelling platform, to help residents share their experiences and discover common ground. Instead of replacing human connection, the tool deepened it—revealing shared struggles and hopes through real community stories.
- Designing for dignity: The Anti-Eviction Mapping Project in California is building a community-based conversational storytelling tool that documents tenants’ experiences with displacement. Co-created with residents, it centers lived experience and ensures technology reflects people’s voices, not erases them.
When AI becomes a partner in creativity—not a substitute for it—your organization can tell stories that are both efficient and ethical, digital and deeply human.
The Future Is Already Here
The future of ethical AI in communications isn’t a far-off dream—it’s already unfolding in the hands of disabled technologists, justice-oriented designers, and community-led innovators.
They’re showing us what’s possible when we design with—not for—the people most affected by technology.
As organizations rooted in justice, we have a responsibility to follow their lead. The tools may be new, but the mission hasn’t changed: to tell stories that honor our dignity, shift power, and move people to action.
If we can remember that, we won’t just adapt to the AI era—we’ll help shape it.