The Heart of Responsible AI: A Conversation on Ethics, Creativity, and Care

AI STORYTELLING

Maša Hilčišin

4/30/2025 · 3 min read

In my classes, where students come together from every corner of the world, we often open our hearts to conversations about the responsible use of AI. These discussions are deeply human—full of curiosity, concern, and hope.

Over time, I’ve noticed two recurring attitudes emerge: some students express genuine worries about AI—its impact on copyright, ethics, and the creative soul of our work—while others are excited, eager to explore AI’s possibilities for enhancing creative expression.

Ethical and Bias Considerations: AI and Visual Storytelling

Ethics in AI is not just a technical conversation. It’s a human one. It's been unfolding for years, and still, we are only scratching the surface of its layered, emotional complexity. Among the many pressing concerns, one topic repeatedly calls for our tender attention: bias.

According to the article Artificial Intelligence: Examples of Ethical Dilemmas published by UNESCO, "AI systems can often return biased results. Search engines, for example, are not neutral tools. They reflect user behaviors—prioritizing popular content, geographic preferences, and learned patterns that may unconsciously favor certain voices over others. Gender bias is especially prevalent, and its presence must be acknowledged, reduced, and actively countered during the development and deployment of AI technologies." (UNESCO, 2023)

While this post isn’t long enough to fully unpack the nuances of AI bias, I want to share a few personal reflections—moments where I’ve seen how algorithms can unintentionally reflect the narrowness of their training data, and how small changes in our prompts can begin to shift the story.

To bring this into a real-world example, I conducted a small experiment using Adobe Express. I began with a simple prompt: "Six scientists sitting at a table." The results? Multiple images, mostly featuring white men, with few or no women in most of them.

Then I tried another basic prompt: "A successful person next to a car." Once again, the outcome was predictable: most images showed a man (only one included a woman), and nearly all depicted white people.

Next, I experimented with a relatively simple prompt: "Six cleaners sitting at a table." The AI generated several images in response, most of which featured women. This result reflects common biases in AI image generation models, where certain professions are often gendered based on the dataset the model was trained on.

When I changed the prompt to "Six scientists of diverse genders and races sitting at a table," the results changed: the images reflected the diversity the prompt explicitly named.
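The small experiment above can be framed as a repeatable audit loop: run each prompt, record who appears in the output, and compare. A minimal sketch in Python, with the caveat that real image tools return pictures rather than labels; the `MOCK_RESULTS` data below is a hypothetical stand-in for annotations a human reviewer would record per image:

```python
from collections import Counter

# Hypothetical stand-in for generated-image annotations. In a real audit,
# a reviewer (or a classifier) would label each generated image by hand.
MOCK_RESULTS = {
    "Six scientists sitting at a table":
        ["man", "man", "man", "man", "man", "woman"],
    "Six scientists of diverse genders and races sitting at a table":
        ["man", "woman", "woman", "man", "woman", "man"],
}

def audit_prompt(prompt, results=MOCK_RESULTS):
    """Tally the gender annotations recorded for a prompt's output."""
    return Counter(results[prompt])

baseline = audit_prompt("Six scientists sitting at a table")
diverse = audit_prompt(
    "Six scientists of diverse genders and races sitting at a table")
print(baseline)  # Counter({'man': 5, 'woman': 1})
print(diverse)   # a more balanced tally for the explicit prompt
```

Even a tiny tally like this makes the skew visible and comparable across prompt wordings, which is harder to do from a quick glance at a grid of images.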

AI has the potential to amplify creativity and enrich our processes in beautiful, transformative ways. But with this power comes a shared responsibility. We are all caretakers of the digital worlds we’re helping to build.

Beyond the Default: Using AI Responsibly and Intentionally

Bias, of course, goes far beyond AI. It is part of a larger conversation about society, identity, and justice. Still, AI can be a mirror—and what we see reflected matters. That’s why these ethical questions must stay at the heart of our work.

What tools can we use to ensure fairness? What policies and practices can support developers in creating inclusive systems? And importantly, what can WE, as users, do?

Creating diverse prompts can certainly lead to noticeable shifts in how bias manifests in AI-generated outputs. However, it's important to recognize that the issue of bias in AI is far more complex than simply rephrasing or altering a few words. Tackling this challenge requires a broader, systemic approach that involves multiple stakeholders.

This conversation must include developers, those responsible for training AI models, data curators, and, crucially, a collective commitment to education around inclusion and diversity.

We can begin by being intentional with our prompts. By naming underrepresented communities. By adding statistics, context, and narratives that go beyond the default. With every input, we help shape the future of AI toward a more loving, inclusive, and diverse world.
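Being intentional with prompts can even be made a small, habitual step in a workflow. A minimal sketch, assuming we rebuild the prompt so the qualifier is stated explicitly rather than left to the model's defaults (the wording below mirrors the experiment earlier in this post; the function and its parameters are illustrative, not a real tool's API):

```python
def diversify(subject, scene, qualifier="of diverse genders and races"):
    """Rebuild a prompt so representation is named explicitly,
    instead of relying on the model's default assumptions."""
    return f"{subject} {qualifier} {scene}"

prompt = diversify("Six scientists", "sitting at a table")
print(prompt)
# Six scientists of diverse genders and races sitting at a table
```

This does not solve bias at the system level, but it keeps the question of representation in front of us every time we generate an image.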

Understanding what true representation means—and embedding that understanding into our systems and practices—is essential. The more intentionally we engage with these themes in our own work, the more likely we are to generate relevant, representative data that can inform and shape AI systems for the better.

As artists and content creators, we hold a unique position of influence. We can take on the responsibility of incorporating inclusive and diverse perspectives into both our creative outputs and our educational efforts. By doing so, we contribute meaningfully to a more equitable digital future.

Let’s keep the conversation open—with care, with creativity, and always with compassion.

If you’d like to explore this topic further, the reference below is a good starting point.

Reference:

UNESCO (2023) Artificial Intelligence: Examples of Ethical Dilemmas. Available at: https://www.unesco.org/en/artificial-intelligence/recommendation-ethics/cases (Accessed: 30 April 2025).