SafeAI Pro

AI ethics

By Katya Kamlovskaya

Margaret Mitchell’s DES2025 talk (Digital Enterprise Show, Malaga, Spain) on tech ethics reminded me of the book I’d just finished reading - “Careless People” by Sarah Wynn-Williams - and its account of Facebook’s role in the Rohingya genocide in Myanmar.

In 2015, I visited Myanmar during its fragile democratic transition, just as internet access surged to roughly 12% of the population. In the years that followed, in an Australian detention centre, I met Rohingya refugees who had fled the ethnic cleansing. Facebook was on the rise at the time, but it was not yet obvious how it was shaping the world - not for the better, as it turned out.

In her talk, Margaret reminded us that all decisions are value-driven, each demanding a mini ethical review in which values are identified, prioritised, and made transparent.

Facebook viewed Myanmar as an untapped market. Yet its decisions - and its inaction - let misinformation and hate speech spread, inciting ethnic violence. Warnings from civil society and internal reports flagged the risks, but effective content moderation was never put in place. Algorithms designed for engagement amplified harmful content, creating an echo chamber of dehumanising material targeting the Rohingya.

It looked like the company’s business values did not prioritise safety and human rights.

What values did they focus on?

Growth and market expansion

Keeping users on the platform and increasing their interaction, with algorithms optimised for exactly that - without acknowledging that emotionally charged content and hate speech often drive the highest engagement.

Targeted advertising revenue, which incentivised amplifying harmful content regardless of its human rights impact.

The company claimed to believe in “making the world better” - but was that even possible without understanding Myanmar’s local realities, ethnic conflicts, and weak rule of law? Negative impacts were neither foreseen nor proactively managed.

Content moderation was centralised, with no moderators on the ground who knew the local languages. The systems in place could not reliably detect harmful content in Burmese, and the threats went unaddressed.

So, as Margaret Mitchell warns, businesses need to consciously identify and prioritise their values - and not just in a mission statement. Add transparent processes that incentivise ethical work, effective risk management, and a human-centric approach to tech-related decision-making, and we build not just better products, but more resilient and trusted businesses that harness technology for the collective good.

Read the book, too.