From UNESCO to AWS: Building bridges between AI and social good

Sasha Rubel has spent her career making technology work for people — now, at AWS, she's helping purpose-driven businesses cut through the barriers holding them back

Nicole Deslandes

May 7, 2026 · 5 minute read


With over 12 years at UNESCO under her belt, Sasha Rubel joined AWS as head of public policy for generative AI for Europe, Middle East and Africa, driven by a passion to help tech-for-good companies grow.

In her role, she examines how businesses can better adopt advanced AI and has a bird’s-eye view of the barriers holding them back — from skills gaps to limited access to scale-up funding.

Rubel points to organizations featured in AWS’s Pioneers Project, a European storytelling initiative, as evidence of AI’s underused potential. Among them: the HALO Trust, which uses AI and drone imagery to clear landmines, and xFarm Technologies, an agritech app used by more than 500,000 farmers across 100-plus countries.

Over coffee, we discuss her career to date, how businesses can turn AI ambition into action and the tech-for-good work she’s involved in.

Let’s start with your career. What led you to your current role?

I’ve been at AWS for almost five years. Before that, I spent nearly 15 years working with the UN, where I led initiatives on responsible AI, including developing recommendations on AI ethics.

A big part of my work was in sub-Saharan Africa, focusing on how responsible AI development and deployment can go hand in hand with social impact. For me, responsibility and positive societal outcomes are essential to unlocking AI’s long-term benefits, not just economically, but culturally and socially too.

Even my PhD work focused on responsible AI in indigenous communities — how technology can genuinely improve people’s everyday lives.

We often talk about productivity and efficiency gains, which matter because they link to quality of life. But what really drives me is accessibility. For example, my 96-year-old father was able to use a generative AI chatbot to access his social security benefits, something he couldn’t have done via a traditional website.

That’s what motivates me. How we democratize access to AI and put it into the hands of people closest to real-world problems, in a safe and responsible way.

You spent time at UNESCO — what lessons from that experience still shape your work today?

One of the biggest lessons is the importance of being a “bridge builder” and a “weaver.”

We need to bring together policymakers, industry and civil society to create frameworks that ensure safety and responsibility, while still enabling innovation. AI is evolving incredibly quickly — faster than regulation can keep up.

So the challenge is: how do we create frameworks that protect fundamental rights but also allow innovation to flourish?

The “weaving” part is about building a shared vision, translating responsible AI principles into both policy and product design. Responsibility shouldn’t be bolted on; it needs to be built in from the start.

What excites me most is using AI to address its own risks, like detecting bias in datasets or watermarking AI-generated content to combat misinformation.

Are businesses fully using AI’s potential, or are they still scratching the surface?

Many are still at a basic stage. They are using AI for chatbots or simple productivity gains. But they’re not yet embracing more transformative uses, where AI fundamentally reshapes how they operate.

This is significant because advanced adoption could unlock huge economic value.

A helpful analogy: it’s like having a smartphone but only using it to make calls. The capability is there, but it’s underused.

What’s holding businesses back?

There are three main barriers. First, there’s a shortage of AI and digital skills. Hiring can take months, by which point the technology has already moved on.

Second, companies need investment, not just for startups to scale, but for building full AI strategies.

Finally, in Europe, fragmented regulations make it difficult for companies to scale. Many businesses struggle to understand how different laws interact, and rising compliance costs divert resources away from innovation.

Are startups and large companies approaching AI differently?

Yes, there’s a growing “two-tier” dynamic. Startups are moving quickly, while larger organizations are often slowed by legacy systems and change management challenges.

There’s also a mindset shift needed — leaders need to rethink how AI can drive transformation, not just efficiency.

What excites you most about the work you’re doing?

The real-world impact.

There are startups using AI to speed up drug discovery, map oceans for renewable energy, deliver real-time alerts in conflict zones and clear landmines so farmers can return to their livelihoods.

One example that stayed with me is a farmer who couldn’t work his land due to landmines. After AI-powered demining, he was able to farm again, feed his family and send his children back to school.

Finally, what’s your top priority for the next year?

Building stronger bridges between industry and policymakers.

Trust is critical. Responsible AI — designed with safety in mind from the outset — builds that trust. And trust drives adoption, which ultimately unlocks innovation and long-term societal benefit.

How do you take your coffee?

In the afternoon, I’ll have a flat white with oat milk. In the mornings, though, it’s espresso shot after espresso shot. I try to slow down a bit later in the day.

10 Leaders Defining the Future of Tech

Discover who’s setting the agenda for 2025.

VIEW LEADERS