Vuong Nguyen

My Mother Was Smuggling Rice at 13. AI Would Screen Her Out.

10 min read

Young woman carrying rice along railroad tracks in rural Vietnam, early morning light

Every time I build an engineering team for a client, my requirements are the same: impact and speed. As a fractional CTO, I’m often introducing new blood into existing organizations, and doing that well means assembling diverse teams of capable engineers. When the clock starts and all the internal and external referrals and recruiting pipelines begin to flow, the result is almost always the same: overwhelmingly, they’re male.

Once in a while, I’ll get a strong female candidate or a non-binary individual. On paper, they have the skills and experience. But fitting them into an aggressive, fast-moving team brings friction that has nothing to do with their ability. The team culture was shaped by whoever was already in the room, and the room has been mostly men for a long time. Meeting cadences, communication norms, the unspoken rules about who speaks first and who gets interrupted, all of it was built around a default that nobody chose deliberately but everyone inherited.

Some talented women navigate that current and thrive. Others get pushed out before anyone notices what happened.

This doesn’t tell you anything about women’s ability. It tells you about the system that keeps producing the same results.

Numbers Behind the Gap

Women hold 23% of software engineering roles in the US, a share that has risen by just two percentage points in 20 years. Only 26% of AI jobs globally are held by women. A Harvard Business School meta-analysis of 18 studies covering more than 143,000 individuals across 25 countries found that women have 22% lower odds of using generative AI than men. Women make up just 42% of ChatGPT’s roughly 200 million monthly users, and only about 27% of its app downloads.

The gap persists even when you equalize access. In a study conducted in Kenya where everyone had equal access to ChatGPT, women were still 13% less likely to adopt it. This goes beyond a pipeline problem. Something deeper is broken, and it runs further than most of us are willing to admit.

The Women Who Built Everything

My family owned a rice wine brewery in post-war Vietnam. We distributed to an entire province, starting as an underground “moonshine” business before the country opened up during Đổi Mới, the economic reforms of the mid-80s. If you saw our operation years later, my father managing large-scale production and provincial distribution, you’d assume he started it.

He didn’t. It started with my grandmother. Then my mother took it forward. My father expanded the concept into a larger operation, but the foundation was laid by women who didn’t have the luxury of titles or recognition.

After the war ended, many men in South Vietnam were in re-education camps, assigned to labor or to new economic zones, or were college dropouts doing work for the commune. Women did the majority of the work to produce food, through farming, smuggling, and creative use of whatever resources existed. My mother was 13 when she started carrying bags of rice, jumping off running trains to sell it and make up the price difference between regions. She’s told me about running the trains with friends: pulling the lever to stop the train when it reached a certain location near the forest, then jumping back on as soon as it restarted. Sometimes they got caught and ended up in temporary holding cells. They were 13-year-old girls.

I was 8 or 9 when I started helping my parents mind the fire in our production operation. Even at that age, I could see the shape of the thing. The whole brewery felt like it belonged to my father. But the knowledge of how it worked, the relationships with buyers, the instinct for when to push and when to wait, that lived in my mother’s hands. She ran the operation. He scaled it. History remembers the scaling.

It’s wild to reconcile all of that with my mother in those first years in the US: a woman with a 5th grade education, making ends meet while carrying the weight of everything on her shoulders. She doesn’t know what a large language model is. But she built a supply chain across a province before she was old enough to drive.

It Doesn’t Get Better in the Office

You’d think moving from physical labor to knowledge work would level things out. It doesn’t. Women in software engineering earn $0.93 for every dollar men make. 76% of women in tech report experiencing gender bias or discrimination at least once. The ratio of men to women in technical roles is 4:1. And these numbers have barely moved in a generation.

The bias isn’t always loud; it lives in who gets invited to the architecture review, whose name comes up first in promotion conversations, whose “communication style” gets flagged in performance reviews while the same directness in a male colleague gets called leadership. It compounds quietly, year after year, until the gap looks like a natural outcome instead of a constructed one.

A Mirror We Built

Chris Rock has a bit from his 1999 special Bigger & Blacker that I think about often. He’s on stage, successful, wealthy, and he tells the audience something to the effect of: there isn’t a white man in this room who’d change places with me, and I’m rich. That’s how good it is to be white.

That observation cuts differently when you apply it to gender. Late last year, a woman named Megan Cornish ran an experiment on LinkedIn. She changed her gender setting to male and rewrote her profile in more “male-coded” language. Her impressions jumped 400% in a week. Another woman, Lucy Ferguson, changed her name from Lucy to Luke for 24 hours and saw an 818% increase. Others replicated the experiment with similar results.

But the experiment had a second act. One of these women invited her male friends to join the experiment, to temporarily switch their profiles to female and compare results. Not a single one did. They didn’t refuse because they thought the experiment was flawed. They refused because, on some level, they already knew what would happen. And knowing was enough. They didn’t need to feel it.

We know it, we see it, and we write it down. And here’s what we have to come to terms with: AI and large language models (LLMs), as they currently stand, are a mirror constructed from the reality of who we are as a society.

A 2024 UNESCO study found that major LLMs associate women with “home” and “family” 4 times more often than men, while linking male-sounding names to “business,” “career,” and “executive.” The implicit bias that makes a country comfortable with a Black man as president but unable to elect a woman to the same office doesn’t disappear when you feed it into a neural network. It amplifies. These aren’t mere statistics. They’re training data for every system we build.

And that training data doesn’t just sit there. It gets dispensed. A woman asking an LLM for career advice might receive subtly different guidance than a man asking the same question. A hiring manager drafting a job description might get language that unconsciously signals “this role is for men.” Someone building a performance review template might end up with criteria that penalize communication styles more common in women. Every day, millions of people receive outputs shaped by these patterns without knowing the deck was already stacked.

What We Can Actually Control

The conversation around ethical AI and unbiased training data is growing, and it matters. Organizations like UNESCO, Deloitte’s AI Institute, and Harvard’s D^3 Institute are publishing research that makes the problem harder to ignore. But research alone won’t close the gap.

Whether we’re comfortable with it or not, organizations are already using LLMs in hiring workflows, from screening resumes to drafting job descriptions to evaluating candidate responses. The question isn’t whether this will happen. It’s what context those models carry when they do.

One approach I’m exploring involves atomic context prompts, purpose-built context that’s relevant to the specific moment, task, and intent. The idea is straightforward: if you can’t fix all the bias upstream in the training data, you can control what context reaches the model at the point of interaction.

Take hiring, for example: when an LLM screens engineering candidates without deliberate context, it draws on everything it absorbed during training, including decades of hiring patterns where “strong engineer” correlates overwhelmingly with male profiles. But give it a purpose-built context prompt that defines what “strong engineer” means for this specific role, this team composition, this company’s actual performance data, and you’ve replaced inherited assumptions with relevant, current truth. The model’s output shifts because the context shifted.

This isn’t prompt engineering. Prompt engineering is ad hoc and user-dependent, only as good as the person typing. What I’m describing is structured context, built once for a specific purpose and reused reliably, so the quality of output doesn’t depend on whether the person prompting happens to know what biases to watch for. The difference matters here: one actually scales while the other hopes the right person is in the room.
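To make that concrete, here’s a minimal sketch of what an atomic context prompt for candidate screening might look like. Everything in it is hypothetical: the AtomicContext structure, the role and team details, and the call_llm wrapper are illustrative placeholders rather than any specific product or API. The point it shows is the shape of the approach: the context is assembled once, deliberately, and then reused for every candidate.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AtomicContext:
    """Purpose-built context for one task: screening candidates for one specific role."""
    role_definition: str            # what "strong engineer" means for THIS role, not in general
    team_composition: str           # who is already on the team and what the team is missing
    evaluation_criteria: list[str]  # observable, role-specific signals only

    def render(self) -> str:
        # Turn the structured context into the instruction the model actually sees.
        criteria = "\n".join(f"- {c}" for c in self.evaluation_criteria)
        return (
            "Evaluate the candidate ONLY against the context below. "
            "Do not rely on assumptions about what a typical engineer looks like.\n\n"
            f"Role definition:\n{self.role_definition}\n\n"
            f"Team composition:\n{self.team_composition}\n\n"
            f"Evaluation criteria:\n{criteria}"
        )

# Built once, by someone accountable for the role, then reused for every candidate.
SCREENING_CONTEXT = AtomicContext(
    role_definition="Backend engineer who can ship a payments integration in 90 days.",
    team_composition="Six engineers, strong on infrastructure, weaker on API design and testing.",
    evaluation_criteria=[
        "Evidence of shipping production systems end to end",
        "Concrete API design decisions and their trade-offs",
        "How the candidate handled ambiguity, regardless of communication style",
    ],
)

def call_llm(messages: list[dict]) -> str:
    """Placeholder: swap in whatever LLM client your organization actually uses."""
    raise NotImplementedError

def screen_candidate(resume_text: str) -> str:
    """Every screening call carries the same deliberate context, no matter who runs it."""
    messages = [
        {"role": "system", "content": SCREENING_CONTEXT.render()},
        {"role": "user", "content": resume_text},
    ]
    return call_llm(messages)
```

The specifics are invented for illustration; what matters is that the bias-prone judgment, what a strong engineer looks like for this role, is answered once and explicitly in the context instead of being left to whatever the model absorbed from its training data.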

That same principle applies everywhere these biases surface, from content generation to performance reviews to loan approvals. You can’t rewrite the internet’s history. But you can ensure the context you provide at the moment of interaction is clean, specific, and free of inherited defaults. Better context creates better results, regardless of who’s prompting.

So, What Now?

“Are women not as good as men at using AI?” is a question that mistakes provocation for insight. And the honest answer is that the question itself is a symptom. Women aren’t less capable. They’re working inside systems, technical, cultural, economic, that were never designed to include them equally.

My mother didn’t need a degree to build a supply chain. She needed the system to stop arresting her at train tracks. The women I try to recruit into engineering teams don’t need easier interviews. They need an industry that stops treating their presence as an anomaly.

AI is a mirror. And right now, it’s showing us exactly who we’ve been.

If the next generation of AI is trained on the world we’ve built so far, what are we actually teaching it about half the population?