
Artificial Intelligence and the Weaponization of Honor: Risks for Women in Pakistan
By Fawad Pirzada
In Pakistan, women already face layers of discrimination, from social restrictions to the ever-present threat of honor-based violence. Each year, cases of harassment, defamation, and killings in the name of honor remind us how fragile women's safety can be. Now, artificial intelligence (AI) has added another dimension to these dangers. Technology once celebrated as progress is being used to create deepfakes, fabricated photos, and manipulated videos that can ruin reputations and even cost lives in a society where an accusation of dishonor can be fatal.
The numbers show how serious this problem has become. The Digital Rights Foundation's Cyber Harassment Helpline recorded 2,473 new complaints of cyber harassment in 2023, many of which involved women targeted with manipulated or intimate images (DRF 2023). By 2024, that figure had risen to 3,171 complaints, reflecting a sharp increase in technology-facilitated gender-based violence (DRF 2024). The Federal Investigation Agency also reported about 1,200 cases in 2023 involving deepfakes or non-consensual intimate images, again with women making up the majority of victims (FIA 2023).
These statistics translate into real people's lives. Women in politics and journalism are frequently attacked with fake images. In 2024, digitally altered photos of Punjab Chief Minister Maryam Nawaz were circulated online to question her morality and leadership (Dawn 2024). Other women activists and journalists reported similar attacks, in which AI-generated content was used to intimidate them into silence (DRF 2024). During the election period, watchdogs noted that AI-based harassment of women escalated sharply, with fabricated images designed to humiliate them and spread rumors (The Friday Times 2024).
The link between such content and physical violence cannot be ignored. Pakistan records some of the highest numbers of honor killings in the world. In 2024 alone, 405 people, most of them women, were killed in the name of honor (AP News 2024). Many of these killings stemmed from allegations of immoral behavior, often without clear proof. Now that AI can manufacture convincing but false evidence, the danger has grown. In communities where digital literacy is low, a fake video or photo can be taken as undeniable truth, pushing families into decisions that destroy lives.
Pakistan does have laws intended to protect against this kind of abuse. The Prevention of Electronic Crimes Act (PECA) of 2016 includes provisions against using someone's images without consent or creating sexually explicit material to harm their dignity (PECA 2016). Section 21 specifically addresses superimposed or manipulated images. Yet enforcement is weak: the FIA's cybercrime units often lack the resources and technical expertise to detect deepfakes, and cases take months or years to resolve (Pakistan Today 2024). In Punjab, a new defamation law was passed in 2024, but its implementation is controversial, with critics warning it may be used to silence dissent rather than protect victims (Punjab Defamation Act 2024).
Beyond the law, several barriers keep women vulnerable. Social stigma means that when women are targeted, they are more likely to be blamed than supported. Many families do not understand how easily images can be faked, so they accept manipulated content as real. The gender digital divide also plays a role: women are less likely than men to own mobile phones or to be digitally literate, leaving them unable to defend themselves effectively (Daily Times 2024). Even when women try to report abuse, they often face secondary victimization at police stations or within their own families. In rural areas, parallel justice systems such as jirgas may hand down punishments without giving women any chance to prove that the evidence against them is false.
Despite these challenges, there are paths forward. The FIA and other law enforcement agencies must be equipped with AI detection tools and trained experts who can verify manipulated content quickly. Laws like PECA need to be updated to directly criminalize the creation and distribution of deepfakes, so that perpetrators are held accountable. Public awareness campaigns are also crucial: people across Pakistan, especially in rural areas, must learn that not everything they see online is real and that photos and videos can be fabricated. Victims need safe spaces, helplines, legal aid, and counseling; NGOs like the Digital Rights Foundation already offer some of this support but require stronger backing from the state. At the cultural level, religious leaders, teachers, and community elders must challenge the harmful notions of honor that make women vulnerable to violence in the first place. Social media platforms must also play their part by removing harmful content quickly and offering victims effective reporting tools.
Pakistan stands at a crossroads. AI has the potential to improve education, healthcare, and innovation, but in a society already burdened by gender inequality, it risks becoming another instrument of oppression. When 2,473 cases of harassment are reported in one year (DRF 2023), rising to 3,171 the next (DRF 2024), when about 1,200 deepfake complaints are filed with the FIA (FIA 2023), and when 405 honor killings are recorded in a single year (AP News 2024), the message is clear: women's safety is under attack from both old traditions and new technologies. If the state, society, and technology companies fail to act, what should be tools of progress will instead magnify injustice, turning rumor into "evidence" and suspicion into tragedy.
Fawad Pirzada writes what he feels and what he observes, weaving emotions and reflections into words that echo both the heart and the world around him.



