At the opening of the Feminists Tackle AI Series on March 27th, panellists confronted assumptions about artificial intelligence head-on in their talk “12 Problems with AI.”
Organized by the departments of Gender Studies, Political Science, and the Nexus Centre, the series brings together feminist speakers to examine the overlooked material, political, and social harms caused by AI.
The first event featured keynote speaker Dr. Mél Hogan of Queen’s University and respondents Dr. Julia Polyck-O’Neill and Rhea Rollmann, with Dr. Carol Lynne D’Arcangelis moderating the session.
The contemporary issue of AI
Artificial intelligence (AI) is rapidly becoming embedded in everyday life as corporations, academic institutions, and social organizations increasingly integrate its systems into their operations. This hype cycle frames AI as neutral, purely technical, and inevitable.
While the talk was framed around 12 issues that challenge these assumptions, Dr. Hogan emphasized that the harms caused by large-scale computation are complex and cannot be neatly categorized. She said that flagging these particular critiques helps situate AI within broader systems of political power and global development.
Many of the concerns discussed were drawn from previous conversations with scholars on Dr. Hogan’s podcast, The Data Fix. The following highlights a few of the critical issues outlined during the talk.

Unlearning or corporatizing learning
One concern raised was the growing reliance on AI to automate or streamline aspects of learning. Students, for instance, may use systems like ChatGPT to generate written assignments or summarize course material.
This reliance is part of an emerging phenomenon called “cognitive offloading”: the reduction of mental effort that occurs when external tools are used to perform intellectual tasks. Research suggests that increased reliance on AI may negatively impact critical thinking, memory, and long-term learning, resulting in cognitive atrophy.
Dr. Hogan explained that skills like reading and writing are not innate and require active engagement, since learning depends on sustained interaction with ideas rather than passive consumption.
Students should be especially attentive to this issue, as a survey conducted in Canada indicates that AI use among university students is expected to increase.
Environmental destruction and extraction
Another issue highlighted was AI’s significant environmental toll. The industry requires enormous amounts of water, land, and energy in order to train and maintain AI infrastructure.
Producing such infrastructure depends on resource-extractive practices like mining and deforestation, contributing to heavy emissions.
Speakers noted that these environmental effects are felt most acutely by individuals in regions in the Global South.
AI’s entanglement with environmental harm is compounded by the fact that it is also being used to expand and optimize extractive industries.
Bias and discrimination
Dr. Hogan also flagged the way AI systems amplify bias and discrimination. This is because AI systems inherit historical data saturated in existing social inequities. These systems are further shaped by the design choices and institutional priorities of their developers.
AI’s adoption in areas such as hiring, policing and health care infrastructure has resulted in discriminatory consequences, particularly for marginalized communities.
Deepfakes and non-consensual imagery
Advancements in AI have collapsed the cost and skill barrier to creating realistic non-consensual sexualized images, often referred to as deepfake pornography.
Dr. Hogan pointed out that women and girls make up the majority of those victimized, with Black women and women of colour being especially targeted.
Once images are created and circulated, the content can be extremely difficult to fully remove. This phenomenon raises significant and ongoing concerns about consent, privacy, and emerging gender-based harms.
Militarization and settler-colonial violence
The talk also addressed the role of commercial AI in military and surveillance contexts, where systems are actively being weaponized by major political powers in ongoing conflicts.
Dr. Hogan suggested that data-driven technologies in these contexts turn data into a “key terrain of war” and function to sustain longer histories of settler-colonialism, imperialism, and racialized violence.
Why a feminist lens matters
During the respondent portion of the session, speakers explained how an intersectional feminist lens offers useful ways of understanding AI’s mass effects, as the issues that arise often overlap with each other.
“Although it seems there’s an uncritical mass adoption of corporate AI, many people are resisting this uptake,” said Dr. Polyck-O’Neill. “A feminist lens helps us to understand why the various reasons for resistance are actually interconnected, by giving us a framework for understanding how [these] harms are interrelated.”

Rather than examining problems in isolation, an intersectional framework reveals how harms caused by AI are embedded within existing social structures, drawing attention to whose experiences are prioritized and whose are overlooked.
Dr. Polyck-O’Neill also stressed the importance of how conversations around AI are framed, since framing has the potential to unsettle the dominant narratives driving the marketing hype behind it.
“We need to colour our discussion of AI in human rights and human-centric language,” they said. “We need to refocus our discourse on values, human experience, autonomy, community, dignity and equality.”
Rollmann emphasized that AI is a political project that can be resisted, rather than a technology whose global domination is imminent. She argued that organizations readily adopting AI systems, whether feminist groups, community groups, or academic institutions, display a “dissonance” and a “lack of critical thinking” in adhering to the idea of an inevitable AI future.
“I think the political project of what one might call fascism, patriarchy, colonization… is telling us something is inevitable,” said Rollmann. “We need to be deploying the critical thinking to not accept the idea that anything is inevitable.”
