Publication History
Submitted: April 17, 2025
Accepted: May 22, 2025
Published: May 30, 2025
Identification
D-0462
DOI
https://doi.org/10.71017/djsi.4.05.d-0462
Citation
Ali Değirmenci & Nail Gökalp (2025). The Algorithmic Gaze: An Interdisciplinary Review of Gender Challenges in AI and Robotics. Dinkum Journal of Social Innovations, 4(05):268-276.
Copyright
© 2025 The Author(s).
The Algorithmic Gaze: An Interdisciplinary Review of Gender Challenges in AI and Robotics
Review Article
Ali Değirmenci 1*, Nail Gökalp 2
- Gaziantep University, Türkiye.
- Gaziantep University, Türkiye.
* Correspondence: alideg20851@gaziantep.edu.tr
Abstract: Artificial Intelligence (AI) and robotics are no longer technologies of the future; they are deeply embedded in the fabric of daily life, influencing social interaction, economic opportunity, and personal well-being. However, these systems are not neutral arbiters of progress. This review article provides a comprehensive, interdisciplinary analysis of the pervasive gender challenges within AI and robotics. It argues that from their conception to their deployment, these technologies often reflect, reinforce, and amplify harmful gender stereotypes and structural inequalities. The analysis is structured around three core areas of concern. First, it examines the origins of bias, tracing them to skewed training data laden with historical prejudice and the profound lack of gender diversity within development teams. Second, it investigates the manifestations of this bias across a spectrum of technologies, from the feminization of subservient virtual assistants and the discriminatory outcomes of high-stakes algorithmic systems in hiring and healthcare, to the embodiment of gender stereotypes in physical robots designed for care and service. Third, the article synthesizes critiques from feminist technoscience, ethics, and intersectional theory, arguing that a narrow, techno-solutionist approach is insufficient. These perspectives reveal how the design of AI often defaults to a masculine worldview and prioritizes optimization over an ethics of care. The article concludes by exploring potential mitigation strategies, including technical debiasing, policy regulation, and a fundamental paradigm shift toward inclusive, value-sensitive design. It posits that creating truly equitable AI requires a systemic, interdisciplinary effort to challenge the underlying social and cultural norms that these powerful technologies currently mirror.
Keywords: Gender bias, Artificial intelligence, Robotics, Algorithmic bias
1. INTRODUCTION
The proliferation of artificial intelligence and robotics has ushered in an era of unprecedented technological integration. From the conversational agents on our smartphones and in our homes, such as Apple’s Siri and Amazon’s Alexa, to the complex algorithms that determine loan eligibility, filter job applications, and even assist in medical diagnoses, these systems are becoming inextricable from the core functions of modern society [1]. Proponents herald AI as a force for objective, data-driven efficiency that can overcome the fallibility of human prejudice. This techno-utopian vision, however, obscures a more complicated and troubling reality: AI systems are not impartial. They are socio-technical artifacts, shaped by the data they are fed, the priorities of their creators, and the cultural contexts in which they are developed. A growing body of interdisciplinary research reveals that one of the most significant ethical failings of contemporary AI and robotics is their tendency to absorb, replicate, and even amplify societal biases, particularly those related to gender [2]. This is not merely a technical glitch to be patched but a systemic issue with profound social consequences. When an AI hiring tool learns from decades of biased data that men are more likely to be executives, it perpetuates a discriminatory status quo under a veneer of computational objectivity. When virtual assistants are overwhelmingly designed with default female voices and subservient personalities, they reinforce harmful stereotypes that link femininity with servitude. When robots are designed to perform care work and are gendered as female, it codifies traditional divisions of labor into our technological future [3]. The implications of this algorithmic and robotic gendering extend far beyond simple misrepresentation.
They risk deepening economic inequality, restricting access to opportunities for women and gender-nonconforming individuals, and solidifying regressive social norms at a scale and speed previously unimaginable. Addressing these challenges requires more than just technical adjustments; it demands a robust, interdisciplinary inquiry that draws on insights from computer science, sociology, gender studies, ethics, and critical theory. This review article undertakes such an inquiry. It provides a comprehensive overview of the primary gender challenges in AI and robotics, structuring the analysis around their origins, manifestations, and potential solutions. First, it will explore the dual genesis of the problem: the biased data sets that form the foundation of machine learning and the stark lack of gender diversity in the teams that build these technologies. Second, it will detail the concrete ways these biases manifest in the world, examining gendered virtual assistants, discriminatory algorithmic decision-making systems, and the embodiment of stereotypes in physical robots. Third, the article will engage with critical frameworks, particularly from feminist technoscience and ethics, to analyze the deeper, structural issues at play. Finally, it will survey the landscape of proposed interventions, from technical debiasing methods to calls for systemic change in how we design, regulate, and deploy these powerful tools. Ultimately, this article argues that achieving gender equity in our technological future is contingent on our ability to critically interrogate and fundamentally reshape the values embedded in the machines we create.
2. THE GENESIS OF GENDER BIAS: DATA AND DEMOGRAPHICS
The gender bias observed in AI systems is not spontaneous. It is a direct consequence of the materials from which AI is built and the people who build it. These twin foundations—the data used for training and the demographic composition of the workforce—are deeply flawed, creating a systemic predisposition toward gender inequality from the very inception of an AI model.
2.1 “Garbage In, Gospel Out”: The Problem of Biased Training Data
At the heart of modern AI is machine learning, a process by which algorithms “learn” patterns and relationships from vast datasets. The performance of these models is fundamentally limited by the quality and representativeness of the data they are trained on [4]. The principle of “garbage in, garbage out” is paramount; however, when applied to social data, it becomes “bias in, bias out.” The digital and historical records that constitute these datasets are not objective archives of reality; they are repositories of human history, complete with all its systemic biases, stereotypes, and inequalities. One of the most well-documented examples of this phenomenon is in Natural Language Processing (NLP), the field of AI that deals with text and language. Large language models (LLMs) are trained on enormous corpora of text from the internet and digitized books. These texts contain decades of gendered associations [5,6]. Consequently, models learn to replicate these associations. For example, word-embedding models, which represent words as mathematical vectors, have been shown to produce analogies such as “man is to computer programmer as woman is to homemaker.” A comprehensive 2024 UNESCO study of popular LLMs found that they consistently produced text linking men with careers, business, and salaries, while associating women with family, home, and domestic roles, often assigning them to lower-status jobs like “housekeeper” or “cook.” These models are not inventing these biases; they are accurately reflecting the statistical regularities of a biased world, thereby laundering historical prejudice and presenting it as a neutral computational output. This problem extends to computer vision.
Facial recognition systems have been repeatedly shown to have significantly higher error rates when identifying women and, in particular, women of color, compared to white men. This disparity arises because the benchmark datasets used to train these systems are overwhelmingly composed of images of lighter-skinned males. When the training data fails to represent the full spectrum of humanity, the resulting technology fails to serve—and can actively harm—those who are underrepresented. The consequences range from the inconvenience of a phone failing to unlock to the grave injustice of a false identification in a law enforcement context [7].
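The word-embedding analogy behavior described above can be reproduced with the standard vector-arithmetic trick from the embedding literature. The tiny hand-made vectors below are invented purely for illustration; real models such as word2vec or GloVe learn comparable geometry from large corpora, which is precisely how biased associations enter the vectors.

```python
import math

# Tiny hand-made "embeddings" (invented for illustration, not from a real
# model); the first coordinate crudely encodes a learned gender association.
VECS = {
    "man":        ( 1.0, 0.0, 0.2, 0.1),
    "woman":      (-1.0, 0.0, 0.2, 0.1),
    "programmer": ( 0.9, 1.0, 0.0, 0.3),
    "homemaker":  (-0.9, 1.0, 0.0, 0.3),
    "bridge":     ( 0.0, 0.0, 1.0, 0.0),
}

def _cos(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def analogy(a, b, c, vocab=VECS):
    """Solve 'a is to b as c is to ?': return the word whose vector is
    closest (by cosine) to vec(b) - vec(a) + vec(c), excluding the three
    query words themselves, as in the word-embedding literature."""
    target = tuple(vb - va + vc
                   for va, vb, vc in zip(vocab[a], vocab[b], vocab[c]))
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: _cos(vocab[w], target))

print(analogy("man", "programmer", "woman"))  # -> homemaker
```

Because the gender direction is baked into the vectors, the arithmetic dutifully returns the stereotyped answer; debiasing work in NLP typically targets exactly this geometric structure.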
2.2 The Homogeneous Architect: Lack of Diversity in AI Development
The problem of biased data is compounded by the profound lack of diversity within the teams that design, build, and test AI systems. The AI industry is overwhelmingly dominated by men. According to a 2023 report by the World Economic Forum, women account for only around 30% of the AI workforce globally. This gender imbalance is even more pronounced in technical and leadership roles. This homogeneity has critical consequences for the technology that is produced [8]. First, a lack of diversity leads to “blind spots” in the development process. A team composed primarily of men is less likely to anticipate the ways in which a product might fail for or negatively impact women. For example, early voice recognition systems were notoriously less accurate for female voices, likely because they were predominantly tested and calibrated on male voices. In one widely cited case, a health-tracking app designed by an all-male team neglected to include a feature for tracking menstrual cycles, a fundamental aspect of health for half the population [9]. These are not acts of malice but failures of imagination stemming directly from a lack of diverse lived experiences in the room where decisions are made. Second, the homogeneity of the workforce reinforces a “male-as-default” paradigm in design. Technology is often designed with a specific user in mind, and that user is implicitly assumed to be male. This can be seen in everything from the physical size of smartphones, which are often too large for the average female hand, to the parameters of crash test dummies, which were historically based on the male physique, leading to higher injury rates for women in car accidents. In AI, this defaultism means that problems affecting men are often prioritized, and solutions are optimized for male users, further marginalizing the needs and experiences of others. The combination of biased data and a non-representative workforce creates a destructive feedback loop.
Biased data leads to the creation of biased AI products. The lack of diversity on development teams means these biases are less likely to be identified and corrected. The biased products are then released into the world, where they can reinforce the very stereotypes and inequalities present in the original data, creating an ever-more biased digital environment [10].
3. MANIFESTATIONS OF BIAS: FROM VIRTUAL ASSISTANTS TO PHYSICAL ROBOTS
The biases originating in data and demographics do not remain abstract. They become manifest in the technologies that individuals interact with daily, shaping perceptions, distributing opportunities, and creating new avenues for harm. These manifestations can be observed across a spectrum of applications, from the subtle gendering of conversational agents to the life-altering decisions of automated systems and the physical embodiment of stereotypes in robots.
3.1 The Digital Handmaiden: Gendered Voice Assistants and Affective Labor
One of the most ubiquitous forms of AI is the conversational voice assistant. Amazon’s Alexa, Apple’s Siri, and Google Assistant have become fixtures in millions of homes and devices. A striking feature of these assistants is their overwhelming feminization [11]. Upon their launch, and for many years after, they defaulted to female names, female voices, and personalities designed to be helpful, docile, and accommodating. When faced with verbal abuse or sexual harassment, their responses were often deflective, apologetic, or even flirtatious, a design choice that UNESCO famously condemned as “reinforcing a belief that women are… obliging, docile and eager-to-please helpers.” This design choice is not accidental. It leverages deep-seated societal stereotypes that associate femininity with care, service, and assistance. The role of the AI assistant is a digital form of “affective labor”—work intended to produce or modify emotional experiences in others—which has historically been feminized and devalued. By gendering these assistants as female, tech companies make them more socially legible and commercially appealing, but in doing so, they codify harmful stereotypes into the infrastructure of daily life [12]. Research has shown that users, particularly men, interact differently with female-voiced assistants, interrupting them more frequently and using more commanding language, potentially reinforcing gendered power dynamics in human-to-human interaction. While some companies have begun to offer male or gender-neutral voice options, the initial and enduring legacy of the “digital handmaiden” demonstrates how easily AI can become a vehicle for regressive social norms [13].
3.2 Algorithmic Inequity: Gender Discrimination in High-Stakes Systems
While gendered assistants shape social perceptions, other AI systems have a more direct and material impact on people’s lives. When algorithms are deployed in high-stakes domains like employment, finance, and healthcare, the embedded biases can lead to tangible and discriminatory outcomes. In the realm of hiring, the case of Amazon’s abandoned AI recruitment tool is a canonical example [14]. The system was trained on a decade of the company’s hiring data, a dataset that reflected the tech industry’s male dominance. The algorithm “learned” that male candidates were preferable and systematically penalized resumes containing the word “women’s” (as in “women’s chess club captain”) and downgraded graduates from two all-women’s colleges. The system did not just reflect past bias; it automated and scaled it, threatening to create a formidable barrier to entry for qualified female candidates. Similar issues have been found in financial services, where algorithms used to determine creditworthiness can inadvertently discriminate. If historical data shows that women, on average, have lower incomes (due to the gender pay gap), an algorithm might learn to associate female applicants with higher risk, even when controlling for other factors [15]. This can result in women being offered smaller loans or higher interest rates, further entrenching economic disparity. In healthcare, AI diagnostic tools trained primarily on data from male patients may be less accurate at identifying conditions that present differently in women, such as heart attacks, leading to potentially fatal misdiagnoses. In each of these cases, the AI system acts as a mechanism for laundering discrimination, making biased outcomes appear to be the result of objective, mathematical calculation rather than social inequality [16].
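The kind of discriminatory pattern seen in the Amazon case can be surfaced with a very simple audit that compares selection rates by gender. The figures below are invented for illustration, and the 0.8 threshold is the conventional “four-fifths” screening heuristic from US employment-discrimination practice, not a claim made in this article.

```python
def selection_rates(records):
    """Fraction of applicants in each group receiving a positive outcome."""
    totals, hits = {}, {}
    for group, selected in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if selected else 0)
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact(rates, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's; values below 0.8 fail the common 'four-fifths' rule."""
    return rates[protected] / rates[reference]

# Hypothetical screening outcomes: (group, was the resume shortlisted?)
records = ([("M", True)] * 60 + [("M", False)] * 40 +
           [("F", True)] * 30 + [("F", False)] * 70)
rates = selection_rates(records)
print(disparate_impact(rates, "F", "M"))  # 0.30 / 0.60 = 0.5 -> fails the rule
```

Such a rate comparison is deliberately crude—it says nothing about why the disparity exists—but it is often the first signal that a screening system deserves a deeper fairness review.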
3.3 Embodied Stereotypes: Gender in Social and Care Robotics
As AI moves from the screen into the physical world, these challenges take on a new dimension. The design of robots, particularly those intended for social interaction and care, is a fertile ground for the projection and reinforcement of gender stereotypes. Robots designed for tasks traditionally associated with women’s labor—such as elder care, childcare, and domestic assistance—are frequently given feminized features, such as softer voices, more rounded features, and names that are typically female [17]. This design practice risks naturalizing the idea that care work is inherently “women’s work,” devaluing it and making it less likely that men will be seen as competent in these roles, whether human or robotic. The debate becomes even more fraught with the development of sex robots. The vast majority of these robots are hyper-sexualized female forms, designed to cater to a male gaze [18]. Feminist critics argue that these robots objectify women, promote unrealistic and potentially harmful expectations of real-world sexual partners, and model relationships based on domination and subservience rather than mutual consent and respect. They represent an extreme but telling example of how technology can be used to embody and perpetuate the most regressive gender ideologies. The gendering of robots raises complex ethical questions. While some argue that designing robots to meet user expectations and stereotypes can lead to smoother human-robot interaction, critics counter that this capitulation to bias is a moral failure. Instead of simply reflecting society as it is, they argue, roboticists have a responsibility to design technologies that challenge harmful stereotypes and point toward a more equitable future [19].
4. INTERDISCIPLINARY CRITIQUES: FEMINIST AND ETHICAL PERSPECTIVES
To fully grasp the scope of the gender challenges in AI and robotics, it is necessary to move beyond a purely technical analysis of bias. Insights from the humanities and social sciences, particularly feminist technoscience and ethics, provide critical frameworks for understanding the deeper power dynamics and value systems at play. These perspectives argue that gender bias in AI is not an anomaly to be fixed, but a predictable feature of a technological ecosystem developed within a patriarchal and capitalist society [20].
4.1 A Feminist Technoscience Lens
Feminist technoscience is a field of study that rejects the notion of technology as a neutral tool and instead examines how it is co-produced with social norms, power relations, and identities. From this perspective, AI and robotics are not simply reflecting gender; they are actively participating in the construction and reinforcement of what it means to be a man or a woman [21]. A key concept is the idea of technology being “masculine in its bones.” Historically, the fields of engineering and computer science have been culturally coded as masculine domains, prioritizing values such as objectivity, rationality, control, and abstraction while devaluing stereotypically “feminine” attributes like emotion, embodiment, and relationality [22]. This cultural coding influences the very goals of AI development. The quest for artificial general intelligence (AGI), for example, can be seen as an attempt to create a disembodied, hyper-rational mind, a project that reflects a long philosophical tradition of associating masculinity with the mind and femininity with the body and nature. Feminist critiques also highlight how AI can reinforce a rigid gender binary. Most systems are designed with a binary classification of gender (male/female), erasing the existence of transgender and non-binary individuals. This is not just a failure of inclusion; it is an active imposition of a specific, socially constructed view of gender. By designing systems that only “see” two genders, developers are encoding a particular ideology into the digital infrastructure, making it harder for alternative gender identities to achieve recognition and legitimacy [23].
4.2 The Ethics of Care vs. the Logic of Optimization
A central tension in the design of AI systems lies in the conflict between a logic of optimization and an ethics of care. The dominant paradigm in engineering and computer science is one of optimization: designing a system to maximize a specific, quantifiable metric (e.g., accuracy, efficiency, user engagement). While effective for certain tasks, this approach is ill-suited for complex social domains where human well-being is at stake. Feminist ethicists have proposed the “ethics of care” as an alternative framework [24]. An ethics of care de-emphasizes abstract rules and principles and instead prioritizes relationships, context, interdependence, and vulnerability. It asks not “What is the most efficient solution?” but “What are the needs of the individuals involved, and how can we respond to them responsibly?” Applying an ethics of care to AI design would lead to fundamentally different technologies. An AI system for social welfare distribution designed with a logic of optimization might focus on fraud detection and efficiency, potentially leading to cruel and bureaucratic outcomes that fail to recognize individual circumstances. A system designed with an ethics of care, by contrast, would prioritize the dignity of the recipient, the importance of the human relationship with a caseworker, and the need for contextual, compassionate decision-making. The feminized AI assistants discussed earlier offer a superficial performance of care, but they lack the genuine relationality and responsibility that an ethics of care demands. Building truly caring AI would require a paradigm shift away from pure optimization and toward designing for human flourishing in all its complexity [25].
4.3 Intersectionality: Beyond the Gender Binary
A final, crucial critique emphasizes that gender does not exist in a vacuum. The concept of intersectionality, developed by legal scholar Kimberlé Crenshaw, posits that social categories like gender, race, class, sexuality, and disability are not separate but interlocking systems of oppression. An individual’s experience is shaped by the intersection of these identities [26]. A focus on gender alone is insufficient because the challenges faced by a white, affluent woman in tech are vastly different from those faced by a Black, transgender woman or a woman with a disability in a low-income country. The work of Joy Buolamwini on facial recognition systems is a powerful demonstration of intersectional failure in AI. Her research revealed that the systems with the highest error rates were those trying to identify darker-skinned women [27]. These individuals were at the intersection of two marginalized identities in the training data (female and Black), making them doubly invisible to the algorithm. This shows that a system that is biased against women and a system that is biased against Black people does not just create two separate problems; it creates a unique and compounded form of harm for Black women. An intersectional approach to AI development is therefore essential. It requires that data collection be representative across multiple identity axes. It means that development teams must be diverse not just in terms of gender but also race, ethnicity, socioeconomic background, and other dimensions. And it demands that we analyze the impact of technology not on an abstract, universal “user,” but on the specific, situated experiences of people living at the intersections of various power structures [28].
5. CHARTING A MORE EQUITABLE FUTURE: MITIGATION STRATEGIES AND SYSTEMIC CHANGE
Addressing the deeply embedded gender challenges in AI and robotics requires a multi-pronged approach that combines technical fixes, procedural changes, policy interventions, and a broader cultural shift within the technology sector. No single solution is sufficient; progress depends on a sustained, collaborative effort across disciplines.
5.1 Technical and Procedural Interventions
On a technical level, researchers are developing methods to audit and mitigate bias in machine learning models. These “fairness-aware” algorithms attempt to correct for biases in training data or adjust the model’s outputs to ensure more equitable outcomes across different demographic groups [29]. Techniques include re-weighting data to give more importance to underrepresented groups, applying constraints during model training to enforce fairness metrics, and post-processing model predictions to remove discriminatory patterns. While these technical interventions are valuable, they are not a panacea. They often involve trade-offs between fairness and accuracy, and there is no universal consensus on which mathematical definition of “fairness” is most appropriate for a given context. Procedural changes within organizations are equally critical [30]. Implementing “bias bounties,” where researchers and the public are incentivized to find and report biases in AI systems, can create a culture of accountability. The adoption of AI ethics boards and transparent impact assessments can ensure that the social implications of a technology are considered throughout its lifecycle, from conception to deployment. Furthermore, a commitment to Explainable AI (XAI)—designing systems whose decision-making processes can be understood by humans—is essential for identifying and challenging biased logic [31].
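The re-weighting technique mentioned above can be sketched in a few lines. The weighting rule follows the widely cited “reweighing” idea from the fairness literature (Kamiran & Calders): each instance is weighted by the ratio of the frequency its (group, outcome) cell would have if group and outcome were independent to the frequency actually observed. The data here are invented for illustration.

```python
from collections import Counter

def reweigh(groups, labels):
    """Per-instance weights that make group membership and outcome
    statistically independent in the weighted data:
    weight(g, y) = P(g) * P(y) / P(g, y)."""
    n = len(labels)
    g_count = Counter(groups)          # marginal counts per group
    y_count = Counter(labels)          # marginal counts per outcome
    gy_count = Counter(zip(groups, labels))  # joint counts per cell
    return [
        (g_count[g] / n) * (y_count[y] / n) / (gy_count[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical hiring data in which group "F" is under-selected:
groups = ["M", "M", "M", "F", "F", "F"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(groups, labels)
# Under-represented cells (F hired, M rejected) get weight 1.5;
# over-represented cells (M hired, F rejected) get weight 0.75.
```

Training a model on the weighted data equalizes the weighted positive rate across groups (here, 0.5 for both), which is exactly the trade-off the text notes: one fairness definition is enforced, while others, and possibly some accuracy, may be given up.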
5.2 The Imperative of Diversity and Inclusion
Technical and procedural fixes can only go so far without addressing the root cause of demographic homogeneity. A concerted effort to increase the representation of women and non-binary people in AI and robotics is fundamental to any long-term solution. This involves creating inclusive educational pipelines, from early STEM education that actively combats gender stereotypes to university programs that support women in computer science [32]. Within the industry, it requires equitable hiring practices, robust mentorship programs, clear pathways to leadership, and the cultivation of workplace cultures that are genuinely inclusive and hostile to harassment and discrimination. A diverse workforce is not just a matter of social justice; it is a prerequisite for creating better, safer, and more innovative technology [33].
5.3 Policy, Regulation, and a Paradigm Shift
Finally, individual and corporate efforts must be supported by a strong framework of public policy and regulation. Governments have a crucial role to play in setting standards for AI development, mandating transparency and audits for high-stakes systems, and establishing clear lines of accountability when algorithmic harm occurs [34]. Regulations like the European Union’s proposed AI Act represent a significant step in this direction, attempting to create a legal framework that prioritizes human rights and safety. Beyond regulation, what is ultimately needed is a paradigm shift in how we approach technology development. The current model, driven primarily by market incentives and a narrow engineering mindset, has proven insufficient [35]. A move toward Value-Sensitive Design (VSD), a methodology that seeks to proactively incorporate human values into the design process from the very beginning, offers one alternative. Similarly, embracing the principles of feminist technoscience and an ethics of care can help reorient the goals of AI development away from pure optimization and toward genuine human and societal well-being. This would mean prioritizing collaboration, context, and compassion, and building technologies that are designed not just to be smart, but to be wise [36].
6. CONCLUSION
The intimate relationship between gender, AI, and robotics presents one of the most significant ethical frontiers of the 21st century. As this review has demonstrated, the challenges are not superficial or isolated; they are systemic, stemming from biased historical data and the homogeneous cultures in which these technologies are created. The result is a technological landscape where virtual assistants often perform a digital version of feminized domestic labor, high-stakes algorithms perpetuate economic and social discrimination, and physical robots risk embodying our most regressive stereotypes. This “algorithmic gaze” is not neutral; it sees the world through a lens warped by historical power imbalances and threatens to encode them into our future. An effective response must be as multifaceted as the problem itself. It requires the diligent work of engineers and data scientists to develop fairer and more transparent algorithms. It demands a steadfast commitment from industry leaders to build diverse teams and inclusive cultures. It calls for ethicists, sociologists, and feminist scholars to continue to provide the critical frameworks necessary to understand the power dynamics at play. And it necessitates the engagement of policymakers and the public to create regulatory structures that hold technology accountable to human values. Ultimately, the challenge of gender in AI is a challenge to our collective imagination. It forces us to ask what kind of future we want to build. Will it be a future where our most powerful tools amplify the injustices of the past, or one where we consciously design them to help create a more equitable world? The path forward requires moving beyond a reactive stance of “fixing bias” and toward a proactive project of building a truly feminist technology—one that is inclusive by design, accountable in its operation, and dedicated to the flourishing of all people, across the full spectrum of gender and identity.
References
- Hipólito, I., Winkle, K., & Lie, M. (2023). Enactive artificial intelligence: subverting gender norms in human-robot interaction. Frontiers in Neurorobotics, 17, 1149303.
- William C. Seto & Debra Solecki (2025). Artificial Intelligence and the Transformation of Digital Marketing. Dinkum Journal of Social Innovations, 4(04):198-204.
- Perugia, G., & Lisy, D. (2023). Robot’s gendering trouble: a scoping review of gendering humanoid robots and its effects on HRI. International Journal of Social Robotics, 15(11), 1725-1753.
- Ulrich Hausknost & Brand, Daniel (2025). Lonely Online: A Social Model of Digital Media Addiction in a Global Context. Dinkum Journal of Social Innovations, 4(04):218-224.
- Zeller, F., & Dwyer, L. (2022). Systems of collaboration: challenges and solutions for interdisciplinary research in AI and social robotics. Discover Artificial Intelligence, 2(1), 12.
- Craiut, M. V., & Iancu, I. R. (2022). Is technology gender neutral? A systematic literature review on gender stereotypes attached to artificial intelligence. Human Technology, 18(3), 297-315.
- Weßel, M., Ellerich-Groppe, N., & Schweda, M. (2023). Gender stereotyping of robotic systems in eldercare: An exploratory analysis of ethical problems and possible solutions. International Journal of Social Robotics, 15(11), 1963-1976.
- Vallverdú, J. Gender in AI and Robotics. Intelligent Systems Reference Library.
- Casal-Otero, L., Catala, A., Fernández-Morante, C., Taboada, M., Cebreiro, B., & Barro, S. (2023). AI literacy in K-12: a systematic literature review. International Journal of STEM Education, 10(1), 29.
- Agata Przepiórka (2025). The Resonant Organization: Informal Leadership, Strategy, and the Power of Silent Authority. Dinkum Journal of Social Innovations, 4(01):43-50.
- Khushk, A., Zhiying, L., Yi, X., & Zengtian, Z. (2023). Technology innovation in STEM education: A review and analysis. IJERI: International Journal of Educational Research and Innovation, (19), 29-51.
- Sean Edward S. Saludar (2024). Gender and Development Implementation of Secondary Schools in Samar Division. Dinkum Journal of Social Innovations, 3(12):682-691.
- Elendu, C., Amaechi, D. C., Elendu, T. C., Jingwa, K. A., Okoye, O. K., Okah, M. J., … & Alimi, H. A. (2023). Ethical implications of AI and robotics in healthcare: A review. Medicine, 102(50), e36671.
- Choudhary, S., & Kandel, L. (2025). Gender Differences in Affinity Toward Technology Among Undergraduate Management Students: A Statistical Analysis. NPRC Journal of Multidisciplinary Research, 2(3), 81-96.
- Osawa, H., Miyamoto, D., Hase, S., Saijo, R., Fukuchi, K., & Miyake, Y. (2022). Visions of Artificial Intelligence and Robots in Science Fiction: a computational analysis. International Journal of Social Robotics, 14(10), 2123-2133.
- Lau, P. L., Nandy, M., & Chakraborty, S. (2023, January). Accelerating UN sustainable development goals with ai-driven technologies: A systematic literature review of women’s healthcare. In Healthcare (Vol. 11, No. 3, p. 401). MDPI.
- Alam, A., & Mohanty, A. (2024). Integrated constructive robotics in education (ICRE) model: a paradigmatic framework for transformative learning in educational ecosystem. Cogent Education, 11(1), 2324487.
- Neha Mishra (2024). Evaluation of Socio-Economic Status of Women in Durga Bhagwati Rural Municipality. Dinkum Journal of Social Innovations, 3(07):424-435.
- Goos, M., & Savona, M. (2024). The governance of artificial intelligence: Harnessing opportunities and mitigating challenges. Research Policy, 53(3), 104928.
- Barnes, E., & Hutson, J. (2024). Navigating the ethical terrain of AI in higher education: Strategies for mitigating bias and promoting fairness. In Forum for Education Studies (Vol. 2, No. 2).
- Chu, S. T., Hwang, G. J., & Tu, Y. F. (2022). Artificial intelligence-based robots in education: A systematic review of selected SSCI publications. Computers and education: Artificial intelligence, 3, 100091.
- Nusrat Jahan (2024). Identification of Gender Violence Intensity and Workplace Vulnerabilities: A Cross-sectional Study of Savar Dhaka. Dinkum Journal of Social Innovations, 3(06):328-339.
- Sakubu, D. (2025). Challenges of Artificial Intelligence today and future implications for society and the world. World Journal of Advanced Research and Reviews.
- Willem, T., Fritzsche, M. C., Zimmermann, B. M., Sierawska, A., Breuer, S., Braun, M., … & Buyx, A. (2024). Embedded ethics in practice: a toolbox for integrating the analysis of ethical and social issues into healthcare AI research. Science and Engineering Ethics, 31(1), 3.
- Carstensen, T., & Ganz, K. (2023). Gendered AI: German news media discourse on the future of work. AI & SOCIETY, 1-13.
- Su, Y., Wang, E. J., & Berthon, P. (2023). Ethical marketing AI? A structured literature review of the ethical challenges posed by artificial intelligence in the domains of marketing and consumer behavior.
- Raman, R., Pattnaik, D., Hughes, L., & Nedungadi, P. (2024). Unveiling the dynamics of AI applications: A review of reviews using scientometrics and BERTopic modeling. Journal of Innovation & Knowledge, 9(3), 100517.
- Joyce P. Keith, Joshua Cachin Plando, Henrry Fordd & Faith Ann Agpaoa (2024). A Narrow Review on Harassment among Medical Students in Workplace Nepal. Dinkum Journal of Social Innovations, 3(04):210-217.
- Bisconti, P., Orsitto, D., Fedorczyk, F., Brau, F., Capasso, M., De Marinis, L., … & Schettini, C. (2023). Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology. AI & SOCIETY, 38(4), 1443-1452.
- Md Aktar Hossain & Halima Sadia (2023). Feminist Positivism Perspective of Social Media in the Success of Women Political Leaders. Dinkum Journal of Social Innovations, 2(11):640-646.
- Dutta, S. (2023). Framing the Landscape of Technological Enhancements: Artificial Intelligence, Gender Issues, and Ethical Dilemmas. In Communication Technology and Gender Violence (pp. 109-123). Cham: Springer International Publishing.
- Goktas, P., & Grzybowski, A. (2025). Shaping the future of healthcare: ethical clinical challenges and pathways to trustworthy AI. Journal of Clinical Medicine, 14(5), 1605.
- Browne, J., Cave, S., Drage, E., & McInerney, K. (Eds.). (2023). Feminist AI: Critical perspectives on algorithms, data, and intelligent machines. Oxford University Press.
- Hipólito, I., Winkle, K., & Lie, M. (2023). Enactive artificial intelligence: subverting gender norms in human-robot interaction. Frontiers in Neurorobotics, 17, 1149303.
- Fryxell, A. R. (2021). Artificial Eye: The Modernist Origins of AI's Gender Problem. Discourse, 43(1), 31-64.