Algorithmic Black Box and Value Bias: Ethical Risk Scrutiny and Safety Governance of Generative AI Embedding in Content Production of Internet Ideological and Political Education in Universities
DOI: https://doi.org/10.62051/ijgem.v9n3.19

Keywords: Generative Artificial Intelligence, Internet Ideological and Political Education, Content Production, Algorithmic Black Box, Value Bias, Safety Governance

Abstract
With the rapid development of generative artificial intelligence (AIGC) technologies and their deep embedding into the entire process of content production for Internet ideological and political education in universities, their powerful automated generation and multi-modal interaction capabilities have empowered educational innovation while also introducing deep ethical risks and security challenges. The inherent "algorithmic black box" attribute of generative AI renders content production uninterpretable and conceals the subjects responsible for it, while latent biases in training data can easily trigger "value bias" in the output content, manifesting as the implicit penetration of ideology, the breeding of historical nihilism, and the obscuring of mainstream values. From the perspective of the overall national security concept and Marxist technological ethics, this paper scrutinizes the multi-dimensional ethical dilemmas caused by embedding generative AI into university online ideological and political content production, such as data privacy leakage, the dissolution of teacher-student subjectivity, and the entrenchment of algorithmic discrimination. On this basis, it systematically constructs a full-chain safety governance system covering "data source purification—algorithmic model alignment—human-machine collaborative review—dynamic risk monitoring" across four dimensions: technical governance, subject reshaping, institutional standardization, and literacy improvement. The aim is to build a strong line of defense for university online ideological security in the intelligent age and to promote the organic unity of "tech for good" and educational goals.
License
Copyright (c) 2025 International Journal of Global Economics and Management

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.