Artificial Intelligence in Journalism: A Narrative Review of Opportunities, Challenges, Ethical Tensions, and Human-Machine Collaboration

Authors

  • Habeeb Abdulrauf School of Communication, Western Michigan University, Kalamazoo, MI, USA
  • Abdulmalik Adetola Lawal Media Innovation and Journalism, Reynolds School of Journalism, University of Nevada, Reno, USA
  • Amarachi Nina Uma Mba Department of Communication Studies, West Virginia University, West Virginia, USA
  • Comfort Ademola Communication Studies, Northern Illinois University, Illinois, USA
  • Zaynab B. Yusuf Department of Communication, Wayne State University, Detroit, Michigan, USA
  • Shalewa Babatayo Nicholson School of Communication and Media, University of Central Florida, Florida, USA
  • Idris Ayinde School of Media and Communication, Pan-Atlantic University, Lagos, Nigeria

DOI:

https://doi.org/10.54536/ajahs.v4i4.5963

Keywords:

AI Ethics, AI in Journalism, Artificial Intelligence, Deepfake, Human–Machine Communication

Abstract

Artificial Intelligence (AI) is changing journalistic practice around the world, influencing how news is gathered, produced, and disseminated. This review synthesizes theoretical, empirical, and other literature to explore the multidimensional impact of AI on journalistic workflows and values. It centers on 81 core sources published between 2015 and 2024, examining AI’s affordances, including the automation of routine reporting, data mining, and audience personalization. The paper also assesses emerging risks such as algorithmic bias, the erosion of editorial transparency, and the proliferation of deepfakes in the media. Guided by Human–Machine Communication (HMC) frameworks, Actor-Network Theory, and affordance theory, this review submits that AI is a collaborative partner rather than a competitor to human journalists. Case examples from newsrooms worldwide (e.g., the Associated Press, The Washington Post, and the ICIJ) show both the promise and the pitfalls of integrating AI into journalistic practice. The paper also addresses the ethical tensions arising from AI-generated content, newsroom accountability, and evolving public trust in machine-assisted reporting. It then outlines future directions across seven key areas, including advancing deepfake detection tools, developing AI ethics guidelines, embedding AI training in journalism education, and bridging technological gaps between large and smaller newsrooms. It concludes by stressing the need to maintain human editorial oversight and democratic values as AI becomes increasingly embedded in journalistic practice. This paper therefore offers a timely and interdisciplinary contribution to media scholars, technologists, and newsroom leaders who are embracing the future of AI-driven journalism.

References

Anderson, C. W., & Revers, M. (2022). The automation of journalism: Reconfiguring the field in the age of algorithms. Journalism, 23(2), 445–463.

Andrejevic, M. (2020). Automated media. Routledge. https://doi.org/10.4324/9780429242595

Bandy, J., & Diakopoulos, N. (2020). Auditing news curation systems: A case study examining algorithmic and editorial logic in Apple News. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 1–22. https://doi.org/10.1609/icwsm.v14i1.7277

Bashardoust, A., Feng, Y., Geissler, D., Feuerriegel, S., & Shrestha, Y. R. (2024). The effect of education in prompt engineering: Evidence from journalists. arXiv. https://doi.org/10.48550/arXiv.2409.12320

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 149–159). ACM. https://arxiv.org/abs/1712.03586

Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227. https://doi.org/10.1007/s10676-013-9321-6

Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. MIT Press. https://mitpress.mit.edu/9780262537018/artificial-unintelligence/

Bucher, T., & Helmond, A. (2018). The affordances of social media platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 233–253). SAGE. https://doi.org/10.4135/9781473984066.n14

Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512. https://doi.org/10.1177/2053951715622512

Calo, R. (2018). Artificial intelligence policy: A primer and roadmap. University of California Davis Law Review, 51(2), 399–435. https://lawreview.law.ucdavis.edu/sites/g/files/dgvnsk15026/files/media/documents/51-2_Calo.pdf

Carlson, M. (2015). The robotic reporter. Digital Journalism, 3(3), 416–431. https://doi.org/10.1080/21670811.2014.976412

Chiu, T. K. F., Ahmad, Z., Ismailov, M., & Sanusi, I. T. (2024). What are artificial intelligence literacy and competency? A comprehensive framework to support them. Computers & Education Open, 6, 100171. https://doi.org/10.1016/j.caeo.2024.100171

Coddington, M. (2014). Clarifying journalism’s quantitative turn: A typology for evaluating data journalism, computational journalism, and computer-assisted reporting. Digital Journalism, 3(3), 331–348. https://doi.org/10.1080/21670811.2014.976400

Cools, A., & Diakopoulos, N. (2023). Towards guidelines for guidelines on the use of generative AI in newsrooms. Medium. https://generative-ai-newsroom.com/towards-guidelines-for-guidelines-on-the-use-of-generative-ai-in-newsrooms-55b0c2c1d960

Cools, H., & Diakopoulos, N. (2024). Uses of generative AI in the newsroom: Mapping journalists’ perceptions of perils and possibilities. Journalism Practice. Advance online publication. https://doi.org/10.1080/17512786.2024.2394558

Demartini, G., Mizzaro, S., & Spina, D. (2020). Human-in-the-loop artificial intelligence for fighting online misinformation: Challenges and opportunities. IEEE Bulletin of the Data Engineering Technical Committee. https://www.damianospina.com/publication/demartini-2020-human/demartini-2020-human.pdf

Diakopoulos, N. (2019). Automating the news: How algorithms are rewriting the media. Harvard University Press. https://openaccess.city.ac.uk/id/eprint/23001/1/Diakopoulos%20-%20Automating%20the%20news%20GC%20edit_CP.pdf

Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809–828. https://doi.org/10.1080/21670811.2016.1208053

Norman, D. A. (1999). Affordance, conventions, and design. Interactions, 6(3), 38–43. https://doi.org/10.1145/301153.301168

Dörr, K. N. (2016). Mapping the field of algorithmic journalism. Digital Journalism, 4(6), 700–722. https://doi.org/10.1080/21670811.2015.1096748

Edwards, A., Edwards, C., Spence, P. R., & Shelton, A. K. (2014). Is that a bot running the social media feed? Testing the differences in perceptions of communication quality for a human agent and a bot agent on Twitter. Computers in Human Behavior, 33, 372–376. https://doi.org/10.1016/j.chb.2013.08.013

Edwards, C., Spence, P. R., Gentile, C. J., & Edwards, A. (2013). How much Klout do you have? A test of system-generated cues on source credibility. Computers in Human Behavior, 29(5), A12–A16. https://psycnet.apa.org/record/2013-02487-001

Fallis, D. (2021). The liar’s dividend and the epistemology of deepfakes. Philosophy & Technology, 34, 735–755. https://doi.org/10.1007/s13347-020-00419-2

Farhi, P. (2023, January 20). CNET’s use of AI raises questions about transparency and accuracy. The Washington Post. https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/

Ferrari, R. (2015). Writing narrative-style literature reviews. Medical Writing, 24(4), 230–235. https://doi.org/10.1179/2047480615Z.000000000329

Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4), 681–694. https://doi.org/10.1007/s11023-020-09548-1

Foa Couraçeiro Pinto Martinho, C. (2024). Decoding algorithmic literacy among journalists: Methodological tool design and validation for preliminary study in the Portuguese context. Observatorio (OBS) Journal, 18(5), 2433. https://doi.org/10.15847/obsOBS18520242433

Gibson, J. J. (1979). The ecological approach to visual perception. Houghton Mifflin.

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). MIT Press. https://doi.org/10.7551/mitpress/9780262525374.003.0009

Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 80–89. https://doi.org/10.1109/DSAA.2018.00018

Graefe, A. (2016). Guide to automated journalism. Columbia University Tow Center. https://academiccommons.columbia.edu/doi/10.7916/D80G3XDJ

Graves, L. (2018). Boundaries not drawn: Mapping the institutional roots of the global fact-checking movement. Journalism Studies, 19(5), 613–631. https://doi.org/10.1080/1461670X.2016.1196602

Greenhalgh, T., Thorne, S., & Malterud, K. (2018). Time to challenge the spurious hierarchy of systematic over narrative reviews? European Journal of Clinical Investigation, 48(6), e12931. https://doi.org/10.1111/eci.12931

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92. https://doi.org/10.1145/3458723

German, D. M. (2024). Copyright-related risks in the creation and use of ML/AI systems. arXiv. https://arxiv.org/abs/2405.01560

Guzman, A. L. (2018). What is human–machine communication, anyway? Human–Machine Communication, 1, 1–28. https://doi.org/10.30658/hmc.1.1

Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A human–machine communication research agenda. New Media & Society, 22(1), 70–86. https://doi.org/10.1177/1461444819858691

Gunning, D., & Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44–58. https://doi.org/10.1609/aimag.v40i2.2850

Gondwe, G. (2023). ChatGPT and the Global South: How are journalists in sub-Saharan Africa engaging with generative AI? Online Media and Global Communication, 2(2), 228–248. https://doi.org/10.1515/omgc-2023-0023

Humprecht, E., Esser, F., & Van Aelst, P. (2020). Resilience to online disinformation: A framework for cross-national comparative research. The International Journal of Press/Politics, 25(3), 493–516. https://doi.org/10.1177/1940161219900126

IBM. (n.d.). 5 pillars of trustworthy AI: Explainability, fairness, robustness, transparency, and privacy [Infographic]. IBM. https://www.ibm.com

International Consortium of Investigative Journalists (ICIJ). (2016). Panama Papers. https://www.icij.org/investigations/panama-papers/

Johnson, D. G. (2021). Algorithmic accountability in the making. Social Philosophy and Policy, 38(2), 111–127. https://doi.org/10.1017/S0265052522000073

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press. https://doi.org/10.1093/oso/9780199256044.001.0001

Law, J. (1992). Notes on the theory of the actor-network: Ordering, strategy, and heterogeneity. Systems Practice, 5(4), 379–393. https://doi.org/10.1007/BF01059830

Laupichler, M., Aster, A., Schirch, J., & Raupach, T. (2022). Artificial intelligence literacy in higher and adult education: A scoping literature review. Computers & Education: Artificial Intelligence, 3, 100101. https://doi.org/10.1016/j.caeai.2022.100101

Lewis, S. C., Guzman, A. L., & Schmidt, T. R. (2020). Automation, journalism, and human–machine communication: Rethinking roles and relationships of humans and machines in news. Digital Journalism, 8(2), 100–118. https://doi.org/10.1080/21670811.2019.1577147

Lewis, S. C., & Usher, N. (2013). Open source and journalism: Toward new frameworks for imagining news innovation. Media, Culture & Society, 35(5), 602–619. https://doi.org/10.1177/016344371348549

Li, L., Chu, W., Langford, J., & Schapire, R. E. (2010). A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web (WWW ’10) (pp. 661–670). Association for Computing Machinery. https://doi.org/10.1145/1772690.1772758

Li, L., Wang, D. D., & Zhu, S. Z. (2011). Personalized news recommendation: A review and an experimental investigation. Journal of Computer Science and Technology, 26(5), 754–766. https://doi.org/10.1007/s11390-011-0175-2

Linden, C. G. (2017). Algorithms for journalism: The future of news work. The Journal of Media Innovations, 4(1), 60–76. https://doi.org/10.5617/jmi.v4i1.2420

Lipton, Z. C. (2018). The mythos of model interpretability. Communications of the ACM, 61(10), 36–43. https://doi.org/10.1145/3233231

Marconi, F. (2020). Newsmakers: Artificial intelligence and the future of journalism. Columbia University Press. https://cup.columbia.edu/book/newsmakers/9780231191371

Marconi, F., & Siegman, A. (2017). The future of augmented journalism: A guide for newsrooms in the age of smart machines. Associated Press. https://journalismai.com/2017/02/22/future-of-augmented-journalism-ap-2017/

Maylott, P., Dhillon, S., Brooks, D., & Wojkowski, S. (2023). Integration of artificial intelligence (AI) into the data extraction phase of a scoping review. Journal of Interdisciplinary Research, 3(1), 45–58. https://doi.org/10.54536/jir.v3i1.3946

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 220–229). ACM. https://doi.org/10.1145/3287560.3287596

Montal, T., & Reich, Z. (2017). I, robot. You, journalist. Who is the author? Digital Journalism, 5(7), 829–849. https://doi.org/10.1080/21670811.2016.1209083

Moses, L. (2017, August 7). AP’s robot journalists are writing thousands of earnings stories a year. Digiday. https://digiday.com/media/washington-posts-robot-reporter-published-500-articles-last-year/

Moyo, L. (2020). Data journalism and the Panama Papers: New horizons for investigative journalism in the Global South. In B. Mutsvairo, S. Bebawi, & E. Borges-Rey (Eds.), Data journalism in the Global South (pp. 151–167). Palgrave Macmillan. https://link.springer.com/chapter/10.1007/978-3-030-25177-2_2

Nakov, P., Barrón-Cedeño, A., Da San Martino, G., & Elsayed, T. (2021). Automated fact-checking for assisting human fact-checkers. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 35(18), 15426–15435. https://arxiv.org/abs/2103.07769

Napoli, P. M. (2019). Social media and the public interest: Media regulation in the disinformation age. Columbia University Press. https://doi.org/10.7312/napo18454

Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153

Obermeyer, Z., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm that guides health decisions for 70 million people. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT ’19) (p. 89). Association for Computing Machinery. https://doi.org/10.1145/3287560.3287593

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.

Ramos, A. R., & Suizo, C. D. (2022). Challenges, adaptability, and resilience of campus journalists amidst the COVID-19 pandemic. Journal of Teaching, Education, and Learning, 2(1), 108–118. https://doi.org/10.54536/jtel.v2i1.2308

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. CSLI Publications.

Rieder, B., & Hofmann, J. (2020). Towards platform observability. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1535

Robins-Early, N. (2024, May 10). CEO of world’s biggest ad firm targeted by deepfake scam. The Guardian. https://www.theguardian.com/technology/article/2024/may/10/ceo-wpp-deepfake-scam

Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. In Data and discrimination: Collected essays. https://ai.equineteurope.org/system/files/2022-02/ICA2014-Sandvig.pdf

Stray, J. (2019). Making artificial intelligence work for investigative journalism. Digital Journalism, 7(7), 1076–1097. https://doi.org/10.1080/21670811.2019.1630289

Sundar, S. S., & Kim, J. (2019). Machine heuristic: When we trust computers more than humans with our personal information. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19) (pp. 1–9). Association for Computing Machinery. https://doi.org/10.1145/3290605.3300768

Simon, F. M. (2024). Artificial intelligence in the news: How AI retools, rationalizes, and reshapes journalism and the public arena. Tow Center/Columbia Journalism School.

Talha, M. M., Khan, H. U., Iqbal, S., Alghobiri, M., Iqbal, T., & Fayyaz, M. (2023). Deep learning in news recommender systems: A comprehensive survey, challenges and future trends. Neurocomputing, 562, 126881. https://doi.org/10.1016/j.neucom.2023.126881

Tandoc, E. C., Jr., & Kim, H. K. (2023). Avoiding real news, believing in fake news? Investigating pathways from information overload to misbelief. Journalism, 24(6), 1174–1192. https://pmc.ncbi.nlm.nih.gov/articles/PMC9111942/

The New York Times Visual Investigations team. (2024, March). AI news that’s fit to print: The New York Times’ editorial AI director on the current state of AI-powered journalism. Nieman Lab. https://www.niemanlab.org/2024/03/ai-news-thats-fit-to-print-the-new-york-times-editorial-ai-director-on-the-current-state-of-ai-powered-journalism/

Thurman, N., Dörr, K. N., & Kunert, J. (2017). When reporters get hands-on with robo-writing: Professionals consider automated journalism’s capabilities and consequences. Digital Journalism, 5(10), 1240–1259. https://doi.org/10.1080/21670811.2017.1289819

Thomson Reuters Foundation. (2025). Journalism in the AI era: Opportunities and challenges in the Global South. https://www.trust.org/wp-content/uploads/2025/01/TRF-Insights-Journalism-in-the-AI-Era.pdf

Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1), 2056305120903408. https://doi.org/10.1177/2056305120903408

Van Dalen, A. (2012). The algorithms behind the headlines: How machine-written news redefines the core skills of human journalists. Journalism Practice, 6(5–6), 648–658. https://doi.org/10.1080/17512786.2012.667268

Voice of America (VoA) News. (2024, January 15). Deepfakes a ‘weapon against journalism,’ analyst says. Voice of America. https://www.voanews.com/a/deepfakes-a-weapon-against-journalism-analyst-says-/7442897.html

Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383–388. https://doi.org/10.1016/j.tics.2010.05.006

Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 39–52. https://doi.org/10.22215/timreview/1282

Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., … Dean, J. (2016). Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv. https://arxiv.org/abs/1609.08144

Published

2025-09-26

How to Cite

Abdulrauf, H., Lawal, A. A., Mba, A. N. U., Ademola, C., Yusuf, Z. B., Babatayo, S., & Ayinde, I. (2025). Artificial Intelligence in Journalism: A Narrative Review of Opportunities, Challenges, Ethical Tensions, and Human-Machine Collaboration. American Journal of Arts and Human Science, 4(4), 5–18. https://doi.org/10.54536/ajahs.v4i4.5963