CFP (Articles): World History Connected, “World History in the Age of AI: Ethics, Pedagogy, and the Use of Large Language Models” (Proposals due 03/01/24)

World History Connected invites submissions for a Forum (a set of four to eight curated articles), to be published in Summer 2024, on the topic "World History in the Age of AI: Ethics, Pedagogy, and the Use of Large Language Models." The Forum will focus on how world history instructors attend to the ethical issues surrounding Large Language Models (LLMs), such as OpenAI’s ChatGPT or Google Bard, in our pedagogies. Contributions may include archival research, fieldwork, and the scholarship of teaching. Article submissions are due by March 1, 2024.

About the Forum

This issue’s Forum, entitled “The Ethics of Using Large Language Models in World History Instruction,” will be guest edited by Jack Norton. Our Forum will focus on how world history instructors attend to the ethical issues surrounding Large Language Models (LLMs), such as OpenAI’s ChatGPT or Google Bard, in our pedagogies. Historians have long struggled to use tools and sources ethically. Who gets to use a source or tool to craft a history, and how is that use justified? The histories of Henrietta Lacks, the Elgin Marbles, and the cultural heritage tool Mukurtu all foreground the ethical questions history instructors ask our students to consider.

How, then, may we place LLMs in historical context and advise our world history students on LLM usage? The ethical issues involved with LLMs as technology are well reported, including privacy violations, the high water usage and carbon emissions of LLM servers, and the mental health toll on the low-paid content moderators who screen material too violent or sexually explicit for the LLM. What’s more, LLMs struggle to do basic calculations, answer questions truthfully, or write accurately about the past with legitimate citations. And when instructors respond to LLM usage in the classroom with detection tools, some research finds that “GPT detectors are biased against non-native English writers.”

Even with these issues, world history instructors should not ignore LLMs: these tools produce content at such speed and with such coherence that even high failure rates will not deter students from exploring them. More importantly, students will face competition from LLMs in the job market, which suggests an ethical obligation for instructors to prepare students to work with LLMs. This Forum invites articles that explore the ethics of AI usage in the world history classroom. Topics might include instruction on the history of technology, labor, colonialism, language, disability, or the environment, and the ethical issues that attend such histories. The Forum also invites articles that model novel world history pedagogies attending to LLMs that are grounded in the ethics of the discipline of history, such as how to evaluate credible sources, how to guide students toward productive uses of LLMs, how to encourage academic integrity in source usage and citations, or how to interrogate LLM output for falsehoods as a stepping stone to a broader understanding of how history is created.

Learn more about World History Connected at https://journals.gmu.edu/whc.

Important Dates:

  • December 15, 2023: Articles due to Jack Norton via email at jack.norton@normandale.edu, with the subject line “WHC Submission” followed by a last name and a short title or query.

Prior to submitting a prospective article, authors are encouraged to consult the journal’s Submissions and Style Guide (https://journals.gmu.edu/index.php/whc/submission-guidelines) or risk possible delays in consideration. The journal, like all academic journals, reserves the right to decline to publish any submission.

  • Summer 2024: Forum published

Please direct all inquiries to Jack Norton, Guest Editor of the Forum, at jack.norton@normandale.edu, or Editor-Elect Cynthia Ross at cynthia.ross@tamuc.edu.