Generative AI and ChatGPT

Criticisms

Some commentators have questioned the ethics of generative AI tools. Students, instructors, and researchers may wish to familiarize themselves with these objections when deciding whether to engage with AI. Many of the major criticisms fall into the following themes:

Misinformation: Generative AI could increase the quantity of misinformation, both through convincing-yet-false output and through use by bad actors. For example, papers written by ChatGPT have begun to be indexed in Google Scholar, along with the bogus citations the AI has generated.

Bias: The training data used by generative AI is often scraped from the web and isn't curated for quality or neutrality. It is likely to reproduce societal biases, and tools have been shown to give biased output. 

Critical thinking: Some have cautioned that a reliance on ChatGPT in academic contexts could de-emphasize traditional academic skills and competencies and ultimately lead to a decrease in critical thinking skills.

Copyright infringement: There are questions about the legality of using copyrighted material in the training data used to create AIs, especially those trained on artworks. At the time of writing, there is an open court case against Stability AI, the maker of Stable Diffusion, over its use of copyrighted artworks.

User data: Some commentators have questioned the ethics of requiring or encouraging students to accept terms of use that allow ChatGPT or other AI tools to record and use their data for profit. 

Business practices: OpenAI, the company behind ChatGPT, outsourced the labour of manually identifying toxic, offensive, and illegal text in its training data to low-paid Kenyan workers. This work has been criticized as potentially psychologically harmful.

Environmental impact: Large language models produce significant greenhouse gas emissions in both their training and their use.

Further reading:

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. https://dl.acm.org/doi/10.1145/3442188.3445922

Perrigo, B. (2023, January 18). Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time. Retrieved June 8, 2023, from https://time.com/6247678/openai-chatgpt-kenya-workers/