
Generative AI and ChatGPT

Choosing the right tool: questions to consider

Choosing an AI tool for study or library research involves evaluating it from multiple angles. Many tools are available, designed for a wide variety of tasks, each with its own strengths and weaknesses. UBC's Centre for Teaching, Learning, and Technology (CTLT) maintains a list of AI tools and welcomes submissions of relevant tools to add to the list. Given the broad range of tools available, there may be several to choose from even for a specific use case.

The LibrAIry has designed the ROBOT test as a tool for evaluating AI. The ROBOT acronym (reliability, objective, bias, ownership, and type) can help you remember important criteria for evaluating a new and unknown AI tool:

Reliability of documentation/ information about AI:

  • What information is available about the AI?
  • Who produced this information? The creators of the AI, or a third party? If a third party, what are their credentials? Is the information biased?
  • How much information is publicly available? What information isn't available (for example, to protect proprietary models or trade secrets)?

Objective of AI tool:

  • What is the objective of the company that created the AI?
  • What was the AI designed to do? 
  • What context was the AI designed to work in (Academic research? Public use?)

Bias of training data and output:

  • Is there noticeable bias in information about the AI or information from the AI creators?
  • What training data was used in creating the AI? What bias could be included?
  • Are there documented biases in the output? When you use the AI, what bias do you notice?

Ownership of AI:

  • Who is the owner or developer of the AI? What are their goals?
  • Who is responsible for the AI?
  • Who is able to access the AI? 

Type of AI:

  • What type of AI is it? (See the LibrAIry AI Family Tree for types.)
  • What kind of information system does it rely on?

Researching with generative AI

Journal Policy: Major journals are creating policies for the use of LLMs and generative AI in research. See the How to Cite tab for major policies.

Composition: Large Language Models are designed to produce natural-sounding text and can be helpful in the writing process. Some possible uses include:

  • Generating a starting point for editing when facing writer's block.
  • Generating titles, abstracts, and conclusions
  • Helping with structure and outlining
  • Assisting with composition for non-native English speakers

Information: Caution should be taken when generating information using generative AI, as output may not be factual. However, some possible uses (when properly fact-checked) include:

  • Summarizing general background knowledge on a common topic
  • Generating code

Transparency and reproducibility: The use of AI in research can reduce the reproducibility of results. AIs may give different outputs to the same prompts at different times. The algorithms used may be proprietary and not transparently documented, so it isn't clear how decisions are being made. Using AI with transparent and explainable algorithms can help mitigate these issues.

Privacy: In addition to other concerns about the privacy of personal user data when using an AI, researchers should be sure not to input any confidential information or research data to AI tools without fully understanding the privacy policy of the tool.

Recommended best practices: There are not, at the time of writing, consistent standards for using AI in research. However, suggested best practices (from Buriak et al.) include:

"(i) Acknowledge, in the Acknowledgements and Experimental Sections, your use of an AI bot/ChatGPT to prepare your manuscript. Clearly indicate which parts of the manuscript used the output of the language bot, and provide the prompts and questions, and/or transcript in the Supporting Information.

(ii) Remind your coauthors, and yourself, that the output of the ChatGPT model is merely a very early draft, at best. The output is incomplete, might contain incorrect information, and every sentence and statement must be considered critically. Check, check, and check again. And then check again.

(iii) Do not use text verbatim from ChatGPT. These are not your words. The bot might have also reused text from other sources, leading to inadvertent plagiarism.

(iv) Any citations recommended by an AI bot/ChatGPT need to be verified with the original literature since the bot is known to generate erroneous citations.

(v) Do not include ChatGPT or any other AI-based bot as a co-author. It cannot generate new ideas or compose a discussion based on new results, as that is our domain as humans. It is merely a tool, like many other programs, for helping with the formulation and writing of manuscripts...

(vi) ChatGPT cannot be held accountable for any statement or ethical breach. As it stands, all authors of a manuscript share this responsibility.

(vii) And most importantly, do not allow ChatGPT to squelch your creativity and deep thinking. Use it to expand your horizons, and spark new ideas!" (Buriak et al., 2023, p. 4092).

Further reading:

Buriak, J., Akinwande, D., Artzi, N., Brinker, C. J., Burrows, C., Chan, W. C. W., Chen, C., Chen, X., Chowalla, M., Chi, L., Chueh, W., Crudden, C. M., Di Carlo, D., Glotzer, S. C., Hersam, M. C., Ho, D., Hu, T. Y., Huang, J., Javey, A., ... Ye, J. (2023). Best practices for using AI when writing scientific manuscripts. ACS Nano, 17(5), 4091-4093.

Chubb, J., Cowling, P., & Reed, D. (2022). Speeding up to keep up: Exploring the use of AI in the research process. AI & Society 37, 1439-1457. https://doi.org/10.1007/s00146-021-01259-0

Khalif, Z. N. (2023). Ethical concerns about using AI-generated text in scientific research. Available at SSRN: https://ssrn.com/abstract=4387984 or http://dx.doi.org/10.2139/ssrn.4387984

Best practices for using Generative AI tools

Know what it can and can't do: AI tools are good at some tasks but struggle with others. ChatGPT is good at creating natural-sounding sentences, but it often fails at math and fabricates sources when asked to provide citations.

Prompt engineering: Prompt engineering refers to techniques used to design prompts to achieve the best output for your goals. Depending on the AI, the best prompts may vary, so experiment to find out what works best. Common techniques include:

  1. Be clear and concise about the expected output.
  2. Feed the AI information that you want included; you can briefly explain a theory or style of response before making the query.
  3. Request the output in a specific style or format (e.g. "write a fairy tale", "respond as if you are Elon Musk", "in the style of Frida Kahlo").
  4. Provide examples of the expected task before making the query.
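These techniques amount to assembling a structured prompt from a few reusable pieces. A minimal, tool-agnostic sketch (the `build_prompt` function and its layout are illustrative, not any particular tool's API):

```python
def build_prompt(instruction, context="", style="", examples=()):
    """Assemble a few-shot prompt combining the techniques above:
    a clear instruction, background information, a requested style,
    and worked examples placed before the actual query."""
    parts = [instruction.strip()]
    if style:
        # Technique 3: request a specific style or format.
        parts.append(f"Respond {style}.")
    if context:
        # Technique 2: feed in information you want included.
        parts.append(f"Background: {context}")
    for example_input, example_output in examples:
        # Technique 4: show examples of the expected task.
        parts.append(f"Example input: {example_input}\n"
                     f"Example output: {example_output}")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Summarize the article below in two sentences.",
    context="The article reviews bias in large language models.",
    style="in plain, non-technical language",
    examples=[("A study on model fairness...", "Researchers found...")],
)
```

The resulting string can be pasted into any chat interface; the value of the sketch is the checklist structure, not the exact wording.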

Fine-tune the output: For more complex queries, you may need to use multiple follow-up queries to improve the output. Consider yourself a co-editor of ChatGPT's output and tell it clearly what changes you would like made to the draft.
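One way to picture this co-editing loop is as a growing conversation transcript: each follow-up instruction is appended along with the model's previous reply, so the tool sees its own draft when revising. A minimal sketch, assuming a generic `send(messages)` function that stands in for whichever chat tool you use (not a real API):

```python
def refine(send, request, followups):
    """Iteratively revise a draft: send the initial request, then feed
    each follow-up instruction back together with the full conversation
    so far, so every revision builds on the previous draft."""
    messages = [{"role": "user", "content": request}]
    reply = send(messages)  # first draft
    messages.append({"role": "assistant", "content": reply})
    for change in followups:  # co-editing passes
        messages.append({"role": "user", "content": change})
        reply = send(messages)
        messages.append({"role": "assistant", "content": reply})
    return reply
```

The same pattern applies whether you script it or simply keep typing follow-ups into a chat window; the point is that context accumulates across turns.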

Fact-check claims: For language-based generative AI, results are not always factually correct. Check the results against your own background knowledge and other sources. AI does not eliminate the need for lateral reading, checking multiple sources, and thinking critically.

Further reading:

Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4). https://doi.org/10.1016/j.acalib.2023.102720

Mollick, E. (2023, February 17). My class required AI. Here's what I've learned so far. Retrieved from https://www.oneusefulthing.org/p/my-class-required-ai-heres-what-ive [June 6, 2023].

Ozdemir, S. (2023). Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs.

Terrasi, V. (2023, May 3). How to write ChatGPT prompts to get the best results. Retrieved from https://www.searchenginejournal.com/how-to-write-chatgpt-prompts/479324/  [June 6, 2023].

Xiao, D. (2023). How to fine tune prompts. AI-Based Literature Review Tools. Texas A&M University Libraries [research guide]. https://tamu.libguides.com/c.php?g=1289555&p=9642751


AI Image Guidelines

How to check if an image is AI-generated:

  • Compare the title, description, and comments section (if there is one) against the image content
  • Use reverse image search tools to try to trace back to the original image source
  • Verify the authenticity of the image source and cross-reference it with other sources
  • Check for unusual or distorted features (odd lighting or textures, or other details that don’t fit)
  • Look for a watermark on the image, which can sometimes indicate the AI tool that was used
  • Use an AI image detector (there are many different online tools)
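One concrete, scriptable version of the metadata check: some AI image generators write their prompt and settings into a PNG tEXt chunk (the keyword varies by tool; "parameters" is one common convention). The sketch below scans raw PNG bytes for such chunks, assuming only the standard PNG chunk layout; it is an illustration, not a reliable detector.

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Extract tEXt metadata from raw PNG bytes.

    Each PNG chunk is: 4-byte big-endian length, 4-byte type,
    `length` bytes of data, 4-byte CRC. A tEXt chunk's data is
    a keyword, a NUL byte, then the text. Generator metadata
    (e.g. a "parameters" keyword) can hint that an image is
    AI generated.
    """
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    chunks = {}
    pos = len(PNG_SIGNATURE)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            keyword, _, text = body.partition(b"\x00")
            chunks[keyword.decode("latin-1")] = text.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 12 + length  # length field + type + data + CRC
    return chunks
```

Note that the absence of such metadata proves nothing: it is easily stripped when an image is re-saved or uploaded, so this check only ever supplements the other steps above.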