Generative Artificial Intelligence (GenAI): AI Literacy

Critical Appraisal of AI

Critical thinking and appraisal are defining characteristics of university-level study. They underpin everything from reading and analysing ideas and evidence to developing your own arguments. They are also key aspects of AI literacy: to work with integrity, you will need to reflect on and analyse your academic practice, and to evaluate both the AI tools you select and the material they generate.

Take some time to think about why you have chosen to use a specific AI tool in your studies. Has it enhanced your learning, supported the development of your knowledge, or strengthened your academic skills? Using AI without caution and consideration may have unintended consequences, such as hindering the development of your critical thinking skills and your growth as a student.

While it can be fun to dive right in and experiment with new and exciting tools, take time to research and consider their capabilities and limitations. Before creating an account with a tool or using it in your studies, stop and think about who created it and for what purpose. What are its strengths and weaknesses? What data was used to train it? How can you get the best out of it? And will your personal information, including anything you enter into the AI via prompts, be kept safe?

The ROBOT checklist located in the box below will help you evaluate different AI tools. 

The ROBOT Checklist

This checklist is a resource to help you think critically and support you in making judgements about which AI tools to use. The technology is developing rapidly, with new tools appearing every day, whether free or by subscription. Before creating an account with a tool or using it within your studies, you will need to judge whether it is a tool you can use ethically, responsibly, and with confidence that your personal information will be protected. The ROBOT checklist below will help you do this.

Reliability

  • How reliable is the information available about the AI technology itself? What does the app or website share about when the tool was released, when it was last updated, and its aim or mission?
  • Are terms and conditions, copyright or privacy policies available for you to read? Are you happy with how your personal data will be used?
  • Is there a way to contact the developer or company behind the tool?
  • How much information is made available about how the tool works?
  • Can you identify the sources used in its creation, or the dataset the generative AI draws on? What impact does this have on the reliability of the tool?
  • How biased is the information the tool produces?

Objective

  • What is the goal or objective of the tool?
  • What is the goal of sharing information about the tool or making it available to use? To inform or persuade potential users? To add to the dataset teaching the tool? To find financial backing?

Bias

  • What could create bias in the AI technology? Is there information about how the tool has been built or the dataset it draws on?
  • Are there ethical issues associated with this?

Owner

  • Who is the owner or developer of the AI technology? Who is responsible for it: a researcher or research institute, a private company, a government organisation?
  • Who has access to it? Who can use it?

Type

  • Which sub-type of AI-based tool is it?
  • What kind of information system does it rely on? Does it rely on human intervention?
  • What does this mean for the tool’s functionality and capability, and for how you hope to use it?

Based on the ROBOT tool created by Hervieux, S. and Wheatley, A. (2022) ‘Separating Artificial Intelligence from Science Fiction: Creating an Academic Library Workshop Series on AI Literacy’, in The Rise of AI: Implications and Applications of Artificial Intelligence in Academic Libraries. Chicago, IL: Association of College & Research Libraries, pp. 61-70.

Critical Thinking Checklist for AI

Before using AI:

  • Would a human approach be better?
  • Who is the audience? Are you writing for the public, partners, or internal staff? This affects tone, accuracy, and depth.
  • Could this task involve sensitive or confidential information? If yes → Do NOT input confidential information into public AI tools.
  • Do I understand the limitations of the AI tool I’m using? Check whether it is known to generate incorrect or biased outputs. Generative AI tools are language models, not fact-checkers.

Whilst using AI: 

  • Am I providing clear instructions and context? Vague prompts get vague answers. Include relevant background, goals, and tone. For example, “summarise this report in three bullet points for a non-specialist audience” will get a more useful answer than “summarise this”.
  • Does the output seem a bit odd? Trust yourself! Be alert to hallucinations, as AI may invent data or sources.
  • Does this reflect our organisation’s values and voice? Review tone, framing, and word choices. AI can unintentionally reproduce harmful stereotypes.
  • Am I depending too heavily on the AI? Use AI to support your work, not replace your judgment, creativity, or ethics.

After using AI:

  • Have I fact-checked the content? Verify facts and events through reliable sources.
  • Is there potential for harm, confusion, or bias in this content? Think: Who might be misrepresented or excluded?
  • Do I need to disclose AI involvement? For transparency, note when AI was used to draft content for reports, proposals, or publications.
  • Did I review and edit the final version? Treat AI output as a draft, not a finished product. Always apply human oversight.

The SIFT Method of Critical Analysis

Mike Caulfield, a digital literacy expert at Washington State University, has helpfully condensed key fact-checking strategies into a short list of four moves: things you can do to quickly decide whether a source is worthy of your attention. It is referred to as the “SIFT” method:

Stop

When you initially encounter a source of information and start to read it—stop. Ask yourself whether you know and trust the author, publisher, publication, or website. If you don’t, use the other fact-checking moves that follow to get a better sense of what you’re looking at. 

Investigate the Source

You don’t have to do a three-hour investigation into a source before you engage with it. But if you’re reading a piece on economics, and the author is a Nobel prize-winning economist, that would be useful information. Knowing the expertise and agenda of the person who created the source is crucial to your interpretation of the information provided.

Find Better Coverage

What if the source you find is low-quality, or you can’t determine if it is reliable or not? Perhaps you don’t really care about the source; you care about the claim that source is making. You want to know if it is true or false. You want to know if it represents a consensus viewpoint, or if it is the subject of much disagreement. A common example of this is a meme you might encounter on social media. The random person or group who posted the meme may be less important than the quote or claim the meme makes.

Trace Claims, Quotes, and Media to the Original Context

Much of what we find on the internet has been stripped of context. Maybe there’s a picture that seems real, but the caption could be misleading. Maybe a claim is made about a new medical treatment based on a research finding—but you’re not certain if the cited research paper said that. The people who report these stories either get things wrong by mistake or, in some cases, intentionally mislead us.

Source: Butler, W. D., Sargent, A. and Smith, K. (n.d.) Introduction to College Research. Edited by Cynthia M. Cohen. License: Creative Commons Attribution.