Regardless of the true percentage of students using AI in their assignments (40%? 56%? 89%?), we have to accept that widespread use of LLMs in higher education is a fact.

A.R. Dykes Library's Research and Learning librarians have put together and actively curate an excellent set of pages focused on AI tools and resources faculty can use in teaching, research and publishing.

Beyond using an AI detection tool, though, how can faculty track and better assess student work to ensure either that AI hasn't been used or, if allowed per the assignment, that it has been used ethically? The following two resources offer ideas for building assignments and assessments that are resilient to unethical AI use and/or ask students to critically examine the capabilities and limitations of the technology.

30 Ideas for Generating AI-Resilient Assessments. This list comes from monsha.ai, a company specializing in AI-assisted curriculum design and lesson planning. Many of these ideas revolve around in-person, supervised student work. For example:

Analyzing Unseen Case Studies in Supervised Settings

This method evaluates students’ ability to apply theoretical knowledge to new situations in real-time. In a controlled environment, students receive case materials and the necessary resources. For example, marketing students could analyze a new product launch strategy during a timed, in-class session. This approach ensures that students independently demonstrate their understanding and problem-solving skills, free from external assistance. It also encourages quick thinking and effective application of learned concepts in practical scenarios.

Designing assessments for an AI-enabled world. This is an excellent collection of resources from University College London (UCL). Much of the information here is provided through YouTube videos released under a Creative Commons Attribution license. The videos are embedded below. Additionally, and kind of buried at the bottom of the page, is the PowerPoint “menu” of assessment ideas… well, OK, there are actually two: the UCL version, and an adapted version provided by the Jisc National Centre for AI (both released under a CC Attribution-NonCommercial-ShareAlike license). The Jisc version is slightly restructured and includes links for easier navigation to the assessment examples. Assessment ideas are broken into six categories:

  • Controlled Condition Exams (time-limited exams, in person or online, which may or may not allow the use of authorised resources).
  • Take-Home Papers / Open Book (assessments with a limited duration, say 1 day to 1 week, during which students are allowed access to resources).
  • Quizzes & In-class Tests (these take place in person or online and, while often built around MCQ-type questions, may also include other class activities).
  • Practical Exams (practical assessments with a short, fixed duration such as presentations, group presentations, vivas, clinical exams, OSCEs, lab tests, etc.).
  • Dissertation (extended, in-depth coursework assignments involving research and independent study).
  • Coursework and other Assessments (assignments where students are typically given a few weeks to complete the work). Includes essays, reports, portfolios, artefacts, exhibitions, or other assessments that do not fit into the other categories.

This last category, Coursework and other Assessments, is the largest. Take a look at the full menu here:

 

And here's an example of a coursework assignment:

 

And here are the videos:

1. Discuss Academic Integrity and AI with your students

2. Increase formative feedback

3. Revise written exam questions

4. Revise essay questions

5. Convert generic questions to scenario-based questions

6. Upgrade your Multiple Choice Questions (MCQs)

Hopefully you find these useful in developing assignments and activities that are “AI-resilient” and encourage authentic learning.