Experimenting with generative AI: (re)designing courses and rubrics
In this post, I share some ideas for using generative AI to (re)create courses and assessment rubrics, and to generate ideas for creative assessments.
Experimenting with creating a course
I tried out Google Bard and chatGPT 3.5 to design courses and rubrics. In each case, being specific about what I wanted created was key. In practice, this means that when you write your prompt or query, you should be specific in terms of the following (a short sketch after the list shows one way to combine these elements into a single prompt):
Context: e.g. state who you are or who you imagine yourself to be when creating the prompt
Audience: who is the audience of what you want to create? Students? Staff? Administrators? Management? The Public?
Purpose: in brief terms, what do you want to achieve?
Scope: similar to context, however, I see this as more focused, so ‘create a university level course on sociology’ is fine, but narrowing it down to ‘Year 1, Year 2’ etc. will focus the prompt and subsequently generate examples more tightly.
Length: it’s always helpful to state the length of the proposed course or output. For example, are you asking for a draft of a 12-week course? A two-page maximum syllabus? A three-paragraph summary?
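Purely as an illustration, here is a minimal sketch in Python that strings these elements together into one prompt. The wording of each element is my own example rather than a template from either tool; you can just as easily type the finished prompt straight into chatGPT or Google Bard.

```python
# Illustrative only: combining the five elements above into one prompt.
# The wording of each element is an example, not a fixed template.
components = {
    "context": "I am a lecturer who teaches university-level chemistry.",
    "purpose": "I wish to create a new course on inorganic chemistry.",
    "audience": "The course is aimed at Year 2 university students.",
    "scope": "It should cover core topics in inorganic chemistry.",
    "length": "The course should be 12 weeks long and have 4 assignments.",
}

prompt = " ".join(components.values()) + " What might this look like?"
print(prompt)  # Paste the printed prompt into your generative AI tool of choice.
```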
For this example, I used the following prompt…
I am a lecturer who teaches university-level chemistry. I wish to create a new course on inorganic chemistry for Year 2 university students. The course should be 12 weeks long and have 4 assignments. What might this look like?
Below are two GIFs showing chatGPT and Google Bard respectively.
NB: You may wish to select the images to see a larger version.
Brief reflections
I used a similar prompt for both generative AI tools. To add an element of creativity, I slightly changed the prompt when using Google Bard so that it would suggest creative assessments. I then went back to chatGPT and asked it to also suggest ideas for creative assessments within the context of this course.
Both tools produce similar results for this particular prompt. Each suggests an outline for a course on inorganic chemistry; Google Bard integrates the creative assessments into some of the topics, while chatGPT, predictably, creates a separate list of suggested creative assessments, as I had asked it to after the initial prompt.
Interestingly, Google Bard also expands a little at the end of the outline with further examples of non-written, creative assessments. chatGPT, on the other hand, gives some examples of ways to support learning and teaching after creating the example course outline. The creative assessments it lists are broadly similar to Google Bard's, though with some differences, such as the quiz-show example.
For transparency: I do not teach chemistry, nor have I taught it. I have, however, supported chemistry students with their academic writing, including writing lab reports and researching the topic. On the surface, the course looks coherent, but I will leave that judgement to those who teach chemistry!
What you can do
To replicate what I’ve done, copy and paste the prompt into your generative AI tool of choice.
Please note: you'll likely get a slightly different response; I did not re-run each prompt to test how much the responses vary. That said, Google Bard automatically offers additional draft examples.
Creating assessment rubrics
Educators are often handed marking rubrics with little chance to develop or create their own. As a result, when it comes to creating an assessment rubric, some educators may not have practical experience beyond what they have observed. Here, generative AI can provide ideas and food for thought. It can be especially helpful for getting ideas for creative assessments that are still valid and rigorous while offering a suitable alternative to traditional assessments.
In the examples below, I ask generative AI tools to create assessment rubrics. Remember: you need to give the tool a context (e.g. you're a lecturer teaching X), a specific request (e.g. you want to create an assessment rubric) and specific parameters (e.g. the criteria you want the rubric to include).
I am a lecturer. I wish to create a marking rubric for an essay-based assessment. The rubric should include the following criteria: criticality, academic rigor, references to research, style and formatting.
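As with the course example, you could also assemble this request in a small script if you prefer to keep your criteria in one place; the sketch below is purely illustrative and simply rebuilds the prompt above from a list of criteria.

```python
# Illustrative only: parametrising the rubric request with your own criteria.
criteria = [
    "criticality",
    "academic rigor",
    "references to research",
    "style and formatting",
]

prompt = (
    "I am a lecturer. I wish to create a marking rubric for an essay-based assessment. "
    "The rubric should include the following criteria: " + ", ".join(criteria) + "."
)
print(prompt)  # Paste into chatGPT or Google Bard as before.
```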
NB: You may wish to select the images to see a larger version.
Reflections
In both cases, I state my (imagined) role and the type of assessment I usually employ, and ask the tools to suggest ideas with specific criteria included. Each tool then creates a sample rubric based on what I have asked of it.
Both tools create a table that looks like what I would expect from an assessment rubric. Each table includes the criteria and sample grade bands, with descriptor text cross-referenced against the criteria. Both generally do well at providing sample descriptor text; however, you will need to tweak or modify the criteria to suit your specific, local context.
Creating rubrics specific to your institution
If your institution has a general, overarching rubric that is commonly used, you can ask generative AI to suggest sample rubrics based on it. This may, however, prove difficult depending on how complex your institution's rubric is.
In the examples below, I ask chatGPT 3.5 and Google Bard respectively to create an example rubric based on the University of Glasgow's 22-point marking system. This did, however, prove difficult!
Can you change the marking scale to a 22 point scale used at the University of Glasgow?
Reflections
The prompt above initially confused both generative AI tools. This could be because a 22-point scale differs from many of the scales out there, or because I hadn't provided specific detail about the different bands. In this case, my suggestion is to ask chatGPT or Google Bard to create a rubric based on your own marking criteria (see the sketch at the end of this section). You can then tailor the resulting sample rubric to your local needs.
As you can see, both tools got some areas right and others wrong.
What chatGPT did well:
it created a scale based on the criteria I provided
it included the marking bands, cross-referenced against the criteria
it included some basic descriptor text
What chatGPT can do better at:
the descriptor texts were wildly off compared with the example marking schemes
it struggled to capture the nuances between the marking bands
What Google Bard did well:
the descriptor text for each band more closely matches what I would expect to see
the marking bands are divided out nicely
the criteria are cross-referenced against marking bands
What Google Bard can do better at:
it's hard to say what it could do better at right now, given how well it created a marking rubric from my query!
that said, the descriptor texts for each band would likely need some tweaking to match local styles
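Following on from the suggestion above, one way to give the tool your marking criteria up front is to spell out the bands and descriptors in the prompt itself. Below is a minimal sketch; the bands and descriptors are placeholders of my own, not the University of Glasgow's actual scheme, so substitute your institution's published criteria.

```python
# Illustrative only: embedding your marking bands in the prompt so the tool
# does not have to guess them. These bands and descriptors are placeholders,
# NOT the University of Glasgow's actual scheme; substitute your institution's
# published criteria.
bands = {
    "A (excellent)": "outstanding criticality and rigour; extensive, well-used references",
    "B (very good)": "strong criticality; minor lapses in rigour or referencing",
    "C (good)": "sound argument; limited engagement with research",
    "D (satisfactory)": "largely descriptive; few references; weaknesses in style and formatting",
}

band_text = "\n".join(f"- {band}: {descriptor}" for band, descriptor in bands.items())

prompt = (
    "I am a lecturer. Please create a marking rubric for an essay-based assessment "
    "using the following marking bands and descriptors:\n" + band_text
)
print(prompt)
```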
Getting ideas for creative assessments
As I noted earlier, you can use generative AI to get ideas for (more) creative assessments that aren't traditional, written assignments. Traditional, written-only assignments are great for some things; however, there are other, more inclusive and creative assessment ideas that you can use in your teaching, no matter the subject.
For this particular example, I draw upon my own subject area, which lies at the intersection of education and sociology.
I teach a social sciences subject in university. Traditionally, we use written assessments such as essays and exams as assessments. What are some creative alternative assessments?
Reflections
In brief, and similar to the first example on chemistry, both generative AI tools create a good range of creative and even collaborative assessments that you can use within your own context.
You may already use some of these, such as mind maps and portfolios. That said, the tools suggest a lot of good ideas that might be worth trying out. I would recommend co-creating these with students, especially if an idea appears new, innovative or out of your personal comfort zone as an educator. You may be surprised at how quickly your students take to becoming partners in learning and teaching.