We’re launching a new semi-regular feature to keep you informed about how artificial intelligence (AI)—especially generative AI (GenAI)—is shaping teaching and learning at Carleton and beyond. We’ll share practical strategies, instructor stories and timely developments that support thoughtful, intentional and student-centred pedagogy throughout this rapidly shifting AI landscape.
As we continue to explore our four-pronged approach to building an AI strategy for your course, this month’s focus is on prong No. 4: Rethinking assessments.
If you missed previous posts, catch up on prong No. 1: AI policies and syllabus language, prong No. 2: The AI talk and prong No. 3: 1+ GenAI activity.
Rethinking Assessments – Designing for Depth, Not Detection
No, this doesn’t mean creating an assignment graveyard and starting from scratch. Instead, it means thinking critically about where and how learning is best demonstrated in a world where GenAI tools are readily available. Some instructors are tightening guardrails, others are loosening them—but most are asking the same questions:
- What are students really being assessed on?
- Can this assignment still assess that skill if GenAI is involved?
- Are there multiple ways students could demonstrate that learning?
We don’t have to overhaul courses to make them more resilient—we just need to make a few strategic shifts. Even small changes can help students engage more meaningfully with their learning process.
A few ideas to get started:
- Process-first assignments: Ask students to submit outlines, drafts or planning notes along with their final work—helping make their thinking visible. Consider using cuPortfolio—Carleton’s ePortfolio platform—to help students document their process over time.
- Consider intrinsic motivation: When students value the learning that comes with an assessment, they are more likely to engage with it genuinely. Think about how to surface the value of completing the learning activity on their own.
- Space to practise: Is there room for productive failure in your course? Can students practise the skill before demonstrating it for grades?
Teaching and Learning Services will host several workshops during the fall break (Oct. 20 to 24) focused on assessment, including sessions on AI-resilient assignment design and rethinking alternative grading practices. Register here.
What’s Next?
We’ve completed our spotlight on the four-pronged AI strategy, but there’s much more to come. In the coming months, we’ll be featuring instructor stories, critical conversations about bias and ethics in AI, and tools for empowering students to engage responsibly. Stay tuned.
Tool Spotlight: AI Writing Detection Tools – Proceed With Caution
As GenAI tools like ChatGPT become more common, some instructors have turned to AI writing detection tools—like Turnitin’s AI checker—to identify potentially AI-generated student work. While these tools can feel reassuring, it’s important to approach them with a healthy dose of skepticism.
AI detectors:
- Are prone to false positives, especially with multilingual students or those using assistive writing tools
- Cannot definitively prove authorship
- Can erode trust if used without transparency
Rather than relying on detection tools as a first line of defence, we recommend designing assessments that encourage transparency and reflection:
- Invite students to declare and explain any use of GenAI in their process
- Use short reflection prompts to surface how they engaged with the material
- Create space for authentic dialogue about the evolving role of these tools
Have something to share or a question about AI?
We want to hear from you! Reach out to our team with your ideas, challenges or success stories.