By Lindsay Richardson, Educational Technology Supervisor (TLS), Adjunct Professor of Psychology
I was listening to Lex Fridman’s (2024) podcast when psycholinguist Edward Gibson said, “we don’t think in language.” That’s when it hit me: language is simply the vehicle we use to express our thoughts. Now—on the precipice of a revolution—we have a new vehicle: Large Language Models (LLMs). This revolutionary way of expressing ourselves holds the promise of increasing rigor and academic integrity while breaking down barriers to effective and durable learning.
Why do teenagers create new language? Because they are desperate for ways to express themselves. Have you ever felt frustrated that your thoughts are not landing? That frustration typically stems from our inability to communicate effectively. Now we have a new tool that can unlock that potential, but only if we use it well and promote genuine student engagement with it.
A New Era of Universal Design for Learning (UDL)
Universal Design for Learning (UDL) is an educational framework that aims to accommodate the diverse needs of all learners by providing multiple means of engagement, representation, and expression. The goal is to create a more inclusive learning environment where every student can succeed, including neurodivergent learners. Traditionally, UDL has required significant effort from educators to adapt their teaching materials and methods to proactively accommodate diverse learners.
With the advent of wide access to LLMs, we now have an opportunity to revolutionize how UDL is implemented in education. These artificial intelligence (AI) tools empower students to demonstrate their learning in ways that best suit their individual strengths and preferences. By leveraging LLMs, we can create a more dynamic and responsive educational experience that proactively accommodates individual student needs and empowers learners to articulate their unique ideas and perspectives. This can increase academic rigor and integrity because we can finally ask students to exercise their higher-order cognitive skills. As educators, we can promote and facilitate deep learning in truly learner-centered environments, allowing students to offload lower-order, routine tasks to the machine.
Becoming a “Master Prompter”
In a recent Spotlight on AI session held by Teaching and Learning Services at Carleton University, the term “Master Prompter” was introduced to describe the expertise required in crafting effective prompts for LLMs (Danielle Manley, personal communication, May 14, 2024). To fully harness the power of LLMs in education, it is crucial to master the art of prompting. Some argue that prompt engineering will soon become obsolete as LLMs rapidly evolve (Genkina, 2024), while others maintain that knowing how to craft and adapt prompts will remain essential (Willison, 2023). Either way, educators and students alike must recognize that prompting an LLM is not the same as performing a keyword search in a search engine (Abrahams, 2024).
The effectiveness of LLMs relies heavily on the quality of the prompts they receive; well-crafted prompts enable the AI to generate meaningful and relevant responses. There is an echo of the Turing Test here: what that test evaluates is not isolated humanlike output but a machine’s ability to sustain a meaningful, contextually appropriate exchange. Similarly, effective prompting ensures that AI responses are accurate, insightful, and relevant to the context. Mastering the art of prompting is therefore essential for educators who want to leverage LLMs to foster deeper and more personalized learning experiences.
To achieve this, it’s important to incorporate key ingredients into prompts:
- Task: Clear, specific instructions are mandatory. The task is the core component of any prompt: it defines what you want the AI to do. Just as when communicating with humans, vague or ambiguous tasks lead to misunderstandings; with LLMs, those misunderstandings show up as irrelevant or incomplete responses. For example, instead of saying, “Tell me about the Cognitive Revolution,” a clearer task would be, “Provide a brief history of the paradigm shifts throughout psychology, focusing on the Cognitive Revolution.”
- Context and Exemplars: Providing background and examples is crucial. Context sets the stage for the task by giving the AI additional information that might be necessary to generate a meaningful response. Exemplars (i.e., examples) help by showing the AI what a good response looks like, thus guiding it towards the expected format and content. For example, if you want the AI to write a research proposal, providing context about the research and an exemplar of a well-written proposal can help the AI produce a better response.
- Persona, Format, and Tone: These elements are crucial for shaping the interaction and enhancing the quality of the AI’s output. Customizing the AI’s responses to match the desired style and audience involves setting a specific persona, determining the appropriate format, and adjusting the tone. The persona refers to the role the AI should take (e.g., a molecular biologist, an educator). The format dictates the structure of the response (e.g., a list, an essay, an executive summary), and the tone adjusts the language style (e.g., formal, casual). For example, “As a molecular biologist, describe how our mitochondria are negatively affected by a sedentary lifestyle. Provide the information in a list format and use a friendly and engaging tone.”
By incorporating these elements, you ensure that the AI’s responses are not only relevant but also tailored to meet specific needs and expectations. This approach enhances the overall interaction, making the outputs more useful and appropriate for the intended context.
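To make these ingredients concrete, here is a minimal sketch in Python of how a prompt might be assembled from the components above. The `build_prompt` helper and its wording are illustrative assumptions rather than a prescribed tool; the example reuses the molecular-biologist prompt from the list above.

```python
def build_prompt(task, context=None, exemplar=None, persona=None, fmt=None, tone=None):
    """Assemble a prompt from the key ingredients: task, context and exemplar,
    persona, format, and tone. Every ingredient except the task is optional."""
    parts = []
    if persona:
        parts.append(f"You are {persona}.")
    if context:
        parts.append(f"Context: {context}")
    if exemplar:
        parts.append(f"Here is an example of the kind of response I am looking for:\n{exemplar}")
    parts.append(f"Task: {task}")
    if fmt:
        parts.append(f"Present the response as {fmt}.")
    if tone:
        parts.append(f"Use a {tone} tone.")
    return "\n\n".join(parts)

# Illustrative use, echoing the molecular-biologist example above.
prompt = build_prompt(
    task="Describe how our mitochondria are negatively affected by a sedentary lifestyle.",
    persona="a molecular biologist",
    fmt="a list",
    tone="friendly and engaging",
)
print(prompt)
```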
Prompt Frameworks for Effective AI Interaction
Shelly Palmer (2023) has curated a helpful list of prompt frameworks specifically geared toward interacting with the LLM ChatGPT. Many of these frameworks can be applied to various other forms of generative AI. Here are a few choice frameworks from that list, each providing a structured approach to crafting effective prompts:
- Task, Action, Goal (TAG): This framework focuses on setting a clear task, describing the action to be taken, and clarifying the end goal. It’s particularly effective for ensuring that the AI’s responses are aligned with specific objectives and outcomes. For example, “I’d like to improve team productivity. With the goal of streamlining workflow to reduce project completion time, plan the implementation of new project management software.”
- Context, Action, Result, Example (CARE): This framework emphasizes providing context, describing the action, stating the desired result, and offering an example. It’s especially useful for instructional and explanatory prompts, ensuring that the AI understands the background and can deliver a well-rounded response. Including these components helps the AI provide more comprehensive and contextually appropriate suggestions.
- Role, Input, Steps, Expectation (RISE): This framework involves specifying the role for the AI, describing the input information, detailing the steps to be taken, and clarifying the expected outcome. It’s beneficial for process-oriented tasks where a clear sequence of actions is necessary. Using this framework ensures that the AI’s output is structured and follows a logical progression, making it easier to implement the suggested actions.
What all these frameworks—TAG, CARE, and RISE—have in common is their emphasis on clarity, context, and structure. Each framework ensures that prompts are detailed and specific, providing the AI with a clear understanding of the task at hand. By incorporating context, examples, and defined roles, these frameworks guide the AI to generate responses that are not only relevant but also tailored to the specific needs of the “Master Prompter”.
For students, mastering the art of prompting enhances their learning experience by promoting higher-order thinking and deeper engagement with the material. It enables them to interact more effectively with AI, leading to more meaningful and insightful responses that support their educational goals. For example, a student might use the RISE framework to collaborate on a project by specifying the role of the AI as a research assistant, providing input about the topic, detailing the steps for conducting research, and setting expectations for the outcome. This approach encourages the student to think critically about the research process and engage more deeply with the content.
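As a rough sketch of how such a framework might be made reusable, the Python template below encodes the RISE structure; the field wording and the research-assistant scenario are assumptions for illustration, not part of Palmer’s (2023) frameworks, and the same pattern could be adapted for TAG or CARE.

```python
# A minimal RISE (Role, Input, Steps, Expectation) prompt template.
RISE_TEMPLATE = (
    "Role: {role}\n"
    "Input: {input}\n"
    "Steps: {steps}\n"
    "Expectation: {expectation}"
)

# Hypothetical student scenario: using the AI as a research assistant.
prompt = RISE_TEMPLATE.format(
    role="Act as a research assistant with expertise in cognitive psychology.",
    input="My project examines the testing effect and durable learning in undergraduates.",
    steps=(
        "1) Suggest three focused research questions. "
        "2) Recommend search terms and databases. "
        "3) Outline how I might organize my literature notes."
    ),
    expectation="A concise two-week research plan written in plain language.",
)
print(prompt)
```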
Ultimately, effective prompting empowers students to take control of their learning, making it more personalized, interactive, and impactful. The takeaway message is that effective prompting is an art that requires thoughtful preparation and precision. Mastering this art is essential for educators and students alike to fully harness the potential of LLMs, fostering deeper engagement and personalized learning experiences.
As we consider the broader implications of these skills, it becomes evident that AI has the potential to revolutionize our educational approaches.
Adaptive Andragogy: A New Approach to Teaching and Learning
Yes, we are on the brink of an AI Revolution in education. This revolution presents an opportunity to rethink and reshape our teaching methods to focus on durable learning. Adaptive Andragogy—thinking about the way adults learn and adapting our teaching methods to optimize that learning—is the way forward. By embracing AI and integrating it thoughtfully into our teaching practices, we can create a more inclusive, engaging, and effective learning environment for our students.
Adaptive Andragogy emphasizes the need to tailor educational methods not only to the way humans learn best but also to the societal landscape in which we find ourselves. By incorporating AI-driven tools and effective prompting strategies, we can enhance UDL and create more inclusive, engaging, and effective educational environments.
LLMs facilitate UDL by providing multiple means of engagement, representation, and expression. They cater to diverse learning preferences, enabling students to interact with content in ways that suit them best. For educators, this means less time spent on one-size-fits-all teaching methods and more time supporting individual student needs. For instance, a student with dyslexia might use an LLM to listen to written content, while another student might use it to generate study questions tailored to their learning style.
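To give a hedged sense of what the study-question example might look like in practice, the sketch below calls an LLM through the OpenAI Python SDK; the model name, the reading excerpt, and the prompt wording are illustrative assumptions, and a valid API key is required.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Illustrative reading excerpt a student might paste in.
reading = "Chapter 4: The Cognitive Revolution and the rise of information-processing models."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": "You are a supportive study coach for undergraduate psychology students.",
        },
        {
            "role": "user",
            "content": (
                f"Based on this reading: {reading}\n"
                "Generate five study questions: two recall questions, two application "
                "questions, and one question asking me to critique the material."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```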
The potential of LLMs to revolutionize education lies not only in their ability to generate content but in their capacity to facilitate higher-order thinking, creativity, and personalized learning experiences. This aligns perfectly with our goal of fostering student success through durable learning strategies. By integrating AI and focusing on adaptive, student-centered methodologies, we can break down traditional barriers and usher in a new era of educational excellence.
The journey towards becoming a “Master Prompter” is a step towards realizing the full potential of AI in education. It reinforces our commitment to innovation and excellence, ensuring that our educational practices evolve to meet the needs of all learners in a rapidly changing world.
Critics may argue that relying on AI diminishes critical thinking and creativity. However, LLMs are not a replacement for human thought but a tool to enhance it. By offloading routine tasks, students can focus more on higher-order thinking and creativity. Additionally, concerns about academic integrity can be mitigated by teaching students how to use AI ethically and transparently. Encouraging students to include disclaimer statements in their coursework not only promotes ethical and transparent use of AI but also helps educators provide more accurate and relevant feedback.
Call to Action
The AI Revolution is not just about new technology; it’s about a new way of thinking. It’s about harnessing the power of AI to break down the barriers of traditional teaching methods and promote student success through durable learning. By embracing this change, we can work together to build a critical mass around the idea of Adaptive Andragogy. I encourage you to explore how LLMs can enhance your teaching practices and join the discussion on how we can collectively revolutionize education.
AI Etiquette Disclaimer
This blog post was written in collaboration with ChatGPT Model 4.0 (May 2024), which was utilized as an AI-powered writing assistant to enhance and refine my blog post in the following ways:
- Provided valuable feedback on the structure, clarity, and coherence of my initial draft.
- Helped to smooth transitions between sections.
- Streamlined content for better readability.
- Incorporated practical suggestions.
- Addressed potential criticisms constructively.
- Ensured that my blog effectively communicated the transformative potential of AI in education.
- Maintained a professional and engaging tone.
This collaboration helped me present my ideas more clearly and compellingly.
References
- Abrahams, M. (Host). (2024, March 19). How to Chat with Bots: The Secrets to Getting the Information You Need from AI [Audio podcast episode]. In Think Fast, Talk Smart: Communication Techniques Podcast. Stanford GSB. https://www.gsb.stanford.edu/insights/how-chat-bots-secrets-getting-information-you-need-ai
- Fridman, L. (Host). (2024, April 17). Edward Gibson on language and thought [Audio podcast episode]. In Lex Fridman Podcast. Lex Fridman. https://lexfridman.com/edward-gibson
- Genkina, D. (2024, March 6). AI prompt engineering is dead. IEEE Spectrum. https://spectrum.ieee.org/prompt-engineering-is-dead
- Palmer, S. (2023, November 26). ChatGPT prompt frameworks. Shelly Palmer. https://shellypalmer.com/2023/11/chatgpt-prompt-frameworks/
- Willison, S. (2023, February 21). In defense of prompt engineering. Simon Willison’s Weblog. https://simonwillison.net/2023/Feb/21/in-defense-of-prompt-engineering/