Educational Opportunities
A.I. is here to stay.
In educational settings, this means grappling with some of its questionable uses. Some instructors are modeling what it looks like to use problem-based learning to “teach the problem.” Problem-Based Learning (PBL) is an instructional approach in which real-world challenges serve as the catalyst for processing concepts and principles, as opposed to direct instruction (passive learning). As reported in the Chronicle of Higher Education, a researcher on AI in writing courses at California’s College of Marin encourages her students to analyze ChatGPT output. Foregrounding AI as a tool, and an imperfect one, allows students to identify signs of fabrication, bias, inaccuracy, or shallow reasoning. This in turn boosts confidence in their own human thinking. And it prepares them for a workforce environment that will undoubtedly require AI management skills.
Teaching the Problem: Example Activities for Thinking Critically about AI
Consider which content in your course requires a high degree of context, interpretation, and nuanced articulation. This content is complex enough that it can’t be easily “boiled down”; it’s what experts in your field enjoy debating. Students can explore the variability that results from slightly modifying the prompts. Most importantly, they can focus on evaluating the quality of the output. Does the piece demonstrate a sophisticated handling of complexities? Does it acknowledge stakeholder values? Does it thoroughly present the contexts that have shaped the issue? Does it show an understanding of the biases surrounding the question at hand? Why might it be important to have human insight on this issue?
Once students have a working first draft of an essay, prompt them to use AI tools to search for holes or room for development. Does the AI tool locate omissions more effectively in certain portions, or in certain modes of writing, like expository versus persuasive?
Present students with a collaborative brainstorming scenario: “You’re collectively trying to devise a name for a project, product, or program. You’re trying to improve upon an existing good or service. You’re trying to solve a community-based problem.”
Ask students to track their manipulations of the tools. What aspects of AI turn out to be useful? Was it necessary to make adjustments to input/output when “deep empathy” was required?
Output reflects the data on which the LLM was trained. AI bias occurs because human beings choose the data that algorithms use and decide how the results of those algorithms will be applied. Without extensive testing and diverse teams, it is easy for unconscious biases to enter machine learning models. After reading resources on AI training bias, ask students to identify datasets in their own research projects that could potentially show bias.
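To make this concrete, below is a minimal, hypothetical sketch in Python. The groups, labels, and decision rule are all invented for illustration, not drawn from any real system: a “model” that simply learns base rates from a skewed sample will reproduce that skew in its predictions.

```python
# Hypothetical illustration of dataset bias: the training sample
# under-represents one group, so the learned rule treats it unevenly.
from collections import Counter

# Invented training data: (group, outcome) pairs chosen by a human curator.
training_data = [
    ("group_a", "hire"), ("group_a", "hire"), ("group_a", "hire"),
    ("group_a", "reject"),
    ("group_b", "reject"),  # only one example from group_b
]

def learned_rule(group):
    """Predict the most common outcome seen for this group during training."""
    outcomes = [label for g, label in training_data if g == group]
    return Counter(outcomes).most_common(1)[0][0] if outcomes else "unknown"

print(learned_rule("group_a"))  # "hire"
print(learned_rule("group_b"))  # "reject" -- driven by a single, unrepresentative example
```

Students can then be asked where a comparable imbalance might creep into the datasets behind their own research projects.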
The hierarchy of Bloom’s taxonomy reflects a value system in which informed judgment, what we might call “creative evaluation” and perhaps the most human form of thinking, subsumes the more mechanical forms of thinking. By definition, it requires the objective analysis of an issue, with consideration for its complexities and contexts, to ultimately form a judgment. This level of thinking draws upon the top tiers of the taxonomy (analysis, synthesis, evaluation/creation), thus constituting “higher-order” thinking. After students have used AI to generate topic ideas, ask them to identify the complexities and contexts that AI missed. What judgments might be contained in a substantive thesis statement? How might two or more topics be combined in ways that AI can’t appreciate?
AI models, including text-generating ones, typically lack a deep understanding of the content they generate. As a result, they cannot identify the original sources of information, nor can they synthesize the shared themes and arguments across sources. Ask students to create an annotated bibliography that showcases their evaluative skills by requiring a synthesis component: they can either provide a short synthesis statement after creating a full set of annotations, or produce a longer synthesis statement that incorporates several sources in a single paragraph.
Implement AI-driven role-playing simulations where students can take on different roles in ethical scenarios. AI can serve as virtual characters or advisors, providing diverse perspectives for students to consider.
Develop case studies using AI tools to incorporate realistic and dynamic elements. AI can generate data, simulate evolving situations, and provide various perspectives on the same scenario.
After students have generated an essay on a topic of interest, ask AI to design a quiz based on that content. Then, have them take the quiz. How effective were the questions in testing comprehension of the material? Where were the gaps that could be addressed in conversation?
Conversations about the increasing impact of AI within your area of expertise can be a valuable asset for your students’ professional development. Delve into the capabilities of generative AI tools in your field, including the advantages they offer and the potential drawbacks they carry. Pairing these dialogues with practical experience using the tools can significantly enhance professional growth.
Consider how you might contextualize this conversation using a source like the World Economic Forum’s Future of Jobs 2023 Report, which reflects an amalgam of tech literacy and adaptive skills: “The socio-emotional attitudes which businesses consider to be growing in importance most quickly are curiosity and lifelong learning; resilience, flexibility and agility; and motivation and self-awareness. Systems thinking, AI and big data, talent management, and service orientation and customer service complete the top 10 growing skills.”
Hands-on practice also aids students in acquiring the skill of formulating iterative queries to fine-tune a response. The ability to construct a well-framed question showcases comprehension of intricate tasks, and the aptitude for “prompt engineering” will only become more valuable.
Our students, when shown the arc toward contextual knowing, are naturally motivated to strive for the culmination of critical thinking. Consider incorporating a reflection addressing Blackboard Ultra’s Learner Analytics. Following a Discussion Board assignment, Blackboard Ultra Analytics will automatically provide data rating the student’s post in terms of substantive contribution, sentence complexity, lexical and word variation, and critical thinking. Part of what students may find thought-provoking is that this data is relative: it compares each learner to the overall class performance. I know what you’re thinking, and what they should be thinking. How can we trust this kind of automated feedback? For instance, what constitutes a “substantive” post? Or the level of critical thinking? Is it truly reflective of this learner’s place within the larger continuum of their intellectual evolution? Good. Pursue these questions. And encourage your students to do the same. See below for a list of Student Questions for Reflection:
- If you were designing a rubric evaluating your Discussion Board contributions, what elements do you think would constitute a “substantive” post?
- How could you demonstrate independent thinking on Discussion Board posts? Try to point to specific examples in which you have been successful in doing so, or instances in which you could improve upon independent thinking.
- What kind of knowledge do we work with in class that is “uncertain”?
- How have other students addressed ideas in a way that deepened or broadened your understanding of the contexts surrounding them?
- If you look at your Discussion Board post(s), what evidence might you present for where your thinking currently lies within Magolda’s Epistemological Reflection model? (Absolute, Transitional, Independent, Contextual)
- How did this Discussion Board post require you to construct knowledge for yourself?
Resource Spotlight: “In What Ways Can Generative AI Impact Critical Thinking, Research, and Writing?”
In this episode of a six-part series, ASU’s Director of Creative and Emerging Technologies interviews faculty at the forefront of teaching with generative AI at the School for the Future of Innovation in Society and the School of Computing and Augmented Intelligence. They discuss the definition of critical thinking, specifically its distinctly human aspects, and how it might be “cognitively ergonomic” to interact with AI.
Resource Spotlight: Practical AI, “AI for Students” by Wharton Interactive
In Part Five of this comprehensive series by Wharton Interactive, the Faculty Director and the Director of Pedagogy explore how students can harness AI to enhance their learning. They equip educators with example prompts, effective communication strategies, and guidelines to facilitate discussions with students.
Faculty Resources
AI Required: Teaching in a New World, from the ASU+GSV 2023 Summit
In this talk, Professor Ethan Mollick (head of Wharton Interactive) explains why he has adopted a fully integrated approach to the use of AI in his classes. He provides detailed advice on how to craft strong policies that will help students see AI use as an emerging professional skill.
AI Policy Statements for Your Course Syllabus
If you are looking for sample policy statements to include in your syllabus regarding AI, the Marshall University Office of Academic Affairs has several resources.
Practical AI by Wharton Interactive: Introduction for Teachers
In this introduction, Wharton Interactive’s Faculty Director Ethan Mollick and Director of Pedagogy Lilach Mollick provide an overview of how large language models (LLMs) work and explain how this latest generation of models has changed how we work and how we learn. They also discuss the different large language models referenced in their five-part crash course: OpenAI’s ChatGPT-4, Microsoft’s Bing in Creative Mode, and Google’s Bard. This video is Part One of a five-part course in which Wharton Interactive introduces AI large language models for educators and students. Taking a practical approach, they explore how the models work and how to work effectively with each one while weaving in your own expertise. They also show how to use AI to make teaching easier and more effective, with example prompts and guidelines, as well as how students can use AI to improve their learning.
Arizona State University: Fostering a Positive Culture around Generative AI
In this episode of a six-part series of webinars, the Director of Creative and Emerging Technologies at ASU chats with the Associate Dean of Scholarship and Innovation and professors from the School for the Future of Innovation in Society and the School of Computing and Augmented Intelligence. They discuss emerging ethical considerations, particularly the importance of approaching AI with a humanistic “embedded ethics” approach to design. Some projections are also offered for how AI may be used to address inequities in information access and formal education.
New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments
Abstract: Chatbots are able to produce high-quality, sophisticated text in natural language. The authors of this paper believe that AI can be used to overcome barriers to learning in the classroom by improving transfer, breaking the illusion of explanatory depth, and training students to critically evaluate explanations. The paper provides background information and techniques on how AI can be used to overcome these barriers and includes prompts and assignments that teachers can incorporate into their teaching. The goal is to help teachers use the capabilities and drawbacks of AI to improve learning.
A Jargon-free Explanation of How AI Large Language Models Work
Just like the title says, this is a primer on the technology behind AI. Even researchers are still working to gain a full understanding of how large language models (LLMs) truly function. What the general public may be interested to discover is that these models use “word vectors” to make word predictions. In other words, ChatGPT is not composing a holistic reply; it is simply predicting the next word, one word at a time.
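For instructors who want to demystify this for students, here is a minimal sketch of the idea in Python, not how any production model is implemented: the toy vocabulary and word vectors below are invented for illustration, and the “model” simply scores each candidate word against the context and turns those scores into probabilities.

```python
# Toy illustration of next-word prediction with word vectors.
import numpy as np

# Invented 3-dimensional word vectors; real models learn thousands of dimensions.
vectors = {
    "the": np.array([0.1, 0.0, 0.2]),
    "cat": np.array([0.9, 0.1, 0.3]),
    "sat": np.array([0.2, 0.9, 0.1]),
    "mat": np.array([0.3, 0.8, 0.2]),
    "barked": np.array([0.1, 0.8, 0.6]),
}

def next_word_probabilities(context_words):
    """Average the context vectors, score every vocabulary word against that
    context, and convert the scores to a probability distribution (softmax)."""
    context = np.mean([vectors[w] for w in context_words], axis=0)
    words = list(vectors)
    scores = np.array([vectors[w] @ context for w in words])
    probs = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(words, probs.round(3)))

# The "model" is not planning a whole sentence; it only ranks likely next words.
print(next_word_probabilities(["the", "cat"]))
```

Real systems repeat this step over and over, appending each predicted word to the context, which is why a reply emerges one word (or token) at a time.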
How Will Artificial Intelligence Change Higher Ed? Twelve Scholars and Administrators Explain
From admissions to assessment, academic integrity to scholarly research, university operations to disappearing jobs, here’s how 12 professors, administrators, and writers answer the question: How will AI change higher education?
A.I. Might Not Replace You, But a Person Who Uses A.I. Could
As generative AI tools like ChatGPT become more accessible, companies are looking to integrate them into their operations, and this responsibility often falls on prompt engineers, who specialize in formulating the right questions to get desired AI-generated outcomes. The World Economic Forum predicts that AI will create 97 million new jobs, despite automation eliminating 85 million, resulting in a net gain of 12 million jobs. However, there’s a significant skills gap in the AI field, with a shortage of qualified candidates. As AI tools like ChatGPT become more widespread, the demand for prompt engineers is expected to rise.
World Economic Forum: Future of Jobs Report 2023
Fortune Magazine highlights the World Economic Forum’s Future of Jobs Report, which suggests that while AI may not directly replace a worker, those with AI skills might. Addressing concerns among students, the report predicts that despite some workforce displacement, AI will result in a net increase of 12 million jobs, and these new positions are expected to demand a certain level of proficiency in working with artificial intelligence. The report also reveals that concerns about future job prospects are not limited to students: very few existing employees are adequately prepared for the transition to AI-powered roles. As Fortune notes, “The A.I. field has the biggest skills gap in the tech industry, meaning there are few qualified applicants for roles despite a rapidly growing need.” Consequently, both recent graduates and current employees will likely need to acquire new skills to integrate AI into their work effectively. To bridge this gap, educational institutions will need to adapt how they prepare students for a changing job landscape. That challenge presents an opportunity for schools to equip their students with the tools to thrive in this evolving work environment.
Six Examples of Real Businesses Using Dall-E for Visual Content
Just like it sounds, this piece is a straightforward shortlist of in-demand practical applications, each accompanied by example images. These images are, of course, the products of carefully crafted prompts, so this piece might serve as a good springboard for a conversation about prompt engineering.
How to Recognize AI Training Bias
The article delves into the issue of bias in AI systems, emphasizing the need to recognize and address different forms of bias in AI. It provides five categories of bias in AI, each illustrated with childhood metaphors:
- Dataset Bias: This is akin to a child’s limited perspective, where AI is trained on a small, homogenous dataset, leading to underrepresentation.
- Associations Bias: Similar to children assuming gender roles, AI can perpetuate cultural biases, particularly in gender and ethnicity.
- Automation Bias: Like a makeover gone wrong, this bias occurs when automated decisions override social and cultural considerations, potentially working against human diversity.
- Interaction Bias: This is depicted as when a kid intentionally changes information in the game of “Telephone,” highlighting the potential for malicious human input to taint AI systems.
- Confirmation Bias: Similar to how a child who receives a dinosaur toy gets more dinosaurs, this bias occurs when AI oversimplifies personalization and reinforces existing preconceptions, leading to limited and less diverse results.
The article underscores the importance of addressing these biases from the inception of AI design to create more inclusive and trustworthy AI systems.
Student Guide to Using ChatGPT, from the University of Arizona Libraries
This resource answers students’ questions such as: Why is it important to fact-check? How do I cite ChatGPT? How can I learn more about LLMs?
Generative AI in Teaching and Learning, UVA Center for Teaching Excellence Hub
A gallery of instructional support pieces ranging across disciplines and levels of technical expertise.