Educational Opportunities
AI is here to stay.
In educational settings, this means grappling with some of its questionable uses. Some instructors are modeling what it looks like to use problem-based learning to “teach the problem.” Problem-Based Learning (PBL) is an instructional approach in which real-world challenges serve as the catalyst for processing concepts and principles, as opposed to direct instruction (passive learning). As reported in the Chronicle of Higher Education, a researcher on AI in writing courses at California’s College of Marin encourages her students to analyze ChatGPT output. Foregrounding AI as a tool, and an imperfect one, allows students to identify signs of fabrication, bias, inaccuracy, or shallow reasoning. This in turn builds confidence in their own human thinking, and it prepares them for a workforce that will increasingly demand AI-management skills.
Teaching the Problem: Example Activities for Thinking Critically about AI
Resource Spotlight: “In What Ways Can Generative AI Impact Critical Thinking, Research, and Writing?”
In this episode of a six-part series, ASU’s Director of Creative and Emerging Technologies interviews faculty at the forefront of teaching with generative AI at the School for the Future of Innovation in Society and the School of Computing and Augmented Intelligence. They discuss the definition of critical thinking, specifically its distinctly human aspects, and how it might be “cognitively ergonomic” to interact with AI.
Resource Spotlight: Practical AI, “AI for Students” by Wharton Interactive
In Part Five of this comprehensive series by Wharton Interactive, the Faculty Director and Director of Pedagogy explore how students can harness AI to enhance their learning. They equip educators with example prompts, effective communication strategies, and guidelines for facilitating discussions with students.
Faculty Resources
AI Required: Teaching in a New World, from the ASU+GSV 2023 Summit
In this talk, Professor Ethan Mollick (Faculty Director of Wharton Interactive) explains why he has adopted a fully integrated approach to the use of AI in his classes. He provides detailed advice on how to craft strong policies that help students see AI use as an emerging professional skill.
AI Policy Statements for Your Course Syllabus
If you are looking for sample policy statements to include in your syllabus regarding AI, the Marshall University Office of Academic Affairs has several resources.
Practical AI by Wharton Interactive: Introduction for Teachers
In this introduction, Wharton Interactive’s Faculty Director Ethan Mollick and Director of Pedagogy Lilach Mollick provide an overview of how large language models (LLMs) work and explain how this latest generation of models has changed how we work and how we learn. They also discuss the different LLMs referenced in their five-part crash course: OpenAI’s GPT-4 (via ChatGPT), Microsoft’s Bing in Creative Mode, and Google’s Bard. This video is Part One of a five-part course in which Wharton Interactive introduces AI large language models for educators and students. The course takes a practical approach: how the models work, how to work effectively with each one while weaving in your own expertise, and how to use AI to make teaching easier and more effective, with example prompts and guidelines, as well as ways students can use AI to improve their learning.
Arizona State University: Fostering a Positive Culture around Generative AI
In this episode of a six-part series of webinars, the Director of Creative and Emerging Technologies at ASU chats with the Associate Dean of Scholarship and Innovation and professors from the School for Innovation in Society and the School of Computing and Augmented Intelligence. They discuss emerging ethical considerations, particularly the importance of approaching AI with a humanistic “embedded ethics” approach to design. Some projections are also offered for how AI may be used to address inequities in information access and formal education.
New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments
Abstract: Chatbots are able to produce high-quality, sophisticated text in natural language. The authors of this paper believe that AI can be used to overcome barriers to learning in the classroom: improving transfer, breaking the illusion of explanatory depth, and training students to critically evaluate explanations. The paper provides background information and techniques on how AI can be used to overcome these barriers and includes prompts and assignments that teachers can incorporate into their teaching. The goal is to help teachers use the capabilities and drawbacks of AI to improve learning.
A Jargon-free Explanation of How AI Large Language Models Work
Just like the title says, this is a primer on the technology behind AI. Even researchers are still working toward a full understanding of how large language models (LLMs) truly function. What the general public may be interested to discover is that these models use “word vectors” — numerical representations of words — to predict what comes next. In other words, ChatGPT is not composing a holistic reply; it is repeatedly predicting the next word.
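The “word vector” idea can be sketched in a few lines of code. This is a toy illustration only: the vectors below are invented for demonstration, real models learn them from vast text corpora and condition on the entire context rather than a single word.

```python
# Toy sketch of next-word prediction with word vectors.
# The 3-dimensional vectors here are hand-made for illustration;
# real LLMs learn high-dimensional vectors during training.
import math

vectors = {
    "cat":   [0.9, 0.1, 0.0],
    "meows": [0.9, 0.0, 0.1],   # similar meaning -> similar vector to "cat"
    "stock": [0.0, 0.9, 0.1],   # unrelated meaning -> dissimilar vector
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def predict_next(context_word, candidates):
    # Choose the candidate whose vector best matches the context word's.
    return max(candidates, key=lambda w: cosine(vectors[context_word], vectors[w]))

print(predict_next("cat", ["meows", "stock"]))  # prints "meows"
```

After the word “cat,” the model-like scoring prefers “meows” because their vectors point in nearly the same direction, which is the intuition behind next-word prediction.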
How Will Artificial Intelligence Change Higher Ed? Twelve Scholars and Administrators Explain
From admissions to assessment, academic integrity to scholarly research, university operations to disappearing jobs, here’s how 12 professors, administrators, and writers answer the question: How will AI change higher education?
A.I. Might Not Replace You, But a Person Who Uses A.I. Could
As generative AI tools like ChatGPT become more accessible, companies are looking to integrate them into their operations, and this responsibility often falls on prompt engineers, who specialize in formulating the right questions to get desired AI-generated outcomes. The World Economic Forum predicts that AI will create 97 million new jobs, despite automation eliminating 85 million, resulting in a net gain of 12 million jobs. However, there’s a significant skills gap in the AI field, with a shortage of qualified candidates. As AI tools like ChatGPT become more widespread, the demand for prompt engineers is expected to rise.
World Economic Forum: Future of Jobs Report 2023
Fortune Magazine highlights the World Economic Forum’s Future of Jobs Report, which suggests that while AI may not directly replace a worker, those with AI skills might. Addressing concerns among students, the report predicts that despite some workforce displacement, AI will result in a net gain of 12 million new jobs, and these new positions are expected to demand a certain level of proficiency in working with artificial intelligence. The report also reveals that concerns about future job prospects are not limited to students: very few existing employees are adequately prepared for the transition to AI-powered roles. Fortune notes, “The A.I. field has the biggest skills gap in the tech industry, meaning there are few qualified applicants for roles despite a rapidly growing need.” Consequently, both recent graduates and current employees will likely need to acquire new skills to integrate AI into their work effectively. To bridge this gap, educational institutions will need to adapt how they prepare students for a changing job landscape, a challenge that presents an opportunity for schools to equip students with the tools to thrive in this evolving work environment.
Six Examples of Real Businesses Using Dall-E for Visual Content
Just like it sounds, this piece is a straightforward shortlist of in-demand practical applications, each accompanied by example images. These images are of course the products of carefully crafted prompts, so this piece might serve as a good springboard for a conversation about prompt engineering.
How to Recognize AI Training Bias
The article delves into the issue of bias in AI systems, emphasizing the need to recognize and address different forms of bias in AI. It provides five categories of bias in AI, each illustrated with childhood metaphors:
- Dataset Bias: This is akin to a child’s limited perspective, where AI is trained on a small, homogenous dataset, leading to underrepresentation.
- Associations Bias: Similar to children assuming gender roles, AI can perpetuate cultural biases, particularly in gender and ethnicity.
- Automation Bias: Like a makeover gone wrong, this bias occurs when automated decisions override social and cultural considerations, potentially at the expense of human diversity.
- Interaction Bias: This is depicted as when a kid intentionally changes information in the game of “Telephone,” highlighting the potential for malicious human input to taint AI systems.
- Confirmation Bias: Similar to how a child who receives a dinosaur toy gets more dinosaurs, this bias occurs when AI oversimplifies personalization and reinforces existing preconceptions, leading to limited and less diverse results.
The article underscores the importance of addressing these biases from the inception of AI design to create more inclusive and trustworthy AI systems.
Student Guide to Using ChatGPT, from the University of Arizona Libraries
This resource answers common student questions: Why is it important to fact-check? How do I cite ChatGPT? How can I learn more about LLMs?
Generative AI in Teaching and Learning, UVA Center for Teaching Excellence Hub
A gallery of instructional support pieces ranging across disciplines and levels of technical expertise.