by Derek Bruff, visiting associate director
Earlier this week, I had the chance to host another event in the series on generative AI organized by CETL and AIG. Wednesday’s event was titled “Beyond ChatGPT: New Tools to Augment Your Research,” and it featured two speakers: Marc Watkins, academic innovation fellow and lecturer in writing and rhetoric, who seems to always be three steps ahead of me on generative AI, and Kellye Makamson, lecturer in writing and rhetoric and a formerly reluctant adopter of AI in teaching. Marc provided some updates on generative AI technologies available to UM instructors, and Kellye shared examples of her use of a particular AI tool (Perplexity) in her writing courses. Below you’ll find a few highlights from the session.
First, Marc’s updates, which you can also find on this Google doc:
- Bing Chat Enterprise is now available to UM faculty and staff. This version of Bing Chat runs on GPT-4, which is noticeably more powerful than the earlier model behind the public version of Bing Chat. It also integrates with DALL-E for image generation and, since it’s available through UM’s agreement with Microsoft, comes with Microsoft data protection. This means you can use it to analyze data that you shouldn’t share with other tools for privacy reasons. To access Bing Chat Enterprise, visit Bing.com and sign in with your UM credentials.
- UM’s Blackboard Ultra now has an “AI design assistant” available to UM instructors. This assistant can quickly build out your course shell with modules, descriptions, and images, and it can generate quiz questions and rubrics based on your course content. Anthology is the company that provides Blackboard, and you can read more about the new AI design assistant on their website. Marc said that the folks at the Faculty Technology Development Center (better known as FTDC) can assist instructors with getting started with this new AI tool.
- Microsoft will soon be releasing an AI assistant called Copilot for its Office 365 programs, which include Word, Excel, and lots more. In the demo video, Copilot is seen reading meeting notes from a user’s OneNote files along with a Word document about a proposal that needs to be written. Copilot then generates a draft proposal based on the meeting notes and the proposal request. It looks like Copilot will be able to handle all kinds of practical (and sometimes boring) tasks! UM hasn’t decided to purchase Copilot yet, since it comes with an additional per-user charge, but Microsoft will be on campus to demo the new AI tool as part of Data Science Day on November 14th.
- Just this past Monday, OpenAI, the company behind ChatGPT, made a bunch of announcements, including a cost decrease for ChatGPT Plus (the pro version of the tool that runs the more powerful GPT-4), a new GPT-4 Turbo that runs faster and allows for much more user input, and a way to create your own GPT-powered chat tools, among other things. One of the sample “GPTs” purports to explain board and card games to players of any age! We’ll see about that.
Marc also recommended checking out Claude, a generative AI chatbot that’s comparable to ChatGPT Plus (the one that runs on GPT-4) but is (a) free and (b) equipped with a larger “context window,” that is, it accepts greater amounts of user input. You can, for instance, give it a 30,000-word document full of de-identified student feedback and ask it to analyze the feedback for key themes. (Note that Claude is provided by a company called Anthropic, which is a different company from Anthology, the folks that make Blackboard. Don’t be confused like I was.)
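If you’d rather script that kind of analysis than paste text into the chat interface, here’s a minimal sketch using Anthropic’s Python SDK. The model name, file name, and prompt are my own illustrative assumptions, not something demonstrated at the session, and you’d still want to work only with de-identified data.

```python
# A minimal sketch of theme analysis with Claude via Anthropic's Python SDK.
# The model id, file name, and prompt below are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load a long, de-identified feedback document (hypothetical file name)
with open("student_feedback_deidentified.txt") as f:
    feedback = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model id; substitute a current one
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Here is a large set of de-identified student feedback:\n\n"
            f"{feedback}\n\n"
            "Please identify and summarize the key themes in this feedback."
        ),
    }],
)

# Print the model's text reply
print(response.content[0].text)
```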
After these updates, Kellye took us on a deep dive into her use of the AI tool Perplexity in her writing courses. See her slides, “AI-Assisted Assignments in Student Learning Circles,” for more information, but what follows is my recap.
Kellye attended the AI institute that Marc organized this past summer. She came in hesitant about using AI in her teaching and was at first a little overwhelmed by the variety and power of these technologies, but now she is actively experimenting with AI in her courses. She has also become accustomed to feeling overwhelmed by the pace of change in generative AI, and she draws on that feeling to empathize with her students, who are often overwhelmed, too.
Kellye uses a small-group structure in her courses that she calls student learning circles. These are persistent groups of 3-5 students each that meet during class time weekly throughout the semester. She called these class sessions “authority-empty spaces” since she encourages the students to meet around the building without her. She’s available in her office and by email for assistance during these class sessions, but she’s encouraging student autonomy by removing herself from the front of the room.
One of the AI-supported activities in her first-year composition course involves “stress testing” claims. She opens this activity by considering a common claim about learning styles: that each student has a specific way of learning (verbal, visual, and so on) in which they learn best. She’ll ask her students if they know their learning style, and most report being visual learners, with verbal learners a distant second. Then she’ll ask Perplexity, a generative AI tool like ChatGPT but with better sourcing, “Are learning styles real?” Of course, there are piles of research on this topic, and all of it refutes the so-called “matching hypothesis” that students learn best when the instructional modality matches their learning style. It becomes clear from Perplexity’s response, replete with footnotes, that the claim about learning styles is questionable.
Then Kellye turns her students loose on a list of claims on various topics: crime rates, campus mental health, pandemic learning loss, and much more. First, students work in their groups to analyze a given claim. What do they know about it? Do they agree with it? What assumptions are baked into the claim? Next, the student groups ask Perplexity about the claims, using whatever prompts they want. Finally, Kellye provides students with specific prompts for each claim, ones aimed at uncovering the assumptions the claim makes. The students enter these prompts into Perplexity and then analyze the results.
Here’s an example. One of Kellye’s claims reads, “With record crime rates across the nation, cities should invest in robust community programs designed to increase awareness of prevention methods to keep residents safe.” One group of students noted in their pre-Perplexity analysis that there are already a lot of such community programs that don’t seem to be lowering the crime rates, so the claim needs more work around the types of programs to be launched, perhaps matching them with particular kinds of crime. When the group asked Perplexity about the claim, the bot said something similar, noting that such programs need to be targeted to types of crime. But then Kellye provided the group with her prompt: “Are crime rates at an all-time high?” Perplexity quickly provided lots of data indicating that, in fact, crime rates are far lower than they’ve been historically. There was an assumption baked into the claim, that crime rates are at record highs, that neither the students nor Perplexity picked up!
I find this activity fascinating for a couple of reasons. One is that it shows how hard it can be to “stress test” a claim, and that students need opportunities to learn how to do this kind of work. The other is that the AI chatbot wasn’t any better than the students at identifying faulty assumptions baked into a claim. Perplexity did a great job backing up its statements with real sources from the internet (which students could follow and analyze for credibility), but it only answered the questions it was given. What you get from the AI depends on what you ask, which means asking good questions is just as important with an AI as it is without one.
It’s possible that other AI tools might be better at this kind of questioning of assumptions. Bing Chat, for instance, will often suggest follow-up questions you might ask after it answers one of yours. On the other hand, I’ve found that the quality of the sources Bing Chat uses is often low. Regardless, I love Kellye’s activity as a way to teach students to think critically both about the claims they encounter and about the output of an AI chatbot.
I’ll end with one more note from Kellye. She was asked how her students react to using generative AI in her course. She said that several of her students had a hard time believing her when she said it was okay to use these tools. They had received clear messages from somewhere (other instructors?) that using generative AI for coursework was forbidden. But once these students started experimenting with Perplexity and other similar tools, they were impressed at how helpful the tools were for improving their understanding and writing. Kellye also noted that when students are working in groups, they’re much more likely to question the output of an AI chatbot than when they’re working individually.
This week’s event was our last in the series on AI this fall, but stay tuned for more great conversations on this topic in the spring semester.