
Center for Excellence in Teaching and Learning

University of Mississippi



Recap: Student Belonging in STEM

· Nov 15, 2023 ·

by Derek Bruff, visiting associate director
Last Friday, CETL hosted the third event in our STEM teaching lunch series. This conversation was focused on student belonging with a presentation by Rebecca Symula, instructional associate professor of biology; Susan Pedigo, professor of chemistry and biochemistry; and Jessica Osborne, principal evaluation associate at the Center for Research Evaluation (CERE). The presenters are all part of a grant from the Howard Hughes Medical Institute (HHMI) to understand, promote, and evaluate inclusivity in STEM education.

As part of the grant work, Jessica and her CERE colleagues conducted focus groups with undergraduate students using a World Cafe approach and surveyed STEM faculty members to get a better understanding of student belonging and the actions that can foster it. At the Friday lunch, Jessica shared some of the results of that research while Rebecca and Susan connected those results to their individual teaching practices and invited discussion among the faculty, staff, and graduate students in attendance.

What is student belonging? Rebecca Symula shared a definition from researcher Terrell Strayhorn: “Sense of belonging is a basic human need, and at the most basic level, is whether or not a person feels respected, valued, accepted, cared for, and included.” I’ve often cited the research of Geoffrey Cohen, who studies belonging in education; in a recent interview he described belonging as simply the sense that we are a valued part of a community. Cohen has a new book out called Belonging: The Science of Creating Connections and Bridging Divides, and it’s on my desk ready for me to read soon! There’s also a model developed by Zumbrunn et al. (2014) specifically for science education which situates belonging as a pre-condition for self-efficacy in a course and a valuing of course tasks, both of which in turn lead to motivation to learn and the development of a science identity.

During the presentation, Jessica shared a few instructor behaviors that study participants identified as fostering belonging, including opening up the class to discussion, encouraging students to help each other, sharing what’s happening in the department, and providing consistent due dates and reminders. She asked those of us in the audience to guess whether each behavior was identified by a student participant or a faculty participant, and we generally got the answers wrong! Jessica made the point that there’s a lot of overlap between what students and faculty say about fostering belonging, a point our inability to tell the responses apart demonstrated nicely.

From the list of instructional approaches identified by study participants as fostering belonging, we drilled down on a couple of practices during the discussion. Knowing student names was a hot topic because it helps students feel valued and encourages them to seek help (Cooper et al., 2017), but it’s hard to pull off practically, especially if you have more than 20 or 30 students. CETL director Josh Eyler pointed out that using student names is what matters, not memorizing them, which means that having students use name tents during class can help with belonging. I have a standard discussion rule that the first time someone speaks up in class they should introduce themselves, which is another way to actively use student names. There was also a recent graduate in the room who added that names aren’t entirely necessary; just recognizing a student you pass on the sidewalk as one of your students can help that student feel they have a place at the university.

We also talked about what I like to call “first week of class work,” that is, what we say in our syllabi and how we talk about our courses to students early in the semester. Students are more likely to see instructors as approachable when instructors use warm and welcoming language in these settings (Harnish and Bridges, 2011). Susan Pedigo mentioned that her syllabus used to read like a legal contract, but as she’s learned more about this line of research, she’s adopted a much more invitational tone in her course documents. Other lunch participants talked about the importance of helping students understand what they’re likely to get out of a course so that they see value in participating actively in the course. And I’ll recommend a resource I just learned about today, a “Who’s in class?” form developed by educational developer Tracie Addy and colleagues that’s useful for learning about your students and also communicating to them that you care about them as people and as learners.

One topic we didn’t dive into was the ways that students’ social identities (gender, race, first-generation status, neurodivergence, and so on) can affect their sense of belonging. This is a big topic and an important one, and I hope we’ll explore it at a future STEM teaching lunch. For now, I’ll point to some of Geoffrey Cohen’s research showing that a relatively brief intervention can have lasting effects on college student success. The intervention helped students see that worries about belonging are normal and tend to improve over time, and just thirty minutes of this was enough to improve first-year completion rates by two percentage points for students in groups that historically persist at lower rates. Using students’ names and adopting a welcoming tone during the first week of class are useful moves to make, but there are many more strategies instructors can leverage to help students develop a sense of belonging.

Thanks to our presenters and to all of those who came and participated in the discussion. And stay tuned to the usual CETL channels to hear about our spring semester slate of STEM teaching lunches.

Recap: Beyond ChatGPT – New Tools to Augment Your Research

· Nov 10, 2023 ·

by Derek Bruff, visiting associate director

Earlier this week, I had the chance to host another event in the series on generative AI organized by CETL and AIG. Wednesday’s event was titled “Beyond ChatGPT: New Tools to Augment Your Research,” and it featured speakers Marc Watkins, academic innovation fellow and lecturer of writing and rhetoric (and someone who always seems to be three steps ahead of me on generative AI), and Kellye Makamson, lecturer in writing and rhetoric and a formerly reluctant adopter of AI in teaching. Marc provided some updates on generative AI technologies available to UM instructors, and Kellye shared examples of her use of a particular AI tool (Perplexity) in her writing courses. Below you’ll find a few highlights from the session.

First, Marc’s updates, which you can also find on this Google doc:

  • Bing Chat Enterprise is now available to UM faculty and staff. This version of Bing Chat runs on GPT-4, which is noticeably more powerful than the earlier version that powers the public version of Bing Chat. It also has an integration with DALL-E for image generation and, since it’s available through UM’s agreement with Microsoft, comes with Microsoft data protection. This means you can use it to analyze data that you shouldn’t share with other tools for privacy reasons. To access Bing Chat Enterprise, visit Bing.com and sign in with your UM credentials.

  • UM’s Blackboard Ultra now has an “AI design assistant” available to UM instructors. This assistant can quickly build out your course shell with modules and descriptions and images, and it can generate quiz questions and rubrics based on your course content. Anthology is the company that provides Blackboard, and you can read more about the new AI design assistant on their website. Marc said that the folks at the Faculty Technology Development Center (better known as FTDC) can assist instructors with getting started with this new AI tool.
  • Microsoft will soon be releasing an AI assistant called Copilot for their Office 365 programs, which include Word and Excel and lots more. In the demo video, Copilot is seen reading meeting notes from a user’s OneNote files along with a Word document about a proposal that needs to be written. Then Copilot generates a draft proposal based on the meeting notes and the proposal request. It looks like Copilot will be able to do all kinds of practical (and sometimes boring) tasks! UM hasn’t decided to purchase Copilot yet, since there’s an additional per-user charge, but Microsoft will be on campus to demo the new AI tool as part of Data Science Day on November 14th.
  • Just this past Monday, OpenAI, the company behind ChatGPT, made a bunch of announcements, including a cost decrease for ChatGPT Plus (the pro version of the tool that runs the more powerful GPT-4), a new GPT-4 Turbo that runs faster and allows for much more user input, and a way to create your own GPT-powered chat tools, among other things. One of the sample “GPTs” purports to explain board and card games to players of any age! We’ll see about that.

Marc also recommended checking out Claude, a generative AI chatbot that’s comparable to ChatGPT Plus (the one that runs on GPT-4) but (a) is free and (b) has a large “context window,” that is, it allows greater amounts of user input. You can, for instance, give it a 30,000-word document full of de-identified student feedback and ask it to analyze the feedback for key themes. (Note that Claude is provided by a company called Anthropic, which is a different company from Anthology, the folks that make Blackboard. Don’t be confused like I was.)
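
If you’d rather script that kind of theme analysis than paste text into Claude’s chat interface, here’s a minimal sketch using Anthropic’s Python SDK. To be clear, this wasn’t covered at the session: the model name, file name, and prompt below are my own illustrative assumptions, and you’d need your own Anthropic API key.

```python
# Minimal sketch (not from the session): asking Claude to surface key themes
# in a long document of de-identified student feedback, using Anthropic's
# Python SDK. Model name, file name, and prompt are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load the feedback. De-identify it first; don't send student names or IDs.
with open("student_feedback.txt", encoding="utf-8") as f:
    feedback = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # adjust to whatever model you have access to
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": (
                "Here is de-identified student feedback from my course:\n\n"
                + feedback
                + "\n\nPlease identify the key themes in this feedback, "
                "with a representative quote for each theme."
            ),
        }
    ],
)

print(response.content[0].text)  # Claude's summary of the themes
```

The same prompt works fine in the chat interface, of course; scripting it is mainly useful if you want to repeat the analysis across many courses or semesters.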

After these updates, Kellye took us on a deep dive into her use of the AI tool Perplexity in her writing courses. See her slides, “AI-Assisted Assignments in Student Learning Circles,” for more information, but what follows is my recap.

Kellye attended the AI institute that Marc organized this past summer. She came in hesitant about using AI in her teaching and was a little overwhelmed at the variety and power of these technologies at first, but now she is actively experimenting with AI in her courses. She has also become accustomed to feeling overwhelmed at the pace of change in generative AI, and she draws on that feeling to empathize with her students, who are often overwhelmed themselves.

Kellye uses a small-group structure in her courses that she calls student learning circles. These are persistent groups of 3-5 students each that meet during class time weekly throughout the semester. She called these class sessions “authority-empty spaces” since she encourages the students to meet around the building without her. She’s available in her office and by email for assistance during these class sessions, but she’s encouraging student autonomy by removing herself from the front of the room.

[DALL-E-generated image of “a robot fact-checking a story in a newspaper”]

One of the AI-supported activities in her first-year composition course involves “stress testing” claims. She opens this activity by considering a common claim about learning styles: that each student has a specific way of learning (verbal, visual, and so on) in which they learn best. She’ll ask her students if they know their learning style, and most report being visual learners, with verbal learners a distant second. Then she’ll ask Perplexity, a generative AI tool like ChatGPT but with better sources, “Are learning styles real?” Of course, there are piles of research on this topic, and all of it refutes the so-called “matching hypothesis” that students learn best when the instructional modality matches their learning style. It becomes clear from Perplexity’s response, replete with footnotes, that the claim about learning styles is questionable.

Then Kellye turns her students loose on a list of claims on various topics: crime rates, campus mental health, pandemic learning loss, and much more. First, students work in their groups to analyze a given claim. What do they know about it? Do they agree with it? What assumptions are baked into the claim? Next, the student groups ask Perplexity about the claims, using whatever prompts they want. Finally, Kellye provides students with specific prompts for the claim, ones aimed at uncovering the assumptions the claim makes, and the students enter these prompts into Perplexity and analyze the results.

Here’s an example. One of Kellye’s claims reads, “With record crime rates across the nation, cities should invest in robust community programs designed to increase awareness of prevention methods to keep residents safe.” One group of students noted in their pre-Perplexity analysis that there are already a lot of such community programs that don’t seem to be lowering the crime rates, so the claim needs more work around the types of programs to be launched, perhaps matching them with particular kinds of crime. When the group asked Perplexity about the claim, the bot said something similar, noting that such programs need to be targeted to types of crime. But then Kellye provided the group with her prompt: “Are crime rates at an all-time high?” Perplexity quickly provided lots of data indicating that, in fact, crime rates are far lower than they’ve been historically. There was an assumption baked into the claim, that crime rates are at record highs, that neither the students nor Perplexity picked up!

I find this activity fascinating for a couple of reasons. One is that it shows how hard it can be to “stress test” a claim, that students need opportunities to learn how to do this kind of work. The other is that the AI chatbot wasn’t any better than the students in identifying faulty assumptions baked into a claim. Perplexity did a great job backing up its statements with real sources from the internet (that students could follow and analyze for credibility), but it only answered the questions it was given. What you get from the AI depends on what you ask, which means it’s just as important to be asking good questions with an AI as it was without an AI.

It’s possible that other AI tools might be better at this kind of questioning of assumptions. Bing Chat, for instance, will often suggest follow-up questions you might ask after it answers one of your questions. On the other hand, I’ve found that the quality of the sources Bing Chat uses is often low. Regardless, I love Kellye’s activity as a way to teach students how to think critically about the claims they encounter and how to think critically about the output of an AI chatbot.

I’ll end with one more note from Kellye. She was asked how her students react to using generative AI in her course. She said that several of her students had a hard time believing her when she said it was okay that they use these tools. They had received clear messages from somewhere (other instructors?) that using generative AI for coursework was forbidden. But once these students started experimenting with Perplexity and other similar tools, they were impressed at how helpful the tools were for improving their understanding and writing. Kellye also noted that when students are working in groups, they’re much more likely to question the output of an AI chatbot than when they’re working individually.

This week’s event was the last in our series on AI this fall, but stay tuned for more great conversations on this topic in the spring semester.

Recap: Getting Started with Alternative Grading Approaches

· Nov 6, 2023 ·

by Emily Donahoe, associate director for instructional support

Last week, Josh Eyler and I facilitated a reprise of our workshop on “Getting Started with Alternative Grading Approaches,” a topic we’re both pretty passionate about. You can review our slides here. What follows are some highlights from this workshop.

We’ve defined “alternative grading,” in contrast to “traditional grading,” as a set of practices designed to center student growth through some combination of revision, multiple attempts, significant feedback, student reflection, and mastery of skills and dispositions. These models strive to decenter the grade itself and instead prioritize learning. 

Because grading touches so many areas of our practice as teachers, and because implementing new models requires a significant time investment, we thought it was important to consider three things before deciding what kind of alternative grading approach might work best for you:

  • Your beliefs about learning: How do people learn best? What conditions are most conducive to learning? What student and instructor actions best promote learning? What roles do student agency, failure, metacognition, etc. play in learning?
  • Your desired outcomes for student learning: What competencies or dispositions do you most want students to develop? Agency and autonomy? Metacognition? Self-motivation? Love of learning? The possibilities are endless.
  • Your teaching contexts: What are the expectations of your colleagues? What are your social identities? At what career stage are you? What are your typical class sizes? What time do you have to devote to new teaching methods?

Answering questions like these can provide a strong foundation for making decisions about your grading system. 

We also introduced some common features of alternative grading models, as delineated by Robert Talbert and David Clark, authors of the new book Grading for Growth. Talbert and Clark suggest that grading for growth is based on four pillars:

  1. Clearly defined standards
  2. Helpful feedback
  3. Marks that indicate progress
  4. Reattempts without penalty

You can read more about these pillars in Talbert and Clark’s book or on their Substack.

Finally, Josh and I introduced four different alternative grading approaches:

    • Labor-based contract grading: A system in which student grades are based on the amount of labor they undertake in the course rather than evaluations of the quality of individual assignments, with expectations laid out at the beginning of the semester through a grade contract. 
    • Specifications/Mastery/Standards-based grading: Three closely related systems in which student grades are determined by demonstrations of competency on specific outcomes, through multiple attempts and continuous feedback across the semester.
    • Ungrading (or collaborative grading): In which students engage in supported self-assessment and determine their final course grade collaboratively with the instructor, often by presenting evidence of their learning in an individual conference.
    • Portfolio grading: In which students continuously revise and improve their work across the semester for inclusion in a final portfolio, the overall quality of which determines their final grade. 

These approaches aren’t necessarily “plug and play”: instructors should feel free to take the elements of each practice that work for them and leave the rest. And if you’re not ready to jump feet first into a new grading system, there are plenty of things you can do to start minimizing grades in your courses: assign work that receives feedback but no grade, for example; build in opportunities for self-assessment; or allow revisions and re-takes when possible. 

To help instructors learn more about these practices, we also put together an alternative grading bibliography with lots of great resources. You can also read about these practices in our workshop slides. If you’d like additional support in developing a new approach to grading in your course, don’t hesitate to reach out to us at cetl@olemiss.edu to schedule a consultation!  

Recap: A Pedagogy of Kindness with Dr. Cate Denial

· Sep 26, 2023 ·

by Emily Pitts Donahoe, associate director of instructional support

This past Wednesday, CETL was thrilled to host guest speaker Dr. Cate Denial, the Bright Distinguished Professor of American History and Director of the Bright Institute at Knox College. Cate is the principal investigator on a Mellon Foundation grant exploring “Pedagogies, Communities, and Practices of Care in the Academy After COVID-19.” She is also the author of a forthcoming book titled A Pedagogy of Kindness. In short, she has a lot of wisdom to share about compassionate teaching.

Academia and Kindness

Cate began her talk by defining kindness, both by what it is and what it is not. Kindness, Cate stresses, is not the same thing as “niceness.” Being “nice” often means lying about or minimizing the deep wounds caused by precarity, power imbalances, burnout, and other toxic elements of academia. Instead, practicing kindness means being honest about the challenges we face and the ways in which academia has been, and continues to be, hostile toward so many marginalized populations. Kindness also involves accountability, requiring us to take responsibility for the impact of our mistakes and to prioritize the pursuit of justice over the pursuit of comfort. Finally, kindness is a discipline that we must practice even when we don’t feel particularly compassionate.

The academy, as we know, is often unkind. It socializes us into distrust, encouraging competition over collaboration, perpetuating exclusion, and promoting antagonism, especially toward our students. Cate shared her own story about how this distrust harmed her experience as a graduate student and early career faculty member, and the ways she learned to do things differently. This involved critically questioning the pedagogical choices she made, with one key question in mind: “Why not be kind?”

A Pedagogy of Kindness

Cate’s “pedagogy of kindness” has three main tenets:

Justice: Cate shared some pretty stunning numbers about the challenges our students are facing. For example, almost half of undergraduates in a recent study experienced housing insecurity while seeking their degree, and a third experienced food insecurity. Many students—especially our disabled, LGBTQIA+, and BIPOC students—are navigating academic environments that were not designed with their needs and identities in mind. 

In light of these challenges, justice asks us to question our assumptions about students, to give them the benefit of the doubt, and to get rid of the hoops that stand in their way. 

Believing Students: When a student says that they’re facing a challenge that prevents them from engaging with our class, we could either start from a position of suspicion or from a position of belief and support. Cate advocates for cultivating trusting relationships with students: it’s better to be occasionally taken in by student dishonesty than to risk harming students who are genuinely in crisis. 

Believing in Students: Believing in students means investing in the idea that all students are capable of success with the proper support and that students really want to learn and grow. In practice, that means two things: rethinking our course materials with trust in mind and collaborating with students in their learning.

When Cate critically reflected on her course materials a few years ago, she discovered that she was designing for students she didn’t trust; that she was projecting an image of unassailable authority and unapproachability; and that her materials were not accessible to a broad range of students. She redesigned her syllabi and assignment sheets in ways that made the course more transparent and welcoming. She also redesigned her assessments to allow students to demonstrate their learning in a variety of ways—not just in writing.

Additionally, her revisions created opportunities for students to contribute to, and even co-create, her course. This means that students have input on the course policies, assignments, and even grading and that they complete authentic assessments that have meaning beyond the classroom. Cate encouraged attendees to find similar ways to be kind to the students in their own courses.

Being Kind to Yourself

Finally, Cate emphasized that students should not be the only recipients of your kindness. You can be kind to yourself in a lot of different ways, but perhaps the most important are setting boundaries and giving yourself space to rest and grow—not only in your teaching but also in your life outside of work.

We’re grateful to Cate for sharing her time and wisdom with us last week! To learn more about Cate’s work, you can follow her on Twitter and Bluesky or visit https://catherinedenial.org/. Be on the lookout for A Pedagogy of Kindness, forthcoming July 2024! 

Recap: Teaching in the Age of AI (What’s Working, What’s Not)

· Sep 21, 2023 ·

by Derek Bruff, visiting associate director

[A robot writing on a sheet of paper on a cluttered desk, as imagined by Midjourney]

Earlier this week, CETL and AIG hosted a discussion among UM faculty and other instructors about teaching and AI this fall semester. We wanted to know what was working when it came to policies and assignments that responded to generative AI technologies like ChatGPT, Google Bard, Midjourney, DALL-E, and more. We were also interested in hearing what wasn’t working, as well as questions and concerns that the university community had about teaching and AI.

We started the session with a Zoom poll asking participants what kinds of AI policies they had this fall in their courses. There were very few “red lights,” that is, instructors who prohibited generative AI in their courses and assignments. There were far more “yellow lights” who permitted AI use with some limitations. We had some “green lights” in the room, who were all in on AI, and a fair number of “flashing lights” who were still figuring out their AI policies!

Robert Cummings and I had invited a few faculty to serve as lead discussants at the event. Here is some of what they said about their approaches to AI this fall:

  • Guy Krueger, senior lecturer in writing and rhetoric, described himself as a “green light.” He encourages his writing students to use AI text generators like ChatGPT in their writing, perhaps as ways to get started on a piece of writing, ways to get additional ideas and perspectives on a topic, or tools for polishing their writing. Guy said that he’s most interested in the rhetorical choices that his students are making, and that the use of AI, along with some reflection by his students, generated good conversations about those rhetorical choices. He mentioned that students who receive feedback on their writing from an instructor often feel obligated to follow that feedback, but they’re more likely to say “no, thanks” to feedback from a chatbot, leading to perhaps more intentional decisions as writers. (I thought that was an interesting insight!)
  • Deborah Mower, associate professor of philosophy, said she was a “red light.” She said that with generative AI changing so rapidly, she didn’t feel the time was right to guide students effectively in their use or non-use of these tools. She’s also teaching a graduate-level course in her department this fall, and she feels that those students need to attend to traditional methods of research and writing in their discipline. Next semester, she’ll be teaching an undergraduate course with semester-long, scaffolded projects on topics in which her students are invested, and she’s planning in that course to have them use some AI tools here and there, perhaps by writing something without AI first and then revising that piece with AI input. (It sounds like she’s planning the kinds of authentic assignments that will minimize students’ use of AI to shortcut their own learning.)
  • Melody Xiong, instructor of computer and information science, is a “yellow” light in the 100-, 200-, and 400-level courses she’s teaching this fall. She teaches computer programming, and while she doesn’t want students to just copy AI output for their assignments, she is okay with students using AI tools as aids in writing code, much like they would have used other aids before the advent of AI code generators. She and her TAs offer a lot of office hours and her department provides free tutoring, which she hopes reduces the incentive for her students to shortcut the task of learning programming. She also knows that many of her students will face job interviews that have “pop quiz” style coding challenges, so her students know these are skills they need to develop on their own. (External assessments can be a useful forcing function.)
  • Brian Young, scholarly communication librarian, also described himself as a “green light” on AI. In his digital media studies courses, generative AI is on the syllabus. One of his goals is to develop his students’ AI literacy, and he does that through assignments that lead students through explorations and critiques of AI. He and his students have talked about the copyright issues with how AI tools are trained, as well as the “cascading biases” that can occur when the biases in the training data (e.g. gender biases in Wikipedia) then show up in the output of AI. One provocative move he made was to have an AI tool write a set of low-stakes reading response questions for his students to answer on Blackboard. When he revealed to his students that he didn’t write those questions himself, that launched a healthy conversation about AI and intellectual labor. (That’s a beautiful way to create a time for telling!)

Following the lead discussants, we opened the conversation to other participants, focusing on what’s working, what’s not, and what questions participants had. What follows are some of my takeaways from that larger conversation.

One faculty participant told the story of a colleague in real estate who is using generative AI to help write real estate listings. This colleague reports saving eight hours a week this way, freeing up time for more direct work with clients. This is the kind of AI use we’re starting to see in a variety of professions, and it has implications for the concepts and skills we teach our students. We might also find that generative AI can save us hours a week in our jobs with some clever prompting!

Several faculty mentioned that students are aware of generative AI technologies like ChatGPT and that many are interested in learning to use them appropriately, often with an eye on the job market they’ll face when they graduate. Faculty also indicated that many students haven’t really used generative AI technologies to any great extent, so they have a lot to learn about these tools. One counterpoint: Deborah Mower, the “red light,” said that her graduate students have been content not to use AI in their course with her.

Course policies about AI use vary widely across the campus, which makes for a challenging learning landscape for students. I gather that some departments have leanings one way or another (toward red or green lights), but most course policies are determined by individual instructors. This is a point to emphasize to students, that different courses have different policies, because students might assume there’s a blanket policy when there is not.

This inconsistency in policy has led some students to have a fair amount of anxiety about being accused of cheating with AI. As AIG’s Academic Innovation Fellow Marc Watkins keeps reminding us, these generative AI tools are showing up everywhere, including Google Docs and soon Microsoft Word.

Other students have pushed back on “green light” course policies, arguing that they already have solid processes for writing and inserting AI tools into those processes is disruptive. I suspect that’s coming from more advanced students, but it’s an interesting response. And one that I can relate to… I didn’t use any AI to write this blog post, for instance.

A few participants mentioned the challenge of AI use in discussion forum posts. “Responses seemed odd,” one instructor wrote. They mentioned a post that clearly featured the student’s own ideas, but not in the student’s voice. Other instructors described posts that seemed entirely generated by AI without any editing by the student. From previous conversations with faculty, I know that asynchronous online courses, which tend to lean heavily on discussion forums, are particularly challenging to teach in the current environment.

That last point about discussion posts led to many questions from instructors: Where is the line between what’s acceptable and what’s not in terms of academic integrity? How is it possible to determine what’s human-produced and what’s AI-produced? How do you motivate or support students in editing what they get from an AI tool in useful ways? How can you help students develop better discernment for quality writing?

One participant took our conversation in a different direction, noting the ecological impact of the computing power required by AI tools, as well as the ethical issues with the training data gathered from the internet by the developers of AI tools. These are significant issues, and I’m thankful they were brought up during our conversation. To learn about these issues and perhaps explore them with your students, “The Elements of AI Ethics” by communication theorist Per Axbom looks like a good place to start.

Thanks to all who participated in our Zoom conversation earlier this week. We still have a lot of unanswered questions, but I think the conversation provided some potential answers and helped shape those questions usefully. If you teach at UM and would like to talk with someone from CETL about the use of AI in your courses, please reach out.  We’re also organizing a student panel on AI and learning on October 10th. You can learn more about this event and register for it here. And if you’d like to sign up for Auburn University’s “Teaching with AI” course, you can request a slot here.

Update: I emailed Guy Krueger, one of our lead discussants, and asked him to expand on his point about students who have trouble starting a piece of writing. His response was instructive, and I received his permission to share it here.

I mentioned that I used to tell students that they don’t need to start with the introduction when writing, that I often start with body paragraphs and they can do the same to get going. And I might still mention that depending on the student; however, since we have been using AI, I have had several students prompt the AI to write an introduction or a few sentences just to give them something to look at beyond the blank screen. Sometimes they keep all or part of what the AI gives them; sometimes they don’t like it and start to re-work it, in effect beginning to write their own introductions.

I try to use a process that ensures students have plenty of material to begin drafting when we get to that stage, but every class seems to have a student or two who say they have real writer’s block. AI has definitely been a valuable tool in breaking through that. Plus, some students who don’t normally have problems getting started still like having some options and even seeing something they don’t like to help point them in a certain direction.
