Thanks to the recent explosion of artificial intelligence (AI) technologies, SMU has developed three separate policies on AI integration to help professors set expectations about the technology's use in the classroom.
Professors across campus can adopt one of SMU's pre-written policies, which either ban the technology or permit only limited use, or they can create their own policy on AI integration in the classroom.
“ChatGPT is a powerful tool that can help people communicate more effectively with technology, and it’s a great example of how AI is changing the way we interact with computers and each other,” according to an SMU webpage that lists contemporary AI op-eds as well as tools for learning and using various AI programs.
Samantha Marbry, a writing and rhetoric professor, prohibits AI in her classes to encourage her students to generate and develop their own ideas as writers.
“What I have seen in my limited use, yes, AI can generate ideas, but it does not do so with clarity and grace, and it does not do so with specificity,” Marbry said. “When I get responses that I think may be AI-generated, they are bland, and they sound like they have no person behind them.”
Marbry also expressed her frustration with AI models being trained on others’ intellectual property, especially after two of her own novels were used to train AI without her permission.
“I think that’s terrible, on an ethical or professional level, there’s no check I’m getting for that or anything,” she said. “So I don’t like the ethics behind the way that they scrape information to train their bots.”
Cheryl Mendenhall, a graphic design professor at SMU, said the graphic design department has embraced integrating AI into its teaching. Both the graphic design and advertising departments encourage students to use AI to help storyboard ideas or generate basic images, while the industry continues to develop its standards around the technology.
“It’s so new in the industry to say ‘absolutely you can’t use it’ is crazy,” Mendenhall said. “It’s good to start to learn how those prompts work and see how we can start to work them into a prompt flow where they are helping us come up with ideas but not necessarily giving them the idea.”
Mendenhall also stressed adopting platforms with ethically sourced AI as a standard within and across the graphic design and advertising departments.
“I’m steering students more towards Adobe,” she said. “They’ve sourced from their stock library, which they have all the permissions and copyright for, and they’re not sourcing and learning from random everybody out there on the internet.”
Mendenhall also mentioned that the advertising faculty met with a local ad agency, TRG, for a workshop last fall to discuss industry practices involving AI.
“In the industry, there’s a lot of copyright and ‘is it legal’ questions,” she explained. “We had a discussion with [TRG] and the industry on how they’re using AI and how it’s working for them to see if they’d really been able to do much with it yet legally, ethically.”
Jennifer Culver, senior academic technology services director for the Simmons School of Education and Human Development and an adjunct for technology in education and humanities classes, follows her own AI policy. In Culver’s courses, students use AI as an assistant on STEM-oriented assignments, which allows them to refine their thoughts.
“AI is a great thinking partner, it can help brainstorm ideas, suggest improvements, come up with more options to a problem, and organize thoughts,” Culver said. “The back and forth nature of the generative AI helps users refine their thoughts as the ‘conversation’ continues.”
Culver sees potential for AI integration to help students with a range of tasks, from writing or debugging complex code to recommending ideas. But she also asks students to think critically about AI usage and its ethical concerns.
“We also had assignments to evaluate and audit certain AI tools so that students had a better sense of the tools being used, both the positives and negatives,” she said. “We talk about [AI concerns about things like bias and stolen work], and this is part of the evaluation or audit of the tools.”
Culver also pointed out that while generative AI may be new, people have been using AI to help with their work for years now.
“It’s in Grammarly, it’s the algorithm that helps recommend movies to watch or books to read, it’s in some of the chatbots that companies use to answer easy questions,” Culver said. “The big difference is in this generative ability and the implications around that.”