Generative artificial intelligence has been a topic of considerable discussion – excitement, trepidation, curiosity – since the release of OpenAI’s ChatGPT for free public use in November 2022. Will these tools perform mundane tasks for us, enhance our productivity and creativity or someday replace us?
Leaders at UofL are taking steps to help faculty and students explore the possibilities and determine when and how the technology should be used at the university.
Generative artificial intelligence (AI) is computer technology that creates text, images, video or other materials in response to a user prompt. ChatGPT, a chatbot powered by a large language model, is one of the most recognized generative AI platforms for creating text, along with Google Bard and Microsoft Copilot. Users input questions, requests or instructions to these platforms and, in return, receive seemingly complete articles, reports or refined text.
So, does this tool allow students to bypass assignments or offer a whole new way to learn?
Jose Fernandez, associate professor of economics in the College of Business who co-chairs the university’s committee on generative AI in academics, believes generative AI will allow students to spend more time on creative projects and less on mundane processes.
“I can have my students have worldly debates with dead people who didn’t exist at the same time,” Fernandez said. “You can push the envelope a little bit in the classroom because students will spend less time learning-by-doing where they make multiple mistakes, like computer coding, before they get it right, and can spend more time being creative and critical.”
Generative AI models are trained with large amounts of existing information from the internet. This information is then processed through a digital neural network with the user’s prompt, returning what may seem like a complete and accurate composition. However, it also may confidently return information that is not factual, a phenomenon known as “hallucination.”
“This is going to be your new ‘Google,’ but it’s a Google that lies to you sometimes, so you have to be smart enough to know when it’s lying,” Fernandez said. “We have to understand its strengths and its weaknesses.”
To help faculty better understand those strengths and weaknesses, UofL’s Delphi Center for Teaching and Learning has hosted workshops and discussions over the last year to consider ways to use – or not use – generative AI in their teaching, and hosts web pages with resources for teaching in the ChatGPT era. The center’s goal is to make instructors aware of the models, support their use when appropriate and consider problems that may arise.
While some faculty members have reservations about the new technology, Kelvin Thompson, vice provost for online strategy and teaching innovation, said the Delphi Center aims to support its beneficial use.
“Everybody is hungry for information on how to process this. It is far from a settled thing,” Thompson said. “I want to encourage folks to be open-minded and see but also wisely and knowledgeably make decisions. I would characterize our approach as respectfully positive. How do we look for opportunities that weren’t there before and encourage faculty to do that?”
Thompson hopes that support will spill over to students, allowing them to learn to use generative AI tools as part of their education while also helping faculty members save time and improve processes.
Here are three examples of how Cardinals across campus are helping to employ the new technology effectively.
Legal writing
Susan Tanner, an assistant professor who teaches legal writing at Brandeis School of Law, was an early adopter of generative AI in her teaching. She is developing a toolkit to help other law instructors use the technology in their legal writing curricula.
“Lawyers are going to have to become familiar with the new technology; there actually is a duty for technological competence baked into the rules of professional conduct for attorneys,” Tanner said. “But students need to use it ethically, so I think about this from an ethical standpoint of being responsible to our students, to each other and to future clients.”
Tanner said some students have been hesitant to use generative AI, whether because they are less comfortable with technology or they worry about unintentionally using it in a dishonest way. To help them over that hump, Tanner provides specific scenarios for them to use in ChatGPT, such as to get feedback on their writing or to convert a legal memorandum into a style that is more understandable for a client.
She also uses the tool to save herself time. For example, she can create hypothetical cases for her students in about 20 minutes using ChatGPT as opposed to several hours without using AI.
AI in business
The College of Business offers an undergraduate marketing course that explores AI’s impact in the marketplace, and a new online MBA course, “Business Applications of AI,” will equip students with knowledge and skills to evaluate, implement and manage AI initiatives in business practices and strategies.
Nat Irvin, assistant dean for thought leadership and civic engagement in the College of Business, believes the use of generative AI in business is just beginning.
“We’re just getting started. We’re in the wave, but we’re at the early part of the wave,” he said.
Irvin jumped into that wave with both feet, focusing his popular “Managing the Future” class in early 2023 on using generative AI in business management and asking his students to use ChatGPT to create a health care system that solves the problem of cholera in the year 1854.
“I put it in my class immediately,” Irvin said. “The question for me is, how do I design a learning experience that will force you to think critically, now using this new tool? When you ask it to help you explore ideas that you would have never considered, you suddenly have at your disposal unlimited possibilities.”
Addressing concerns
Alongside the possibilities, concerns about generative AI extend beyond academic dishonesty and fabricated information to data privacy, the ecological cost of the vast computational power required to run the models, and systemic bias inherited from the data used to train them.
Irvin believes some of those concerns will be resolved in the private sector but that universities also have a role.
“The problem of whether or not something was written by AI is going to be solved in the private sector because it benefits the market to be able to determine whether something is authentic or not,” Irvin said. “The ethical issues are huge, of course, and institutions like ours will have to deal with that with policies and procedures.”
In August, the Office of the Provost created a university committee to examine how generative AI should be used and taught at UofL. While the initial mission was to address academic dishonesty in assignments and research, there also is a desire to ensure students are prepared to use the technology in the real world.
“Our goal is to explore how AI can enrich learning experiences and empower our students and faculty to be at the forefront of technological advancements, while maintaining the highest standards of ethical conduct and academic integrity,” said Beth Boehm, vice provost for graduate affairs, who co-chairs the committee with Fernandez. “We are leaning away from the idea that you should ban it, because we don’t think that’s possible, and leaning toward the notion that you should teach it and be honest and open about it and ask your students to be honest and open if they use it.”
Irvin believes that in time, using generative AI will be as common as using a calculator.
“We’ll have changes in the way we think, what’s important to know, what’s important to memorize,” Irvin said. “The idea of not using generative AI – that’s over. We will use it and we will adapt and we will never look back.”