Author: Ellen Wagner

What Is Top of Mind for Higher Education Leaders about AI?


On June 27, 2023, Whitney Kilgore (iDesign) and Ellen Wagner (North Coast EduVisory) hosted a video summit with a group of US higher education leaders. The purpose of this summit was to learn more about how participants and their institutions were preparing for the immediate and longer-term changes as AI makes its way into higher education.

Our discussions focused on the impact of generative artificial intelligence, including ChatGPT. We invited 23 people whom we identified as “leaders who can get things done,” 18 of whom joined the discussion. Participants came from a variety of US institutions and associations, and included professional staff, research faculty, and university administrators. Some serve as department chairs, others run Centers for Teaching and Learning, and others serve in central institutional administrative roles. Their names and institutional affiliations can be found in our Summary Report.

We wanted to learn more about what these participants were thinking and doing as we all ease into the transformative cycle that generative AI is unleashing. While we appreciate that “Hype Cycles” capture excitement about new innovations, we also acknowledge the cyclical nature of emergent trends as they peak, flame out, and morph into whatever they ultimately become. In that respect, this is likely to be a long and winding road, with various stops and starts along the way toward AI enlightenment. Here is what participants told us was top of mind:

  1. We need to learn to sort through the “noise” related to Generative AI
  2. How can we apply new ideas/tools to advance the student experience and to support faculty?
  3. How can we harness AI to improve the operations of the team, and to do the monotonous work that teams don’t enjoy doing?
  4. We need to develop Generative AI literacy. We can tap into Generative AI for improving STEM learning.
  5. What does this mean for faculty research, what does this mean for using Generative AI in instruction, and how do we teach students how to use it effectively?
  6. How can GenAI help with upskilling/reskilling, in addition to supporting faculty and students?
  7. Using Open Educational Resources and GenAI for creating specialized learning activities 
  8. Information about how others are approaching the use of LLMs and ChatGPT in classrooms and research.
  9. How are we rethinking curriculum? Our responsibilities change – what are the KSAs (knowledge, skills, and abilities) that we should focus on? How do we support students in making good decisions related to the use of these tools?
  10. How can we leverage this so that faculty are more comfortable with it? 
  11. Where are the hurdles that everyone else is facing?
  12. The role of policy: building a bill of rights to assist faculty and campuses.
  13. Discrimination/Bias/Marginalized populations – how are they impacted?
  14. What happens when we put copyrighted work into ChatGPT to get a summary?
  15. How can these technologies allow us to improve society and our students’ learning experiences?
  16. Can we address the needs of the students who have some college but no degree? 
  17. Where can we find guiding principles? 
  18. We are interested in contemplating the nature of content in the future. We want to consider the efficacy of learning experiences for learners, and to rethink the role of the institution (assessment, resource utilization). Where do we lean in, and where do we rethink content and processes?
  19. How do we serve neurodiverse learners? We are looking for shared promising practices, and for possible collaborations related to research and teaching. We need to tap into creativity rather than just focusing on doing more.
  20. Can we help faculty separate the content from the skills that students need to learn?

Where to Begin? 

What are some of the big things on our collective professional horizon where AI is concerned? We identified the following “Optimization Opportunities”:

  • The entire Teaching/Learning dynamic will need to be revisited, with particular attention paid to assessment
  • Learning Design in its various forms will need to consider strategies that scale with AI
  • Process improvement, including addressing accessibility and inclusion
  • Research methods
  • Institutional Operations

Megan Raymond shared a report from WCET highlighting members’ survey feedback about their adoption of AI. In summary, institutions do not have policies and are still reacting to the “AI = cheating” narrative. Institutions are looking for examples of how others have begun to navigate this space and how effective policies and practices can reduce the noise. https://wcet.wiche.edu/frontiers/2023/07/20/ai-campus-adoption-new-report-from-wcet/

John Orr shared the University of Miami’s Navigating AI resource:

https://www.dli.dcie.miami.edu/educators/navigating-ai/index.html

UCF AI resource was provided by Tom Cavanagh:

https://fctl.ucf.edu/technology/artificial-intelligence/

How does research fit into the priorities? Marcia Lovett gave us great examples of using both quantitative and qualitative data about the student experience to adapt courses at Carnegie Mellon University, where cognitive models are being developed so that the tools and courses can be improved.

What Issues Must be Addressed as Institutions Get Started with AI?

  1. What is Cheating? 

Assessments and assignments will need to change. How much support is too much support? How will institutions ensure fair oversight of classroom practices related to AI if there are no institutional policies?

We are also interested in how K-12 education will change in terms of foundational learning. With in-class, active learning designs, will younger students do more intensive building of foundational writing and critical thinking skills before they get to college?

  2. Role of Ethics

Ethics don’t manage morality, and they don’t establish law. Ethical frameworks do, however, establish and articulate the values of communities and cultures. We are likely seeing a lot of virtue signaling about what will be “allowed.” Considering how much of higher education’s existence is predicated on the biased decision-making that we discovered as predictive analytics were being introduced into institutional student success initiatives in the 2010s and early 2020s, perhaps we can look to AI to offer ways to find and confront our less optimized operational traditions.

Karen Vignare suggested using this opportunity as a time to push on ethics and equity, as did Patrice Torcivia Prusko. Karen noted that most people hired into institutions of higher education are there because they believe in the power of education to transform lives…yet we’ve built an inequitable system, or systems. The pervasive view that AI will only make this worse strikes us as unnecessarily negative. Machine learning will find patterns we have never seen before, and through generative AI we will have tutors on demand as soon as student struggle is quantifiable. Those are genuinely positive opportunities to become better.

  3. The Human in the Loop: AI is built using math: think of applied statistics on steroids. Humans will be needed more than ever to manage, review, and evaluate the validity and reliability of results. Curation will be essential. 
  4. We will need to generate ideas about how to address AI factors such as privacy, equity, bias, copyright, intellectual property, accessibility, and scalability.
  5. Have other institutions experimented with AI detection, or held off on emerging tools in this area? We have recently adjusted our guidance and paused some detection tools, given the massive inaccuracies in detection (and the related downstream issues in faculty-elevated conduct cases).

What Will the Road Toward AI Enlightenment Look Like? 

Will AI integration look more like a “Moonshot”? Or a “Road Trip”? Past experience suggests that we will need to frame the transition toward AI as more of a “moonshot,” or nobody will think it is important enough to pay attention to. Without that motivation (and a demonstration of executive commitment), it will be harder to get people to engage, and it will be impossible to keep people focused on the steps needed to implement changes for the time those changes take.

However, every one of us who has led an institutional change effort knows that things don’t change until the work gets started, and refined, and re-articulated. Stakeholder groups need to be identified and empowered.

Even though we learn repeatedly that innovation has a lot to do with effective project management and a solid message that helps people understand what they can do to implement change, people really need innovation to be more exciting and visionary than that. This is where we all need to help each other stay the course of change.

Institutional change generally comes from small steps taken over time that successfully demonstrate how to do things in the new world order that AI is undoubtedly going to provoke. How are we going to be able to generate “Moonshot enthusiasm” for being persistent about changing things deliberately, step by step? Is it possible to make “Sustaining Incremental Change” an important institutional value? Where will educational enterprises drive the most value with AI solutions? Conversely where will AI push institutions? Where will AI create the greatest risks? What will those risks be? How will institutions prioritize?

Next Steps

  • We should plan to offer examples of (small and big) impressive uses of AI in the wild. 
  • We keep coming back to our change and technology adoption models. Do we need to revisit them? What about new business models and revenue models?
  • Examples of institutional model policies would be extremely beneficial to our communities.
  • Use cases are helpful. Examples will help build a case for guiding change. These will require data examples.
  • Everyone needs to see themselves in the story that we tell about the path forward. A story framework is a first step in understanding how to articulate the vision, giving people a chance to see themselves in it, and then giving everyone the steps to make the vision come to life at their institution.
  • We need to be great at telling these stories to be good transformation partners. We’ve had fake moon landings. People with the capacity to change and respond will do the moonshot. Sometimes fast is easy and slow is hard.
  • We closed with “Tom Cavanagh’s Grenade”: Generative AI can almost write a course now, use DALL-E to create the imagery, and then have Morgan Freeman “teach” it using an avatar generator. How do we maintain the humanity and keep the human front and center? 

Why These Discussions are Important

Conversations like this create opportunities to brief each other and to learn what others are doing, so that we can all better advise and support our leaders and colleagues as they develop their visions and move the conversation forward.

Roundtable Participants

Tonya Amankwatia, North Carolina A&T; Tom Cavanagh, University of Central Florida; Suzanne Dove, Bentley University; Susan Elwood, Texas A&M University – Corpus Christi; Paul Huckett, Johns Hopkins University; Connie Johnson, Colorado Technical University; Whitney Kilgore, iDesign; Kinshuk, University of North Texas; Lin Lipsmeyer, Southern Methodist University; Marcia Lovett, Carnegie Mellon University; Rolin Moe, University of California Office of the President; Matthew Mooney, University of Utah; John Orr, University of Miami; Megan Raymond, WCET; Patrice Torcivia Prusko, Harvard University; Karen Vignare, APLU; Ellen Wagner, North Coast EduVisory; James Walker-Myers, Southern New Hampshire University.

#Fin Aug 30, 2023

Ellen Wagner
