AI & the Liberal Arts
Thomas VanDrunen, Ph.D., Associate Professor of Computer Science
I attended the CACE faculty seminar "AI and the Liberal Arts," held May 10–12, 2023; it was the first CACE faculty seminar I have attended. The seminar was particularly timely for me for two reasons. First, I am coming off teaching one course related to this topic (CSCI 381 Machine Learning, S23) and preparing to teach another (CSCI 384 Computational Linguistics, F23). Second, I am chairing the committee planning the Spring 2024 Science Symposium, which is on machine learning and its multidisciplinary applications.
One takeaway for me was how differently I approach this topic compared with my colleagues. Many seminar participants were interested in questions about what intelligence is, under what circumstances we could attribute sentience to a technology, and what it means for AI-generated art to be "soulless." As someone who teaches machine learning from the inside, I view these technologies as tools built to perform a function, and I am not inclined to call them intelligent in any ordinary sense of the word. I even avoid the term "artificial intelligence," since in my mind AI does not form a cohesive methodological category within computer science.
One point in the seminar where this difference surfaced was a discussion of the portions of Larson's book on the limitations of machine learning: tools are trained for a single task and hence make no progress toward artificial general intelligence. My response was, why is that a bad thing? A self-driving car doesn't need to be able to write an essay. Seeing this difference in perspective was very useful to me.
The discussions of how ChatGPT and similar technologies will affect our teaching were also useful. Here, too, the difference between my experience and that of my colleagues in the humanities and social sciences was striking. Most of the other participants were grappling with ChatGPT's ability to write passable papers and essays. My courses involve little writing, and the writing students do carries little weight in their grades. My students' work mostly consists of code and proofs, which are also the most important assessable artifacts in my courses. Accordingly, my big concern is how well ChatGPT produces code and proofs, especially the short programs and basic proofs assigned in my introductory and even mid-level courses.
Finally, I come away from this seminar convinced of the responsibility my students and I have to explain how "generative AI" technologies work. My role in teaching the computer science courses in machine learning and natural language processing is to ensure that my students understand these technologies at a deep level and can use that understanding as they contribute to the campuswide discussion of what these technologies mean for society, especially for Christian engagement with ethical questions and our understanding of the image of God.