Better AI and Better Humans
Enoch Hill, Ph.D., Associate Professor of Macroeconomics
You know what? I really like checklists. It’s surprisingly rewarding to document the accomplishment of discrete tasks that I’ve arbitrarily identified as worthy of inclusion on my daily list. I sometimes even add already completed items to my checklists so that I can cross them off immediately. I’ve already accomplished 12 tasks today. I suppose if you count this opening paragraph, I could call it 13. Yes, 13 is a big number; I’m really productive. If I don’t reflect too deeply, all this action provides a simulacrum of meaning. It feels good to be so empirically productive.
Of course, this is pretty silly. My meaning and worth are not a function of how many self-defined tasks I’ve accomplished. Then again, it turns out that defining and attempting to measure things like meaning or worth is really challenging.
Earlier this month, I spent the better part of three days learning about and reflecting on AI with a wonderful group of colleagues at this year’s CACE seminar. Maybe I’m hopelessly anthropocentric, but for me, the core question of the seminar was: what makes us human? I mean, if something is on the verge of general intelligence¹, of being able to do everything we do…but better…then it’s natural to ask why we matter and what makes us who we are.
We spent a portion of our time on the last day of the seminar reflecting on what it means to be a liberal arts institution. The version of the answer to that question I resonated with most deeply might be stated as “a liberal arts education is training in human formation.” By human formation I mean things like virtue formation, character formation, Christian formation, etc. Understanding what makes us human is particularly important if our job is to help train people in becoming more human.
I’m not sure I have an answer, but it feels pretty human (and valuable) to reflect on the question. Here are some thoughts.
Looking back at my list of tasks from today, I’m not sure any of them couldn’t be completed by an AI program (or some other combination of technologies). If I am defined by what I accomplish, then maybe AI are humans too.
The computer scientist among us suggested that ChatGPT could already score a high A on his exams. However, he still thought it worthwhile to teach students coding and logic. Even if someone (or some algorithm) can do what you’re doing (but better), there may still be value in doing it.
Playing with and discussing the current manifestations of AI, it seems there is quite a bit that I assign in the classroom that could be competently completed by an existing technology. While there are some concerns, such as citing nonexistent texts or attributing poetry to the wrong author, the product of our playful interactions with AI often appeared to be at or above the quality of the median submissions I am accustomed to receiving.
Fortunately, it has always been the case that the experience of the tasks, rather than their completion, was the point of schoolwork. Almost all of the homework I’ve completed as a student or assigned as a teacher has been discarded. The purpose of the work done in most courses is the effect it has on the student. Could it be that the answer to what it is to be human is more about who we are than about what we do? Maybe my checklists are important, but more because of how they form us than because of what we’ve accomplished.
Consider physical training. I try to work out several times a week. Now imagine I had a robot that I programmed to run around a track. Maybe the robot could complete the task of my running for me. But of course, that would be silly. I don’t run to check off completion of the loop. I also don’t run because I’m better at running than everyone else. Rather, it’s the way running affects me, the way it forms me, that motivates me to do it.
Finally, the question “what makes us human?” is important because it helps identify which tasks we gratefully pass on to AI tools and which we retain for ourselves. No matter how well AI can sing a song, I don’t want to outsource praising God. No matter how cleverly and thoughtfully AI can converse on interesting topics, I don’t want to outsource the relationship I have with my wife. And even if AI figures out the optimal algorithm for child rearing, something core to who I am feels wrapped up in parenting my children.
I’ll end with two of my takeaways on what AI means for the liberal arts.
First, AI advances illuminate the relevance of the liberal arts. When more and more of what we do can be replicated by a machine, it starts to feel less relevant how well a certain educational program can train you to complete a particular task. Finding meaning in efficiently completing tasks on a checklist loses significance when many of those tasks could be completed in seconds by a fancy computer program. Instead, who you become in the process of your education, how your education forms you, feels more clearly central. This strikes me as something the liberal arts tradition is particularly well suited for.
Second, I have no desire to become an expert in policing AI cheaters. It’s a battle that already appears overwhelming, and I don’t anticipate it will become easier over time. On the other hand, AI is here, it is accessible to our students, and I don’t think the correct course is to pretend it doesn’t exist. Rather, AI advances more clearly illuminate the need to cast a compelling vision for the importance of the tasks we set for our students. Perhaps in the process, we’ll discover that some of the things we are doing are not all that important. But other things are clearly worthy of our efforts, even when they can be more effectively completed by technology. Just like my example of a robot running in my place, I’m inspired to call my students to look to the reason they are in my class and to view the efforts they expend as worthwhile because of who they will become in the process.
To conclude, the CACE seminar was a wonderful experience. Hands down, the best part of the time was interacting with my colleagues and getting to know them better, in person, face to face. They’re a wonderful group, and as far as I’m aware, all of them are human.
________________________________________________________________________________
¹ Of course, the author of the core text we read argues that we are nowhere near artificial general intelligence, and that it’s categorically different from the accomplishment of recent advances in machine learning.