
Brian Cantwell Smith on what alarms him – and what doesn't – about AI

'We need to be cautious about our enthusiasms, rein in our exaggerations of what AI is going to do, and dig a lot deeper,' says Brian Cantwell Smith, the new Reid Hoffman Chair at the Faculty of Information (photo by Meg Wallace Photography)

As holder of the new Reid Hoffman Chair at the Faculty of Information, Brian Cantwell Smith thinks, writes and speaks about the nature of artificial intelligence (AI) and its impact on humanity.

Smith, who has taught in science and engineering departments as well as in the humanities and social sciences, has a rare multidisciplinary perspective. At a time of unprecedented development in the field of AI, he is well positioned to help lead the global discussion of how to manage AI ethically and in a humanistic way.

Smith speaks to Ann Brocklehurst about his new role and where the AI discussion is headed.


Is it fair to say that you believe it’s not just the public that has misconceptions about AI, but scientists and experts as well?

I think that all of us – scientists and experts as well as the general public – need a deeper appreciation of the stuff and substance of various aspects of human thinking and human intelligence. So that is one thing I want to do: bring attention to the gravity and the stakes of the development of AI, and to the incredible accomplishment humans have wrought, over millennia, in developing our ability to be intelligent in the ways that we are.

Many of those accomplishments remain a long, long way ahead of anything the technical branches of AI have even taken stock of or envisioned, let alone accomplished.

You seem very non-alarmist when you talk about AI. What makes you so calm?

It’s not that I’m not alarmed. But I’m not alarmed by what some people are alarmed by.

I think there are two parts to the general alarm that people have. One is that AI’s going to be bad – it’s going to enslave us, it’s going to divert all our resources, we’re going to lose control, etc. That’s a late-night, horror-movie kind of worry – an “Oh, it’s going to be awful.”

There’s another form of alarm people have, which is “Goodness, AI’s going to best us in all sorts of ways, do what we do better, replace all of our jobs, and take over everything that’s special about us.” That doesn’t require AI to be evil or bad, but it is still a threat, in that it challenges our uniqueness and our compass, and suggests that what we humans do will be encroached upon by AI. I don’t think that second worry is entirely empty.

Could you elaborate a bit on the concerns you do have about AI?

My overall concern has to do with whether we are up to the task of understanding, realistically and without alarm, what these systems can and can’t do – what they are genuinely capable of, on the one hand, and what they are not authentically capable of, on the other, even if they can superficially mimic or simulate it. I am concerned about whether we will be able to determine those things – and orchestrate our lives, our governments, our societies, our ethics in ways that accommodate these developments appropriately. I think this is a huge challenge, with all kinds of dangers.


This leads to a bunch of specific worries. One is that we will overestimate the capacity of AI, outsourcing to machines tasks that actually require much deeper human judgment than machines are capable of. Another is that we will tragically reduce our understanding of what a task is or requires (teaching children, providing medical guidance, etc.) to something that machines can do. Rather than asking whether machines can meet an appropriate bar, we will lower the bar, redefining the task to be something they can do.  A third and related worry, which troubles me a lot, is that people will start acting like machines rather than doing anything that we would historically have considered worthy of being labelled human. I feel as if we can already see that happening.

Do you see the current discussion about AI as being too polarized or extreme?

Yes, you certainly see extreme views in both directions – doomsayers and triumphalists. Either it’s all going to be terrible, or it’s all going to be wonderful. Very rarely do such wholesale proclamations prove to be the deepest and most enduring views.

I’m particularly concerned that many of the people who have the deepest understanding of what matters about people and the human condition have only a shallow understanding of the technology and its power. And vice versa – those who have a deep understanding of the technology often have a shallow understanding of the human condition. What we need is a deep comprehension of both.  It’s as if we are at (0,1) and (1,0) on a graph, when we need to be at (1,1).

We need to be cautious about our enthusiasms, rein in our exaggerations of what AI is going to do, and dig a lot deeper. My primary caution is not about the technology itself, but about our interpretation of the technology. That’s where I think we need neither enthusiasm nor panic, but cautious, serious, reflective deliberation. That’s my real focus, what I think urgently needs attention: our understanding of what's happening, our understanding of who we are, our understanding of how we want to live in the new age.

Ideally, how would you like the discussion to proceed?

If we frame the debate as people versus AI, we’re sunk. That’s not an adequate conceptual frame for taking stock of what’s happening, or for foreseeing the future. Rather, we need to set aside the whole people-versus-machines dialectic and figure out what kinds of tasks require what kinds of skill. When we figure that out, then we can say, “Well, here’s a situation. What’s the best way to bring to bear the requisite kinds of skill? What arrangements and combinations of people and machines can best provide that sort of skill? Calculate pi to a million decimals? Clearly a machine. Teach ethics to schoolchildren? Obviously a person. Read an X-ray? Tricky. It may soon be that the best strategy will be for an AI to do the initial classification and pattern recognition on the image, but for a seasoned MD to interpret its consequences for a lived life and recommend a compassionate treatment strategy.” The point is that, as machines start to be able to do certain things better, we should include more of them appropriately in the mix.

The debate, in other words, shouldn’t be about whether people can or should do X, Y or Z, or about whether AIs can or cannot do X, Y or Z. What we need to figure out is what it takes to do X, Y and Z – if indeed X, Y and Z should be done at all. We shouldn’t presume that people necessarily do X, Y and Z well now. Not everybody who does these things at the moment necessarily does so to a high standard (driving is a great example). If we want those things to be done to the highest possible standard, we should figure out how that can happen – using the best combination of people and machines.

Something else. Maybe we can use the emergence of AI to up the ante on people. Let’s leave to the machines what they can do, set those things behind us, and raise the standard on the parts that require people – the parts that require humanity, depth, justice, and generosity. 

Are you optimistic as you embark on contributing to this discussion?

Like many, I am sobered by the present state of the world – not just AI, but the context in which it is emerging. But I think there are some encouraging signs … even about AI. Just yesterday, somebody sent me an article from the BBC asking whether an AI can have a soul. There are lots of articles of this sort around. Few are inspiring; in fact many are pretty vapid. Still, I think their appearance is good, because it shows that serious issues are being foregrounded in the public sphere, in the public imagination. So I take some optimism from the fact that there is energy for having serious deliberations that are both technically sound and serious about the human condition. That is great, and something we need more of. That is something I want to participate in.

As for the details, I think it’s too early to say. 

Brian Cantwell Smith will hold the Reid Hoffman Chair until Aug. 31, 2024.
