Can AI change life for the better?
A UBC prof is cautiously optimistic.
By Richard Littlemore, UBC Magazine. May 24, 2024.
Kevin Leyton-Brown is an optimist about artificial intelligence (AI), but he doesn’t present himself as such. On the contrary, the professor of computer science and director of UBC’s Centre for AI Decision-making and Action (CAIDA) assumes a position that is deeply philosophical. When talking about the potential of AI as a force for good, he says, “You have to have humility. You have to avoid the colonialist impulse to say this is going to make life better.”
But when he starts talking about his own work advancing AI for social impact, for benefiting currently underserved populations, and even about the potential spillover effects of AI that has been developed for purely commercial purposes, Leyton-Brown keeps giving himself away. He’s pretty sure AI is going to make life better.
His philosophical urge seems to be hardwired. In the mid-1990s, finishing high school in the Toronto suburb of Richmond Hill after five years as one of those band kids (clarinet and guitar), Leyton-Brown says his first inclination was to study philosophy. But once at McMaster University, he cogitated on his prospects and switched his major to computer science, graduating with a BSc (philosophy minor) and enough academic acclaim to go to Stanford University, where he studied under computer scientist Yoav Shoham and collaborated with the Nobel Prize-winning economist Paul Milgrom.
Even there, Leyton-Brown maintained his philosophical bent. But instead of working directly on the meaning of life, he turned his attention to game theory, the mathematical study of how other people make meaning of life – or at least how they make strategy.
If you think of economics as a branch of math that tries to understand how people make decisions, the calculations are fairly straightforward when you’re only looking at one person, or at people who all share the same interest. Game theory steps it up to the dynamics of conflicting interests, which gets hard with two people and harder still with more.
But, Leyton-Brown understates, “The internet facilitates a wide range of interactions that are larger and more complex than traditional analysis can handle. My research extends game theory analysis to internet scale. It focuses on computational tools, auctions, and fast algorithms for solving hard problems.”
This is where you require the brute force – and, often, the colossal expense – of AI. As Leyton-Brown soon learned, you can get computers to beat chess grandmasters or to create large language models that write pretty credible essays. But it will cost you. A recent paper from Stanford University's Institute for Human-Centered Artificial Intelligence reports that, “OpenAI's GPT-4 used an estimated $78 million worth of compute to train, while Google's Gemini Ultra cost $191 million for compute.” Given that level of expense, Leyton-Brown says, “It’s clear that AI research is focusing on problems where there is money.” He also says, “It would be nice to put the same energy where there isn’t money.”
At least part of the time, that’s what he does, pointing out that it’s a privilege of being an academic: “You don’t have to work only on problems that are financially important for corporations.” In fact, he sees a moral obligation and a historic opportunity to leverage AI to benefit underserved communities, particularly in the developing world.
In addition to teaching a UBC graduate-level course about AI for social impact, Leyton-Brown has done a fair amount of hands-on work on what he calls “socially beneficial market design.” For example, during a sabbatical in Uganda, he noticed that a lot of subsistence farmers, and especially those in rural areas, were failing to sell their produce or having to accept a terrible price, while buyers elsewhere were overpaying or going wanting, and he thought, “We could do better.”
Working with