The Next Steps for Responsible AI – Sponsor Content


The past 18 months have seen the rapid worldwide adoption of artificial intelligence by workers, scientists, entrepreneurs, doctors, and dreamers. I spoke with Kent Walker, president of global affairs at Google and Alphabet, about the most important issues that we face in harnessing this new technology. How do we keep AI development vibrant, and more importantly, how will AI assist in human development?

This conversation has been edited for length and clarity.

Martin Ford During the past year, most of the discussion around artificial intelligence has centered on chatbot systems such as Gemini. What do you think is the most overlooked aspect of artificial intelligence?

Kent Walker That chatbots only scratch the surface of what this technology is capable of. AI is not just a scientific breakthrough; it’s a breakthrough in how we make breakthroughs. Look at the example of AlphaFold, the protein modeling technology, where we’re making generational advances in the tools used by medical researchers around the world. This is just one example of many where AI will accelerate scientific progress, whether in materials science, new forms of energy, quantum computing, health care, or water desalination.

Ford One of the tangible benefits of AlphaFold is its catalog of predicted structures for nearly every protein known to biology. I’m not sure chat systems have produced anything comparable. Do you think there’s a danger that we’re overhyping chat systems and not giving enough attention to other important innovations?

Walker Yes, I think there is a framing challenge, and it’s natural. Not everyone can be a biology Ph.D. researcher advancing the frontiers of science. But because chatbots are so accessible and tangible, anyone can interact with them and feel as though they’re experiencing artificial intelligence. However, the work being done by thousands of medical researchers worldwide will likely have a more foundational impact on how we make progress against diseases like cancer or develop personalized medicine. From a regulatory perspective, it’s important not to think of AI as social media 2.0 and fight the last war. Instead, we should think of this technology primarily as a significant scientific breakthrough, such as mRNA vaccines, that has broad applications for improving human welfare.

Ford What trends do you see emerging in the regulation of AI? Are there any developments that concern you and could potentially hold back the technology?

Walker As I mentioned in the last issue of Dialogues, AI is too important not to be regulated but also too important not to be regulated well. We’ve seen various governments and private actors around the world take significant steps and generally think constructively about this issue. There’s a concept known as the Collingridge dilemma, from a book called The Social Control of Technology by David Collingridge, which points out that it’s possible to regulate a new technology too late, once it’s already locked in, but it’s also possible to regulate it too early, before understanding its full potential and risks. The challenge is finding that sweet spot where you regulate a fast-moving technology in the right ways, recognizing its potential for scientific progress while also considering risks like discrimination, unsafe applications, or labor displacement. That’s an important balance to strike.

Our suggested framework for good AI regulation is that it should be FAB: focused, aligned, and balanced. Let me break those down.

Focused means recognizing that AI is a general-purpose technology, more like electricity than a special-purpose tool. We don’t have a single agency for all uses of electricity or a single law governing all engines in society. Instead, we approach it case by case. The issues in banking will differ from those in health care or transportation. Traditional regulators in these areas have years of expertise, but they need to become familiar with AI so they can understand the new challenges in their domains. Every agency needs to become an AI agency.

Aligned speaks to the need for coherence among different groups working on these issues—whether it’s the G7, the United Nations, the United States, or other countries. While regulations don’t need to be identical, they should be broadly consistent to allow the wider adoption of these tools, including in the Global South and developing countries.

And by balanced, I mean evaluating the relative risks of different kinds of applications. The use of AI in search results is different from the use of AI in high-risk applications such as health care or transportation. You need different regulatory regimes for these different settings. It’s also important to remember that high-risk applications often bring high value, such as the potential to save lives in medicine. Even as we regulate these areas, we want to ensure that we’re not slowing down progress in saving lives and solving problems.

Ford In some areas, such as self-driving cars, there is a department of transportation that is well-equipped to handle regulation. The same goes for medical or financial sectors. But there are areas where gaps exist, and some issues that are entirely new to AI, like disinformation … there isn’t a clear regulatory body for those problems.

Walker I think that framework is exactly right—you should be looking for gaps in existing laws. The starting point for analysis should be that if something is illegal without AI, it’s probably illegal with AI. This covers many of the concerns about fraudulent applications of AI. However, there are new questions of degree, as with content moderation. It’s always been against Google’s policies to host manipulated media, such as deepfakes created by slowing down videos to make someone look drunk or editing videos in misleading ways. AI enhances this kind of manipulation and scales it up, raising the question: When does the risk become significant enough that we need new laws, and when is it better and more consistent to extend the enforcement of existing laws?

Ford Are there examples of governments around the world that are doing a particularly effective job of regulating AI—places that might serve as a role model?

Walker Yes, Japan has been a leader in this area. Japan faces a significant demographic challenge, and it has more robots per capita than any other country because it needs to increase productivity. The Japanese view AI as powering a new generation of robotic assistance that can apply to a variety of jobs to improve quality and reduce costs. Singapore has also been very forward-looking, focusing on understanding new technologies before imposing formal rules. It is clarifying existing laws, introducing flexible frameworks and pro-innovation policies like copyright exemptions for AI training, and working with companies to promote AI adoption.

Ford One criticism of significant AI regulation is that large companies like Google can afford the burden of compliance, whereas start-ups might find it more difficult. This could create a regulatory moat that protects established players and stifles innovation. Is that a legitimate concern?

Walker Yes, we have to be very careful about that. We’ve seen examples, like some of the data legislation in Europe, where regulatory barriers to entry have made it harder for smaller companies to flourish. It’s important to be mindful of the impacts of regulations not just on large companies but on smaller ones as well. Each generation of technology creates a new opportunity for new companies to emerge. Regulators must consider not just their specific sectoral concerns but also the wider need to promote innovation, productivity, and global competitiveness.

Ford One of the most difficult issues facing artificial intelligence right now is the use of copyrighted material to train AI systems. It wouldn’t surprise me if the Supreme Court eventually has to make a ruling on this. How should regulators and legal scholars approach this problem?

Walker It’s a critical issue. We wouldn’t have the AI breakthroughs that we’ve had without the concept of fair use, which allows for the transformative use of publicly available information. The challenge is to figure out how to appropriately compensate creators contributing material value without creating a blanket restriction on the use of online information in AI models. We’re working with publishers to better understand the value of different types of content in model training and operation. I’m hopeful we can find a way to recognize contributions while continuing AI development.

I also think it’s important to focus on regulating outputs rather than inputs. The question is whether a given image or text infringes on someone’s rights, not how it was created, whether with a pencil, a computer, or AI. It’s about the real-world impact of these tools.

Ford How do you think AI will impact the job market and the nature of work?

Walker While we know it will, it’s much harder to project precisely how. It may change not just how we work but how we prepare for work. We might see a shift from a single four-year degree to multiple shorter, skills-focused programs spread throughout a career. AI generally automates tasks rather than whole jobs, so the impact varies by job. We’re trying to democratize access to AI tools through initiatives like our AI Essentials course, which helps people learn how to use these tools effectively.

Ford But is AI just another tool, like the typewriter or spreadsheet, or is it in a class by itself—something that will be super transformative?

Walker Amara’s Law says that we tend to overestimate the impact of a new technology in the short term and underestimate it in the long term. Right now, people are using AI to streamline existing tasks—saving costs. The next step will be to increase revenues by doing things we couldn’t do before. Over a decade, we’ll likely see broader applications that change how we think about job requirements and the fundamentals of knowledge work.

In the words of Andrew McAfee, our inaugural Technology & Society visiting fellow, general-purpose technologies like AI tend to “reduce demand for some skills, increase demand for others, and create demand for entirely new ones.”

Remember that 60 percent of the jobs that we have today didn’t exist 80 years ago. As we saw with the advent of electricity or the internet, we should expect to see new categories of jobs on the horizon—jobs that we can’t yet predict.

Ford Do you think AI could lead to significant job displacement in some industries, like call centers?

Walker A study we recently commissioned estimates that around 61 percent of jobs will be augmented by generative AI, with 7 percent transitioning over the long term. That’s why it’s so important that the public and private sectors work together to lay the groundwork for AI-driven job evolution.

You mention call centers, and that’s a place where we’re actually seeing higher employee satisfaction because of AI. Call agents spend their days pattern-matching—identifying underlying problems and thinking about the most common solutions. With AI in the mix solving the more routine problems, agents have more time to focus on the most complex requests.

By way of example, Google Cloud has been working with partners at Discover Financial Services to build generative AI into Discover’s call centers, training and tuning our LLMs with a set of frequently asked questions. Early results are incredibly promising, with agents reporting that they’re more productive and able to significantly reduce the time it takes to get to the root of customer issues. It’s a new way of thinking about customer interaction—one that not only improves customer satisfaction but also has the potential to spark completely different ways of working.

Ford Do you think traditional strategies like retraining will be enough to address AI’s impact on the workforce, or will we need to consider more radical policies like universal basic income?

Walker It’s an open question, although I tend to think people will always find meaning and value in work. We’re investing in initiatives to explore this issue. We have our Global AI Opportunity Fund, which will invest $120 million to make AI education and training available in communities around the world, and collaborations like the one we have with MIT RAISE to offer no-cost AI courses to educators. It’s early in the development of this technology, and we want to contribute to finding solutions.

Ford Do you use AI in your work, and how has it changed things for you?

Walker I do! Right now I’m using it to distill long articles into key bullet points, which is like having a personal summarizer. I’m also working on a project to have an AI read all my emails from the last 18 years at Google and then learn to respond in my voice, helping me to more quickly create first drafts that I can then refine and improve. The more AI takes on the routine stuff, the more time I will have to focus on the nonroutine and rewarding parts of my job. And I think, like me, many people are going to appreciate the ways AI frees them up to dig into the sticky questions, think about what’s next, and focus on all the fun stuff that makes their job feel meaningful.


