Transcript: Tech expert Ben Buchanan talks with Michael Morell on "Intelligence Matters"
In this episode of "Intelligence Matters," host Michael Morell speaks with Ben Buchanan, Assistant Teaching Professor at Georgetown University's School of Foreign Service and Senior Fellow at the University's Center for Security and Emerging Technology. Morell and Buchanan discuss the intersection of technology and statecraft, focusing on the potential effect artificial intelligence-driven technologies may have on the geopolitical dynamics among nations. Buchanan reviews some of the central questions surrounding AI, including how autocratic and democratic governments may leverage it and how offensive cyber operations may come to rely on it. Buchanan also shares elements of his forthcoming book, "The Hacker and the State."
HIGHLIGHTS:
- ON QUESTIONS SURROUNDING AI: "[T]he biggest question that remains unresolved is, what will the United States do, what will democracies do? Can we get AI talent to come to our shores, not just for education, but for work? Can we get AI talent to work in national security in a way that's consistent with their principles? Can we get a federal government that's often slow and bureaucratic to embrace this technology? Can we think about what this technology can and can't do in the context of long range strategic decisions?"
- ON THE ROLE OF CYBER OPERATIONS: "I think cyber operations are much more the domain of shaping. Nations, rather than bluffing at the poker table, are stealing aces and stacking the deck. And they're using these shaping operations well below the threshold of conflict, to change the state of play."
INTELLIGENCE MATTERS - BEN BUCHANAN
CORRESPONDENT: MICHAEL MORELL
PRODUCERS: OLIVIA GAZIS, JAMIE BENSON
MICHAEL MORELL:
Ben, thanks for joining us on Intelligence Matters. It's great to have you.
BEN BUCHANAN:
My pleasure, thank you.
MICHAEL MORELL:
I want to start, Ben, by asking what got you interested in this intersection between technology and national security?
BEN BUCHANAN:
When I was younger, I thought the coolest thing in the world was technology, and that was all I was interested in. And then 9/11 happened, the Bush administration happened.
MICHAEL MORELL:
How did that interest in technology play out?
BEN BUCHANAN:
Like every kid, I wanted to make video games, I wanted to do the dorky stuff. And then 9/11 happened, Bush administration happened, war in Iraq, and I thought, this technology stuff is fun, but it doesn't really matter. So I went hard into international affairs, studied Arabic, forgot about technology.
And remarkably, the two fields really came together, where international affairs started to be a domain in which technology really mattered. Not in the sense of video games and the like, but in the sense of cybersecurity, in the sense of artificial intelligence. And I'm very fortunate to see these two things, that have been animating my interests for a long time, come together in particularly complex and interesting ways.
MICHAEL MORELL:
So post 9/11, you deep dive into international relations and national security, and then you see the importance of technology. How did you get smart on all these technologies that are so important?
BEN BUCHANAN:
I was a Ph.D. student when I made this decision to focus, again, on technology. And the great thing about Ph.D. students is they have a lot of time. In the cybersecurity context, I think I read every single word of every single report put out by private sector companies. And one of the things that's remarkable about cybersecurity is there was just this wealth of information that the private sector was producing on Russian and Chinese and North Korean hackers.
And reading these technical reports was my education during the Ph.D., in how to think about these fields and this intersection. And eventually, in the latter part of the Ph.D., I started getting very interested in artificial intelligence, and thinking about how can artificial intelligence change not just the cyber game between nations, but more generally, the geopolitical game between nations.
MICHAEL MORELL:
So Ben, there's always been this intersection between technology and national security. If you think about the beginnings of warfare, gunpowder, and turning a church bell over and making it a cannon, and the Manhattan Project, the launch of Sputnik, the U-2 spy plane, there's always been this intersection. Is there something fundamentally different now about that intersection, compared to history? Or is it just a matter of degree, do you think?
BEN BUCHANAN:
It's probably a matter of degree, and it's probably also a matter of pace. I think what's remarkable to me, as someone who's read on the last 20 years of cyber operations and thought about AI, is that the developments are happening very quickly, and the developments are also happening in the private sector. And I think particularly when you talk about artificial intelligence, the key difference between it and the technologies you mentioned, like the Manhattan Project, is how much of the truly cutting edge research is happening not in government labs, but in private sector companies.
MICHAEL MORELL:
So as you know, I wanted to have you on to talk about AI and its impact on national security, but I wanted to mention that you have a new book coming out in the next couple weeks, it's called The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics. Can you take just a minute and tell us what the book's about?
BEN BUCHANAN:
My goal in writing this was to synthesize my thoughts from studying cyber operations for the last ten years, how and why nations hack each other, and to put it in sort of one-stop shopping. To understand how it is that nations project power, what cyber operations are good for, and what hacking can't do. So it's very narrative, each chapter is a different story of how nations project power in cyberspace. From cases we probably know a little bit about, like Stuxnet, to cases that we probably don't know much about at all, or certainly don't know many of the technical details about, unless you really go deep in this field already. Like the blackouts in Ukraine, or like encryption back doors between big nations.
MICHAEL MORELL:
Is there a main theme that comes out of it?
BEN BUCHANAN:
The theme, I think, is that too often in international relations scholarship, and also, to some degree, in policy, we look at cyber operations like they're nuclear operations. And we look at them like they're these tools for signaling between nations, the kind of signaling that animated the Cold War. And I don't think that's right. I think cyber operations are much more the domain of shaping. Nations, rather than bluffing at the poker table, are stealing aces and stacking the deck. And they're using these shaping operations well below the threshold of conflict, to change the state of play. And I think it's quite interesting that the scholarship, and maybe, to some degree, the policy, hasn't quite caught up with that.
MICHAEL MORELL:
So Ben, AI, artificial intelligence, maybe the place to start is by asking you what is it? Because I think a lot of people don't understand what it is. So in layman's terms, how would you describe AI?
BEN BUCHANAN:
The first thing we need to do when we think about AI is differentiate AI from its cousin, machine learning. And you'll hear both these terms used interchangeably, and they're not quite interchangeable. Machine learning is the current paradigm for AI. We've had other ones in the past, but machine learning shows the most promise, and that is quite simply, using machines to learn from data. And this inverts our previous paradigm of computer programming, where we give computers very clear instructions.
In machine learning, we tell the machine, using an algorithm, how to learn and we give it data, from which to learn, and we give it computing power that enables this learning to happen. So the three parts of an AI system these days are the algorithm, the data and the computing power.
And we've seen remarkable advances in each of those three parts over the last ten years or so. And frankly, that's why we're here, that's why we're talking about AI, because in this machine learning paradigm, the data, the algorithm and the computing power have gotten much better, and have let us do things that maybe ten or even five years ago we didn't think were possible.
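[To make the contrast Buchanan describes concrete, here is a minimal sketch, illustrative only and not from the interview; the temperature-conversion rule, the training data, and the gradient-descent loop are assumptions chosen for brevity.]

```python
# Traditional programming: the programmer supplies the rule directly.
def f_to_c(f):
    return (f - 32) * 5.0 / 9.0  # explicit instructions

# Machine learning: we supply an algorithm (here, gradient descent),
# data (example inputs and outputs), and computing power (the loop),
# and the machine recovers the rule from the data on its own.
data = [(f, f_to_c(f)) for f in range(-40, 101, 10)]  # training examples

w, b = 0.0, 0.0           # model parameters: start knowing nothing
lr = 1e-4                 # learning rate
for _ in range(200_000):  # "computing power": many small updates
    for f, c in data:
        err = (w * f + b) - c  # how wrong is the current guess?
        w -= lr * err * f      # nudge the parameters to be
        b -= lr * err          # slightly less wrong

print(f"learned rule: c = {w:.3f} * f + {b:.3f}")  # approaches 0.556 * f - 17.778
```

Run as-is, the loop converges to roughly the hand-written formula, which is the point: the rule comes from the data, not from the programmer.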
MICHAEL MORELL:
So a basic calculator, would that be considered AI, or a kind of crude form of AI?
BEN BUCHANAN:
There's an old joke that once something starts working, we stop calling it AI and start calling it software. So there probably was a time when we thought a calculator would be really intelligent, but it certainly doesn't work with machine learning; it doesn't work with this modern paradigm of giving computers the data and telling them how to learn from that data. A calculator, or any kind of basic math computer program, is the traditional linear model of computing.
MICHAEL MORELL:
So give us a non-national security example of AI.
BEN BUCHANAN:
Probably the most famous example is what a company called DeepMind did in 2016 when they beat the world champion at Go. And what's remarkable about Go is how complex it is.
MICHAEL MORELL:
This is the Chinese--
BEN BUCHANAN:
The ancient Asian board game. How complex it is relative to other board games. So there are more possible combinations on the Go board than there are atoms in the universe. In fact, there are more possible combinations on the Go board than total atoms, if every atom in our universe had a universe of atoms within it. And if you add all those up, there are still more possible combinations on the Go board.
MICHAEL MORELL:
That's amazing, actually, to think about.
BEN BUCHANAN:
It's just a remarkable number. And what this means is that you can't calculate in Go. You have to intuit the right answer, because there's so many possibilities you can't calculate through. And I think for a long time we thought this was something that only humans could do, only humans could have the sense of feeling the right answer, intuiting the right answer.
And what DeepMind showed in 2016 is that machines can do it too, or at least machines can mimic it. And that was a remarkable success for pushing forward in what AI can do, and I think really put this on the map for a lot of people. Not least the Asian countries like China, that have played Go for 4,000 years.
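[A quick back-of-the-envelope check of the atoms claim, under standard approximations that are not from the interview: a 19x19 Go board has 361 points, each empty, black, or white, giving 3^361 raw configurations (an upper bound, since not every configuration is a legal position), against a common estimate of about 10^80 atoms in the observable universe.]

```python
from math import log10

go_configs = 3 ** 361          # 19x19 board, 3 states per point (upper bound)
atoms = 10 ** 80               # rough estimate, observable universe
atoms_of_atoms = atoms ** 2    # "every atom had a universe of atoms within it"

print(f"Go configurations ~ 10^{log10(go_configs):.0f}")  # ~ 10^172
print(go_configs > atoms)           # True
print(go_configs > atoms_of_atoms)  # True: 10^172 > 10^160
```

So the claim holds with room to spare, at least under these rough approximations.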
MICHAEL MORELL:
So I suppose, Ben, like all technologies, AI has pluses and minuses from a national security perspective. So maybe we can kind of break it into those two. And maybe start with the downsides. How can it be used against us?
BEN BUCHANAN:
I think one of the things that concerns me the most, in the context of a nation like China, is how AI can enable authoritarianism. And this is the question I've asked my students on their final exam every single semester, because I think it's so fascinating, which is, will AI benefit democracy more than autocracy?
And there's a case to be made that AI will solve, or help to solve, some of the central problems of authoritarian regimes, namely centralizing power. And if you look at AI as a tool of surveillance, AI as a tool of facial recognition, AI as a tool of essentially enabling an ever more aggressive and ever more intrusive police state, then I think that works against the interests of democracies. And we need to think quite seriously about how we combat this, and take this technology and use it for democratic purposes.
MICHAEL MORELL:
You've also written and talked a lot about AI and cyber together.
BEN BUCHANAN:
That's right.
MICHAEL MORELL:
The link between, can you talk about that?
BEN BUCHANAN:
Yeah, so we're starting a project at Georgetown, at the Center for Security and Emerging Technology, focused on this intersection. And there's a lot of hype, and that's the first thing we need to say. I think there's a lot of hype in Silicon Valley around AI and cyber on the defense side.
But there's also some promise there. And one of the questions we're exploring, still in the early stages, is what could AI do for cyber offense? And I think a vision people often paint is that you'll get to some world in which you've got AI in defense and AI in offense playing this out in cyber operations. Again, there's some hype to that, but there probably are pieces of that that are true. And we're trying to sort the signal from the noise on that.
MICHAEL MORELL:
And what are we finding so far?
BEN BUCHANAN:
It's early days, but I think if nothing else, putting aside the machine learning piece and looking a little more generally at automation, it's fair to say that some of the most powerful cyber attacks in history have been the ones that have some automated component, that have the capacity to spread themselves or push decision-making to the code.
Stuxnet's a well-known example here. Another one, less well known but quite interesting, is the 2016 blackout in Ukraine. This is the second blackout that happened in Ukraine in the span of a year. The 2015 blackout was incredibly manual. The 2016 blackout had much more automation in the code. And one of the questions that we're asking is, what does that mean? How do we interpret this increasing level of automation, pushing closer to a fire-and-forget kind of cyber attack, where the code could find the configuration of the industrial control system and attack it without too much guidance? That might be a sign of things to come.
MICHAEL MORELL:
And what does a manual cyber attack look like, versus an AI-enabled cyber attack? What's fundamentally different?
BEN BUCHANAN:
If you look at the 2015 versus the 2016 blackout, in 2015, you had the hackers themselves executing each step. So they'd give a command, the malicious code would execute, they'd give another command. The 2016 blackout was pushing much more towards, they would launch the malicious code against the target, and it would make some decisions on its own about how the target is configured, and how to carry out an attack against that target. It's worth saying, the 2016 blackout code was not terribly effective; it failed in this capability in some key ways, doing less damage than it might otherwise have done. But I think it might be a sign of things to come, in which you're pushing more decision-making power to the malicious code, allowing you to go faster and with fewer loops back to human operators.
MICHAEL MORELL:
So we see the Russians doing this, do we see any other nation states trying to link these two together, AI and cyber?
BEN BUCHANAN:
I think it's probably fair to say that Russia has a high risk tolerance, and is content to be aggressive, and we see the most of Russian activity. I don't think we've seen a lot of public discussion of American activity in this area. It wouldn't surprise me if folks in intelligence agencies are thinking about what this could do. Not only on the attack side, in the case of the United States, but also on the data analysis side. Intelligence programs bring back an enormous amount of information; processing that information is a key part of the business of folks at NSA and the like, and you can imagine AI will help, but this is not something that's widely discussed.
MICHAEL MORELL:
And you've also written a little bit, and I saw you gave testimony, on the link between AI and terrorism/counterterrorism.
BEN BUCHANAN:
That's right.
MICHAEL MORELL:
Can you talk about that?
BEN BUCHANAN:
I think where this comes up the most is in two areas. The first is internet moderation, and there's a view that some people have, who are a little more technically utopian than I am, that AI will solve the challenges of internet moderation on platforms like Facebook and the like. And I'm pretty skeptical; I think the technology is not near the point where it needs to be to judge things like context. The same terrorist video in one context could be a recruiting tool, and in another context could be a legitimate news report.
One should go down, the other one should stay up on a platform like Facebook. So I was pushing back in that testimony a little bit, on the notion that technology's going to solve our counterterrorism problems on the internet.
Certainly, I think the second piece of this is how recommendation algorithms and the like on platforms like YouTube and Facebook are essentially governed by AI. It determines what you see on the internet. And that's certainly, as you know far better than I do, a breeding ground for terrorism, and a recruiting tool.
MICHAEL MORELL:
One of the things I hear over and over and over again, and I'm wondering, A, if you hear it too, and B, what you think of it, is the potential risk of a link between AI and biotechnology.
BEN BUCHANAN:
I think it's a little too soon to say. I don't worry about that as much in the terrorism context, with WMD. I don't think the capability is there. There's interesting work to be done with AI and bio, and AI and science. Probably the most promising work is something we saw from DeepMind, again, the same company that did AlphaGo, something they called AlphaFold.
And this made progress on one of the hardest problems in biotech called the protein folding problem, which we don't need to parse the details of, but it's a problem that essentially relates to how proteins can combine and entangle themselves. Making progress on this problem is key for drug discovery and the like. And I think AlphaFold showed there's some significant progress for AI in the biotech context, away from national security, but just in pure science, to advance beyond what humans have been able to do. DeepMind has made a lot of investments in the science area, and biotech is one of the areas in which they're making progress.
MICHAEL MORELL:
So any other big areas of AI's impact on national security that you think about, that we haven't talked about?
BEN BUCHANAN:
Probably the biggest one that people talk about is AI and autonomous weapons: the degree to which artificial intelligence should be used in kinetic weapon systems that kill, the degree to which those systems should have the authority to decide on their own to fire, and what the role of the humans should be. There are deep questions here that I think the national security state is only beginning to figure out. And you can imagine that democracies and autocracies would come to different answers on these questions.
MICHAEL MORELL:
And are people thinking about this, not only here, but also in places like China and Russia, are those conversations happening?
BEN BUCHANAN:
No doubt. And Russia has a very long history, in particular, of trying to push more automation into their systems. Even before the modern machine learning paradigm, if you look at some of what Russia's done with their nuclear systems, there are, I think, strong hints of automation there. China's a little harder to read, but you can imagine that this is something that would be of interest to the Chinese leadership.
Particularly if you don't have the strong officer corps the U.S. has, if you don't trust your people in the same way the U.S. military might, autonomy and control, again, serve the authoritarian purpose.
MICHAEL MORELL:
And at this early stage in the discussion, which way do you tend to lean?
BEN BUCHANAN:
I think the best analogy I know of is from Paul Scharre, who writes about this, and he says there's a distinction in how we think about this thing on December 6, 1941 and December 8th. And part of me feels like the real question with autonomous weapons is not what we think about them in the abstract, but when people are actually dying, are there principles here for which we're willing to send American warriors to die? In other words, we will not use autonomy because we believe in these principles.
That is the much more profound question. You don't often see that much tougher question asked: in a conflict that involves autonomous weapons that are better, which is, again, speculative, would we be willing to not use the weapons and to have people die instead? One of the things I like about AI is how it raises very deep ethical questions, as well as hard power national security questions.
MICHAEL MORELL:
So let's switch gears a little bit, Ben, if we can, and who has the lead today, United States, China, somebody else, in the use of AI for national security purposes?
BEN BUCHANAN:
I think it probably depends on which part of AI you're thinking about. If you look at image recognition, for example, facial recognition, it seems pretty clear the Chinese are ahead in that area. As I said before, there's some domestic reasons why they would pursue that.
If you're looking at integrating AI into other systems, the U.S. has made some pretty big investments in the last couple years, standing up the Joint AI Center at the Pentagon, for example, spinning out Project Maven.
We haven't always seen the results of that. You can imagine they're locked away in places, but that's an area where the U.S., I think, has invested more than in things like facial recognition for national security. So it depends on the piece of the AI puzzle that you're talking about.
MICHAEL MORELL:
And what about Russia? Are they doing anything in this area that concerns you?
BEN BUCHANAN:
There's a famous quote from Vladimir Putin, where he said about AI, "Whoever controls this technology will rule the world." So there's probably some directive from the top, but I don't think we've seen that play out at the level of investment that the United States and China have made. Probably for the reason that the United States and China just have more resources than a nation like Russia. But certainly, the Russian interest in autonomy, as I said, goes back quite a while.
MICHAEL MORELL:
And what skillsets do you, as a nation state, need to maximize the benefit you can get out of tying these two together?
BEN BUCHANAN:
There's two pieces to this. The first is the capacity to develop AI algorithms and software themselves. And the people who can do this, in some sense, are like NFL quarterbacks: there just aren't enough of them, and the demand is really high. And those people are primarily concentrated in the private sector right now. The second piece of this, that's particularly important in national security, is integrating these algorithms and these advances into your national security systems.
And that's something, I think, that's overlooked because it's not flashy, but it's really important. AI is not just magic pixie dust that you sprinkle on something and that automatically makes the technology you're already using better.
Integration and design, test and evaluation, those are all key parts of integrating AI into national security. And I think that's going to be where the rubber hits the road for using this technology well.
MICHAEL MORELL:
And you would think that the government needs a certain skillset that it might not have today, in order to do that.
BEN BUCHANAN:
That's right. And marrying geopolitical sensibilities with technical skill, putting the team together to do this, these are all hard organizational problems that are not really about the technology. They're about scaling integration of the technology into a much broader defense apparatus than many other applications of AI have to integrate with.
MICHAEL MORELL:
So Ben, what's the appropriate role of the government here, and the private sector, and then most importantly, how do you think about what the appropriate link is between the government and the private sector?
BEN BUCHANAN:
The camps are far apart. We've talked about DeepMind a couple times today; in the terms of sale for DeepMind to Google was a provision that they would do no defense work. And as a pretty broad generalization, many AI researchers have cosmopolitan views in which they feel like they're here to work on hard science problems.
Problems that, if solved, will benefit everyone, and not so much this hard power competition between nations. There's no better example of this than the dustup between Google and the Pentagon over Google's work on Project Maven, the Pentagon's AI project.
So the two sides are pretty far apart. There's certainly a frustration, I think, with some in the tech companies about activities of the intelligence community in the past, activities of the military in the past. And I don't see a lot of reason for near term optimism about bridging that gap. It'd probably be best for the country if the two sides could reconcile a bit, and understand where each is coming from, and why. But right now, the gap seems to be pretty wide.
MICHAEL MORELL:
It seems to me, anyway, like there's mistrust on both sides.
BEN BUCHANAN:
For sure.
MICHAEL MORELL:
If you're in the tech community, you worry about what the government's going to do with your technology. And if you're in the government, you're looking at these tech companies saying, "We don't want to work with you, because of our concerns about what you're going to do. But we are working with the Chinese on helping them perfect their surveillance state." So the government folks are sitting there scratching their heads saying, "Hey, what's going on here?"
BEN BUCHANAN:
That's right. And I think that captures the gap well. It's worth saying, not all tech companies feel this way. Amazon has been clear that they'll do more government work. But certainly, many of the AI engineers that I interact with, especially folks at Google and the like, have this view that we need to keep ourselves separate from national defense work.
MICHAEL MORELL:
Are there any lessons, Ben, to be learned from the relationship between the government and the private sector now on these technologies, and the early years of the Cold War? When the government and the private sector came together to really make some significant advances, that enhanced national security?
BEN BUCHANAN:
I think the biggest difference, and probably the biggest lesson, is that in those early years of the Cold War, the government was a very good customer, and in many cases, the only customer. Companies like Google and the like, tech companies in general, have many other customers that are easier to work with, that have fewer contracting restrictions, and that are, in many cases, just a much bigger market. So I don't think that for a tech company, as opposed to a defense contractor, there's necessarily an economic incentive to contort itself to work with the government.
To say nothing of the trouble it creates with some of their employees. So on the government side, I think there's a realization that the game's actually quite different than the old days of the Cold War, in that the U.S. is not the only or the best customer.
MICHAEL MORELL:
So Ben, we've talked a lot about AI. Is it the most important technology being developed today, as it relates to national security? Or is there something else that's more important? Are there other things that are just as important? How do you think about that?
BEN BUCHANAN:
Often I get asked to give a talk on AI and blockchain and 5G and quantum all in one. And I think we in Washington, D.C. often make the mistake of putting all these things in one emerging technology bucket. Of that bucket, I think AI stands alone in its importance.
I think compared to the other things I just mentioned, AI's far more significant for national security than quantum computing, which is a long way away, or blockchain, which is not nearly as useful as people seem to think. And within this emerging technology bucket, so counting stuff like cyber operations as emerged and already here, AI stands alone as the most significant.
MICHAEL MORELL:
And why is that?
BEN BUCHANAN:
I think it's, A, the nearest term. So we see enormous fundamental progress, whether it's things like AlphaGo or AlphaFold or other key advances. So we know there's something here, in a way we're not quite sure of with quantum computing and the like. And, B, I think it's so broad in its application.
AI can touch many different parts of the national security establishment, from intelligence analysis, to cyber operations, to autonomous weapons, that I think there's a lot to study here, and that's why it's so rewarding to work on this.
MICHAEL MORELL:
Is it also of great significance, because it potentially impacts all those other areas we talked about?
BEN BUCHANAN:
Exactly.
MICHAEL MORELL:
You said something interesting about blockchain just now, which is that maybe it's not everything it's made out to be.
BEN BUCHANAN:
I'm a skeptic.
MICHAEL MORELL:
Can you explain to people, what is it, and then why are you a skeptic?
BEN BUCHANAN:
One way of thinking about blockchain is just what they'd technically call a distributed ledger. So it's a high fidelity way of recording information that's pretty transparent. And there's a lot of hype around this, there's a lot of hype around bitcoin, but relative to what AI can do, it seems to me the blockchain applications are far more narrow than other technologies, far more questionable on whether or not they actually work, and far less likely to be integrated. You'll hear people talk about how blockchain will solve voting and the like, or will increase transparency in voting. I think it's probably the case that the less technology we have in voting, the better. I'm a fan, on that front, of paper ballots and going old school. I'm just skeptical of a lot of what people say about how blockchain will change society.
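[For the "distributed ledger" framing above, a minimal sketch of the core mechanism, tamper evidence through hash chaining. This illustrates the general idea rather than any particular blockchain; the record format and helper functions are made up for the example.]

```python
import hashlib
import json

def record_hash(payload: str, prev: str) -> str:
    # Hash the record contents together with the previous record's hash,
    # so every record commits to the entire history before it.
    blob = json.dumps({"payload": payload, "prev": prev}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def append(chain: list, payload: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"payload": payload, "prev": prev,
                  "hash": record_hash(payload, prev)})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev or rec["hash"] != record_hash(rec["payload"], prev):
            return False
        prev = rec["hash"]
    return True

ledger = []
append(ledger, "alice pays bob 5")
append(ledger, "bob pays carol 2")
print(verify(ledger))  # True

ledger[0]["payload"] = "alice pays bob 500"  # quietly rewrite history
print(verify(ledger))  # False: the chain exposes the edit
```

The "distributed" part, many parties holding copies and agreeing on which chain is authoritative, is where most of the real complexity (and, per the skepticism above, most of the hype) lives.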
MICHAEL MORELL:
And then I'm really interested in asking you how your students think about all this. How they come at it. Do they come at it the way Google employees might come at it, or do they come at it the way somebody sitting at the Pentagon might come at it? Or is it a mix?
BEN BUCHANAN:
The great thing about teaching in the security studies program at Georgetown is it truly is a mix. So I have students who are, for their day jobs, at CIA, at NSA, and they have, I think, some perspectives from that place. And then I have students who are coming out of the tech sector, and thinking about national security. And not (UNINTEL) and saying, "None of this matters." They wouldn't be taking my class if they didn't want to touch national security.
But I love the debates I get to have every Tuesday with students, where we set up these discussions, and people will have different views. What's really rewarding is that I think, at least the students I see, have a sense that this technology's going to matter.
And have a sense that it's very hard to make good policy or make good strategy without first understanding the technology. And that's something we push even in a fairly nontechnical program. You have to understand the technology to get the right answer on strategy.
MICHAEL MORELL:
So in these classroom discussions, with two different points of view, do they ultimately come together in a new point of view? Or what happens?
BEN BUCHANAN:
A good classroom discussion ends with each side being able to state the other side's argument so well that the other side will say, "Yeah, I wish I had put it that way." There's not always a change of views; people have their perspectives.
But what I seek is some capacity for fairness in discussion where people say, "Yeah, that captures my views well, and I understand the other side's views as well." And I do think ultimately when you're looking at thorny problems like AI and national security, we've got to get to some consensus on what the facts are, what our opinions are, how they differ. And I wouldn't say every class is perfect, but the nice thing about academia is you have the time and the space to foster that kind of conversation.
MICHAEL MORELL:
All right, so let's, if it's okay Ben, finish up here by playing a bit of a mind game. I want to ask you to imagine yourself as a historian maybe 50 years from now, or 100 years from now, looking back at this time on this issue. What do you think the themes would be 100 years from now, looking back?
BEN BUCHANAN:
The biggest question is how democracies are going to adapt to this. I think we know what the authoritarian playbook is going to be. It's a question of how well they can integrate the technology, and what the technology can do.
But the biggest question that remains unresolved is, what will the United States do, what will democracies do? Can we get AI talent to come to our shores, not just for education, but for work? Can we get AI talent to work in national security in a way that's consistent with their principles? Can we get a federal government that's often slow and bureaucratic to embrace this technology? Can we think about what this technology can and can't do in the context of long range strategic decisions?
From buying the B-21 bomber, to thinking about how we structure and analyze signals intelligence, we're going to make a number of decisions as a country over the next five to ten years, in a wide range of national security fora, in which this technology is going to be hyped, and we need to sort that hype out from what's real.
And we need to get that right. And that goes beyond just national security fora. Again, immigration is one of the key things we could do on AI to improve our competitiveness and bring talent here.
MICHAEL MORELL:
How short of talent are we?
BEN BUCHANAN:
Everyone's short of talent. And I think the biggest difference is that other nations, including our allies like Canada and Britain and France, have explicitly made pursuing AI talent a strategy. The United States has some very strong advantages, in that we educate a lot of the people. And many of them want to stay, and indeed, many of them do stay. But this is an area where I think we should continue to prioritize getting talent to the country.
MICHAEL MORELL:
So are you, at the end of the day, optimistic that we're going to be able to answer all of those questions? Or are you more pessimistic?
BEN BUCHANAN:
I'm optimistic that we have the chance, but it's not a fait accompli. I'm pessimistic because, time and again, we've not made these choices correctly before. I did my Ph.D. in Britain, where they're very fond of the Winston Churchill quote, "Americans can always be trusted to do the right thing after they've tried everything else." Let's hope that works out here too.
MICHAEL MORELL:
So Ben, thank you very much for joining us. Great discussion about AI. I also want to remind people again that Ben has a book coming out in a couple weeks called The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics. Ben, thanks very much.
BEN BUCHANAN:
My pleasure.
* * *END OF TRANSCRIPT* * *