Starr Forum: Artificial Intelligence and National Security Law: A Dangerous Nonchalance

MICHELLE ENGLISH: Well, welcome to today's MIT Starr Forum on Artificial Intelligence and National Security Law, A Dangerous Nonchalance.

Before we get started, I just wanted to remind everyone that we have many more Starr Forums, and if you haven't already, please sign up to receive our email updates.

We also have fliers of upcoming events, including our next event which is on March 15, The Uncounted Civilian Victims of America's Wars. The speaker will be Azmat Khan, who is an award-winning investigative journalist. She's also a New York Times Magazine contributing writer, and a Future of War Fellow at New America and Arizona State University.

Her reporting has brought her to Iran, Egypt, Pakistan, Afghanistan, and other conflict zones. And I'm Michelle. I'm from the Center for International Studies.

Today's talk is going to be followed by a Q&A, so we'll ask you to probably just raise your hands, and ask only one question, due to time. And when you're called upon, if possible, please identify yourself and let us know your affiliation, and so forth.

Now I'm thrilled to introduce the speaker of today's event, James E. Baker. Judge Baker is a Robert E. Wilhelm Fellow at the MIT Center for International Studies. He retired from the United States Court of Appeals for the Armed Forces in July 2015, after 15 years of service, the last four as Chief Judge.

He previously served as Special Assistant to the President and Legal Advisor to the National Security Council, and Deputy Legal Advisor to the National Security Council, where he advised the president, vice president, national security advisor, and NSC staff on US and international law related to intelligence counter-terrorism, foreign assistance, and military operations, among other things.

He served as general counsel to the NSC, and advised the Principals Committee, which included the secretaries of state and defense, the national security advisor, the Director of Central Intelligence, and Chairman of the Joint Chiefs of Staff, and the Deputies Committee.

Judge Baker has also served as counsel to the president's Foreign Intelligence Advisory Board, and Intelligence Oversight Board. He's also served as an attorney at the Department of State, a legislative aide, and acting chief of staff to Senator Daniel Patrick Moynihan, and as a Marine Corps infantry officer, including his reserve service in 2000.

Alas, he has done so much more, but I will end there. Please join me in welcoming Judge Baker.

JAMES E. BAKER: Well, thank you very much. Thank you. Thank you, Laura. Thank you, Michelle. Thank you, Center for International Studies, for allowing me to come up this year, and try and teach myself something about AI. You all will be the judge, in a few minutes, as to whether I have learned anything.

This is the first time out for this presentation. So let me tell you, right up front, that it's somewhere between seven minutes in length and seven hours. I'm not quite sure how long yet, so we'll find out.

As you know, my topic is artificial intelligence and national security law, emphasis on the law part. I know better than to come into MIT and purport to lecture anybody about algorithms or math equations.

In fact, I've chosen not to use slides, because if I put up a slide of a math equation or a game of Go, someone would ask me to explain the rules, and I would falter immediately.

So I am not a technologist. I am a bridger, and by bridger I mean, not a civil engineer, but someone who bridges communities, and translates.

My job is, I believe, to translate for you, into plain English, why it is you should care about artificial intelligence in the national security sphere. And my job is to explain to national security policy makers why they should care about national security law, as it applies to AI, and why they should care now.

So my goal is to try and speak in plain English on that, and to verify whether I have succeeded or not, I've invited my high school English teacher here, whose job it is to validate my use of English.

So here's what I'm going to do. I'm going to start with a few quotes and data points about AI. I'm then going to talk about the national security uses of AI. I will then describe some of the risks, and then in light of those risks, talk about how law and governance might address them. Is that a fair way to proceed?

Depending on how we do, we may have some time for questions and some answers at the end, maybe mostly questions. So let me start by referring to your first homework assignment, which is the Belfer IARPA study on national security and AI.

IARPA is the intelligence community's version of DARPA, which all of you at MIT will know of. And the study in 2017 concluded that AI will be as transformative a military technology as aviation and nuclear weapons were before. So that should get your attention-- as transformative a military technology as aviation and nuclear weapons.

In July 2017, China's state council, in theory their highest governing body, released an AI plan and strategy calling for China to become the world's leader in AI by 2030: to reach parity with the leading countries, i.e., the United States, by 2020, to become one of the world's leaders by 2025, and to be the primary AI power by 2030.

And they invested $150 billion in this. As you'll see in a moment, when you say $150 billion, one, it's hard to count $150 billion across the technology field, but secondly, a lot of it depends on how you define AI.

As many of you know-- how many of you actually know about AlphaGo and Google? OK. Just wanted to get a sense. Right. So in 2016, Google's AlphaGo computer, a DeepMind computer at the time, beat the world's best Go player.

And this was viewed, and is viewed, by many as a Sputnik moment in China, and by Sputnik moment, I mean the moment in the United States when we got notice that we were in a whole new technological world and place.

And in China, this was viewed as a Sputnik moment, in part because AlphaGo-- Go is a Chinese game-- and in part because it was a US Google computer that won at Go.

And how many people watched, live, the AlphaGo play the world champion in Go in 2016? Anybody in here watch that, by the way? Right. Not a Sputnik moment in the United States, but 200 million people watched the AlphaGo computer play the Go champion, live on television.

So they got the memo about AI in China. We did not. Lest Russia feel left out, here are two reassuring quotes from Vladimir Putin. "Whoever becomes the leader in this sphere, AI, will become the ruler of the world." That's happy.

And then my personal favorite, "When will it eat us? When will it eat us?" Vladimir Putin asks, and I'll explain a little bit more about being eaten in a moment.

And then, of course, you've heard Stephen Hawking and Elon Musk, on a daily basis, say things like this. "In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity." That's Stephen Hawking.

So what is it? And again, I'm conscious that I'm in a room where people not only can better define it, they can actually create it, but I'm going to give you a couple of definitions, and then tell you why it's hard to define.

First, here's the Stanford 100-year study definition. "Artificial intelligence is a science, and a set of computational technologies, that are inspired by, but typically operate quite differently from, the way people use their nervous systems and bodies to sense, learn, reason, and take action."

It's hard to define, for many reasons, in part because we anthropomorphize it. I just wanted to use that word and see if I could say it. But we tend to look at it as a human quality, right, and by using the word intelligence, we're thinking human intelligence. And what AI really is, is machine optimization.

So don't think intelligence, and don't think of love and kindness, right? Machines do what they're programmed to do. And AI is a form of machine optimization. It also incorporates various fields and subfields.

And that's another reason it's hard to define. More on that in a second. The premise behind AI is that if you can express an idea, a thought, or an action in numeric fashion, you can code that purpose into software and cause a machine to do it.
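
To make that premise concrete, here is a minimal, hypothetical sketch (the rule, the threshold, and the numbers are all invented for illustration): a policy idea-- "flag any ship that strays too far from its declared route"-- expressed numerically and handed to a machine.

```python
# Hypothetical illustration: a policy idea expressed numerically.
# "Flag any ship that strays too far from its declared route."

def flag_deviation(declared_position, observed_position, threshold_km=50.0):
    """Return True if the observed position is farther from the
    declared position than the threshold (distances in km)."""
    dx = observed_position[0] - declared_position[0]
    dy = observed_position[1] - declared_position[1]
    distance_km = (dx ** 2 + dy ** 2) ** 0.5
    return distance_km > threshold_km

# Once the idea is numeric, the machine applies it tirelessly, at scale.
print(flag_deviation((100.0, 200.0), (100.0, 210.0)))   # False: within 50 km
print(flag_deviation((100.0, 200.0), (180.0, 260.0)))   # True: well off course
```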

So one question for technologists is, what exactly are the limits on coding things? I don't know the answer to that question. I sure would like to know.

So I define the ability-- I like to stick with AI's definition as machine optimization, and get away from that anthropomorphizing. So let me tell you a couple more things about AI.

How long has it been around? It's been around since Alan Turing at Bletchley Park. You've all seen the movie, The Imitation Game. The Imitation Game is Alan Turing's game where you win the game when you can get a computer to talk to a human in another room, and the people in the other room, the humans, think they're talking to a human.

That's The Imitation Game. I never really quite figured out why the movie was named that. But that's Alan Turing, and that's The Imitation Game. The first AI conference was held at Dartmouth, it pains me to say, in 1956. They wrote a grant and it's remarkable. I mean, it's interesting, the hubris.

They said, we're going to have a conference of 10 scientists and figure out AI. We're still working on that piece, but that sense of confidence can be both a positive force as well as a negative force.

If you wade into the literature on AI, there's a lot of talk about seasons. It feels very New Englandy, and those of you who have read about AI will know, there's this constant discussion about there is an AI winter. And then there's a pause, and then there's another winter, and they keep on having winters.

And that basically means a pause in the funding for AI, as well as a pause in the progress of AI. But what I'm trying to tell you, and what I would be telling a national security official in the United States government, if I could, is not only are we in a spring now, summer is right around the corner. And it is going to be a hot summer.

So forget the winter piece. We're in, at least, spring, if not summer. There are several factors behind that. One is the growth of computational capacity. An iPhone 5 has 2.7 times the computational capacity of a Cray-2 supercomputer from 1985.

I started in the Marine Corps, as you heard, and we whispered about Cray-2 supercomputers. They were the most unbelievable secret tool in the world, and not realizing that the iPhone would be coming along.

Data and big data. AI works, in many cases, on data, and think about the internet of things, and all the data you're currently sharing with the world on Facebook, Twitter, and every other place.

Cloud computing. How many of you have been working with FakeApp? Don't admit that, by the way. If you read the New York Times yesterday, you read the article about FakeApp and how you can use it to morph pictures near perfectly.

And of course, because this is the United States, what are we morphing near perfectly? Pornography. That's its best use. But cloud computing, how'd they do it? The reporter went and he contracted out with the cloud. He needed computational capacity. He didn't have it on his home computer. You can now get that through cloud computing, which solves a lot of the problems.

Algorithms and software is the next area that has helped blossom this field. This is why we've gone from winter through spring, and are about to hit summer.

Knowledge and mapping of the brain-- in the last 10 years, we've learned more about the brain-- I'm making this up-- but more about the brain than 50 times what we knew before. That's all made up, but in part, because of the study of brain injuries like TBI, and post-traumatic stress disorder coming out of the wars, and also football studies, we're learning things about the brain that we did not know before.

And a lot of the AI people are trying to model neural networks from the brain, some people, literally, by creating 3D versions of brains, and other people, metaphorically, by creating neural networks.

The dotcom Facebook phase, right? That's led to people pumping money into this in a way they didn't before. You know Napoleon's phrase about, in every soldier's backpack, there's a Marshal's baton? Of course you know that. You all seek to be Marshals of France. No.

In every dorm room, we now think, is a billionaire's portfolio, if you just come up with the right idea. So that's part of the drive behind AI. And then all these factors come together in what is known as machine learning.

And here, I would encourage you to go speak to an MIT professor to explain machine learning to you, but what machine learning, in general, is, is the use of algorithms, data, and training of a computer to allow that computational machine to detect patterns, classify information, and make predictions better than humans can in certain spheres.
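
A minimal sketch of what "algorithms, data, and training" means in practice, with everything-- the features, the labels, the numbers-- invented for illustration: a toy classifier learns a pattern from labeled examples and then classifies examples it has not seen.

```python
# A toy "machine learning" loop: learn a pattern from labeled data,
# then classify new data. Data and labels are invented for illustration.

def train(examples):
    """Compute one centroid (average point) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Classify by whichever centroid is closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Hypothetical training data: (features, label) pairs.
training_data = [
    ([1.0, 1.2], "benign"), ([0.9, 1.0], "benign"),
    ([3.1, 2.8], "suspect"), ([2.9, 3.2], "suspect"),
]
centroids = train(training_data)
print(predict(centroids, [1.1, 0.9]))   # -> "benign"
print(predict(centroids, [3.0, 3.0]))   # -> "suspect"
```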

I'll give you a couple of examples of those in a second. What are the drivers going forward? In AI, hardware, right, supercomputers and chips, data, and algorithms, and then the power of your commercial sector. We have Silicon Valley.

So here, a comment on China. China is not monolithic. It's interesting to think $150 billion is a lot of money. They have advantages in focus, centralized authority. They have data.

And notice that in 2015, they passed a law prohibiting any data involving Chinese persons from being shipped overseas. And that's not just a US v Microsoft jurisdictional matter. That is about AI data and training, among other things.

And they have Baidu, Alibaba, and Tencent. So I would love to say we have the advantage of market incentive, and they have the disadvantage of communist centralism, but they also have the advantage of market incentive, in the form of Alibaba and Baidu.

But here I would say, break it down, and look at the particular sectors, like hardware, software, algorithms, and that sort of thing, and figure out who's good at what, and where the edges come.

Now finally, a word about being eaten. Remember Putin? "When will it eat us?" In the field, there are generally three categories of AI that people talk about. There's the current state of affairs, which is narrow AI, and that's when an AI application is generally better than a human at a particular task, especially pattern analysis.

AGI, or artificial general intelligence, also known as HLMI, human level machine intelligence, is that point in time-- and it's not a literal point in time. It's a phase-- where an artificially empowered machine is generally better than humans at a number of tasks, and can move fluidly from task to task, and train itself.

And then there's artificial super intelligence, and that occurs when machines are smarter than humans across the board. So that machine that is smarter than a human can plug into the internet, fool you into thinking it's not plugging into the internet, and do all sorts of things.

Now how does that end up with you getting eaten? It comes in the form of what the literature calls the paperclip machine, and they picked the paperclip machine because the emphasis here is on a machine that neither hates nor likes, but simply does what it's trained to do. And that is make paperclips.

And eventually the paperclip machine, which is optimizing making paperclips, runs out of energy, so it looks around and says, where can I find more energy since I've already tapped the grid off the internet? And it looks in a room like this, and it sees sources of carbon, and then powers other machines to turn those sources of carbon into energy, to make more paperclips.
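
As a caricature of the underlying point, here is a toy, entirely hypothetical optimization loop: the objective says only "more paperclips," so nothing in the code ever asks whether consuming a resource is acceptable.

```python
# Toy illustration of the paperclip thought experiment: a single,
# unconstrained objective. All quantities are invented.

resources = {"wire": 100, "grid_power": 50, "everything_else": 10_000}
paperclips = 0

# The objective is simply "maximize paperclips." There is no term in it
# for what the resources are, or whether they should be off limits.
while any(amount > 0 for amount in resources.values()):
    source = max(resources, key=resources.get)  # grab the biggest pool left
    resources[source] -= 1
    paperclips += 1

print(paperclips)  # 10150 -- every resource converted, none spared
```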

Now if I were briefing a national security official in the United States government, would I start with the paperclip machine? No, I'd be escorted out by whomever, the Secret Service or the Marshals, within 17 seconds.

What concerns me about Putin's statement about being eaten is that it means he's being briefed at the highest levels of the Russian government on super artificial intelligence. Now I'm delighted that he's worried about the paperclip machine and getting eaten. That could solve a number of problems, but it worries me that he is that far into the conversation, that that's the level of detail. You follow?

So why might Vladimir Putin get briefed on AI? What are the national security applications? You will have heard many of them, but perhaps not all of them. First, military applications, right? And here people tend to jump immediately to robots.

And the Russians are indeed making a robot they call Fedor, which can shoot and carry heavy weights, basically an infantryman-- or if they're really heavy and they're really good shots, they're essentially Marines.

But there's a lot more to it. So we talk about autonomous weapon systems, and lethal autonomous weapon systems. That's what you probably have contemplated. A couple of points about these things.

First, we've had autonomous weapon systems since the 1970s. So to the military, at least, the concept of an autonomous weapon system is not a new thing. Some of you will know the Phalanx, Iron Dome, and C-RAM. These are all weapons systems that, in one way or another, have an autonomous capacity to them.

Second, DoD, the Department of Defense, has gotten the memo about AI. They initiated something in 2014 called the Third Offset, and an offset, in DoD speak, is the effort to harness the nation's technology to offset an opponent's advantage.

So the first offset, for example, took place in the 1950s. And it was addressed to the Soviet Union and Warsaw Pact's advantage in manpower in Europe. And the effort was to develop tactical nuclear weapons to offset the Soviet advantage in Europe.

DoD came out with the Third Offset in 2014, and started implementing it. And what's notable about it is that it's a technological offset that seeks to offset the opponent's advantage not through a particular weapon, but through the general use of technology and, in particular, AI.

So DoD is putting its money where its mouth is, and its energy as well. The leader in this area was Deputy Secretary of Defense, Bob Work.

Next point, if you want to imagine how AI might enable offensive and defensive weapons, the featured tool here that all sides are working on-- the Chinese for sure, probably the Russians-- are swarms.

And the use of swarms of AI birds, objects, robots-- call them what you will-- pieces of metal, to work in unison, both as offensive elements, perhaps to attack an aircraft carrier. You can imagine. Think about kamikaze pilots, but now think about kamikaze pilots without the supply problem, or the moral dilemma problem.

And you can use AI as chaff. Chaff is metal objects you throw out the back of an aircraft to try and get a missile to chase the metal objects, rather than the aircraft itself. You could deploy a swarm instead, and send the missile on quite a goose chase.

So that's what a lot of the literature is about right now. Just as you were not spending your time watching Go matches, you're probably not spending your time reading military law journals and doctrinal publications, but they're spending a lot of time on swarmology.

But what else can AI do in the military context? Logistics. You want to plan D-Day? Imagine planning the D-Day invasion, but now you have Waze. You all know Waze, right, for traffic?

So it's Waze that tells you exactly what to load on which ship, and then the weather comes in and it's lousy, so you reprogram Waze, and it says, OK, go here instead, on this day, at this time, with the ship loaded this way.

Training. You can use it for war gaming. Testing. You can test weapons with it. Imagine anything that involves danger, repetition, or something that can be diminished by human fear, and how an AI-optimized machine can better address those situations.

So now intelligence, how can AI be used for intelligence? Remember that we're talking about pattern recognition. And one of my favorite writers in this area, a person named Ryan Calo-- Ryan Calo is at the University of Washington.

And one of the reasons he's one of my favorite people in this field is because he does, in fact, write in English. I can understand what he writes, and he doesn't provide a math equation after each sentence.

And he said the whole purpose of AI is to spot patterns people cannot see. That's current narrow AI. So who wants to do that? Intelligence, right? That's the whole business of intelligence.

Think about image recognition, voice recognition, sorting, aggregation, prediction, translation, anomaly identification, and everything that occurs in cyberspace.

Collection. You can collect information. For example, if you are trying to figure out whether North Korea was violating a sanctions regime, you can aggregate all the information available electronically about shipping routes, bills of lading, and so on, and track them and find patterns.

If you're doing analysis, you can get all the ISIS Facebook posts, and see if you can find patterns in them, patterns of message and patterns of location. With FakeApp from yesterday's New York Times, imagine what you can do there if you're engaging in an effort to destabilize another country's democratic elections.

And if you aren't scared by AI, and you shouldn't be scared by it, you should decide to regulate it. How many of you read the 37-page indictment from Bob Mueller, from two weeks ago?

After you finish reading the Belfer IARPA study on AI, I would request that you all read the 37-page indictment, and then ask yourself, how would AI enable, further enable, what occurred in that context?

Do not go to social media or to cable news networks to answer that question. Go to the indictment, and then apply whatever you think I said today. Counter-intelligence anomalies, right? The whole game of counter-intelligence anomaly detection is, who's doing something out of pattern, and why? AI is perfect for figuring that out.
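
To make "out of pattern" concrete, here is a minimal, hypothetical sketch: flag whichever observations sit far from the historical norm. The scenario, data, and threshold are all invented; real systems would be far more sophisticated.

```python
# Toy anomaly detection: flag values far from the historical pattern.
# Badge-in hours are invented for illustration.

def find_anomalies(history, observations, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the mean of the historical data."""
    n = len(history)
    mean = sum(history) / n
    variance = sum((x - mean) ** 2 for x in history) / n
    std = variance ** 0.5 or 1.0  # guard against a zero spread
    return [x for x in observations if abs(x - mean) / std > threshold]

# Hypothetical: an employee's usual badge-in times (24-hour clock)...
usual_hours = [8.5, 9.0, 8.75, 9.25, 8.5, 9.0, 8.75]
# ...and this week's badge-ins, including one at 2:30 a.m.
this_week = [8.75, 9.0, 2.5, 8.5]

print(find_anomalies(usual_hours, this_week))  # -> [2.5]
```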

Now let's go to Homeland Security. Think about watch lists. Think about facial recognition. Decision making. The hardest thing to do in government, other than make a decision, is fuse the intelligence, in real time, to make a timely decision.

In my day, the best way to fuse intelligence was to have a principals meeting, and see what intelligence the Secretary of State brought, the Secretary of Defense, the DNI and so on. That's not a great method. AI can do this better than any other method I've seen. It can model as well.

All that's great news if you're on our side of the table, perhaps. Think about how AI and these methods can help if you're running an authoritarian regime, and you have something called a social credit system. You all know what I'm talking about there. That's the new Chinese mechanism.

If you jaywalk, you are called out, but lest you think this is just a Chinese feature, in the UK, if you talk on your cell phone, handheld, while on the highway, you will receive a license suspension in the mail, as well as a photo of yourself on your phone.

So AI has wonderful application for social control. And of course, non-state actors will want to empower-- think about how AI can be used to enhance IEDs. Take the swarm concept, and you do not need to use the cinderblock-on-gas-pedal method anymore. You can use other methods to empower IEDs.

So now, I've done the first two sections of what I claimed I was going to do. I described what AI is, and then I've described some of the national security uses for AI. Now I'm going to turn to some of the risks, and then I'm going to talk about some mechanisms for legal regulation and governance.

Now because I'm in the business of national security, I talk about risk, and you tend to do worst-case scenario planning. The record should be very clear that AI holds great promise for many things that are positive, including in the area of oncology.

AI-enabled machines can read tumors and predict cancer better than doctors, period, time after time after time, by huge percentage points. Why? Because it can see patterns. It can break pictures down into three-dimensional sectors in a way the human eye cannot do.

It can do pixels in a way the human eye cannot do. So if you are getting tested for cancer, you don't want the machine to print out a little thing saying, congratulations, you have cancer and will die next week, based on probability analysis. But you do want the machine to tell you whether your tumor is benign or malignant, and so on, and then talk to a doctor.

So that's an example of just one promise of AI that is not involving shopping algorithms. So here are some of the risks in the national security area. Speed.

I talk about the pathologies of national security decision making, and the classic pathologies, besides cognitive bias, are speed, meaning you have too little time, lack of information, and the national security imperative, the drive to solve the national security problem.

Clearly, AI is going to help mitigate the issue of too little information in too little time. That's a positive. But it will also, potentially, grotesquely shorten the amount of time people have to make decisions.

Think about hand-to-hand combat in cyberspace, and if that's AI enabled, and there's no person in the loop making a decision. That's an example. I worry that AI will both be used and not used, used in a wrong way, and then not used.

It is an advantage to have it as a probability predictor, and so on, but it will only work if the people you're briefing and the people you're working with-- and that means people like me, not people like you, meaning people who are less comfortable with technology-- understand why it is that there's a 76% probability that Jamie Baker is a terrorist, and not a former secretary of state, or his son.

And you can see how AI, which is good at telling you what the percentage possibility is, will be less good at telling you just which Jamie Baker you're dealing with.

Policymakers love to ask intelligence people, are you 87% sure? Are you 93% sure? And the intelligence people want to say, it is our informed judgment that-- and I worry that AI will drive us toward math-driven national security decision making.

And some national security decision making is intuitive, whether to go or not on D-Day. Plug that into an algorithm, and we'd still be waiting for perfect weather and circumstances to make the cross-channel landing.

And then military command and control. The US military currently uses what they call the Centaur model, which is augmentation of human capacity, rather than completely autonomous weapon systems. You see the point in why it's a centaur. A centaur, part human, part animal. We're not prepared to make it totally animal.

And the previous secretary of defense stated, essentially, a no-first-use policy, which was, we will not go to fully autonomous weapon systems in the first instance, but we will reserve the right to respond in kind.

So that's interesting and risky. The law of unintended consequences. Icarus learned the hard way that not all technologies work just as intended. But you do not need to go to mythology. You can go to the Mark 14 and 18 torpedoes.

Do you remember those? What do you call it when the torpedo comes back on you? It has a name, right? Other than, oh, my god, and that's not what you would say. Yeah.

So usually, when people say I was there, and they're telling you a military circumstance, and if every word doesn't start with F, you know they were not there, and it's not an accurate rendering of the conversation at the time.

But the Mark 14 torpedo had a propensity to, one, hit the enemy ship and not explode, which tends to clue the enemy ship into the fact that you're there. So that's an example. I'll give you a few others.

Post-Sputnik, our Sputnik moment, what did we do? We ran out and launched two satellites. Both of them exploded on the pad. Apollo 13, the Challenger, the Columbia. Stuxnet. Sort of worked as intended, but then it jumped the rail, didn't it?

And then Sydney Freedberg and Matt Johnson have pointed out that even when the technology works exactly as intended, there are enormous interface issues between humans and technology, when you hand off, and they use as an example the Air France 447 flight which crashed. You remember, it crashed in the Atlantic going from Brazil to Paris?

And there were a couple of friendly fire incidents involving the Patriot batteries in 2003, where the technology worked exactly as intended. But what happened is the technology passed the conn back to the human in the loop, and they weren't sure what was happening until it was too late.

Are you all familiar with the Vincennes incident, which was the incident where a US Aegis cruiser shot down an Iranian Airbus, a commercial airliner? And that has been studied as both an example where the technology worked and the interface did not, and also a place where AI-enabled machines would likely have prevented that from happening.

One of the issues was, was the aircraft ascending or descending, ascending indicative of commercial air, descending indicative, potentially, of an attack. But if you really want to stay up tonight, remind yourselves about the Petrov incident. How many of you are familiar with the Petrov incident?

And this is an example of the importance of having a human in the loop, and the fact that technology does not always work as intended. And this is an incident from 1983, where Lieutenant Colonel Stanislav Petrov was the watch officer at the command and control center for Soviet rocket forces. And they got indication of a US first strike, and he was on the clock.

And under doctrine, he was required to immediately inform his chain of command, and up to the Politburo for responsive action, and he sat on it. He said it does not look like what it should look like. And he sat on it. And he was being yelled at, yelled at. He was a lieutenant colonel. It takes a lot of guts, especially in the Soviet system, as a lieutenant colonel, to not follow direction.

He did not, and eventually the machine corrected itself. It was a technical error. Lest you think this is good old Soviet technology, go to 1979, exact same scenario.

National Security Advisor Brzezinski is woken up in the middle of the night by the DoD command center, and told it's a Soviet first strike en route, and you have x minutes-- he was told the specific numbers of minutes-- in which the president had to be informed, and respond.

He sat on it until the system corrected itself. AI is instantaneous; one of its national security advantages is that it can do things instantly. Now think about that.

Foreign relations impact. Larry Summers, who sometimes gets things right and sometimes does not, has predicted that by 2050, one third of the world's 25- to 54-year-old male population will be out of work, as a product of AI.

Doesn't matter if he's right or wrong. If a significant portion of the male populace of the world is out of work because of AI, that is a source of instability and, therefore, a national security issue.

AI can deepen and further divide the north-south divide. It can give new power to smaller states, a version of the Singapore effect, perhaps. Then there are supply chain and counterintelligence issues. Think about today's news story about Qualcomm.

Jurisdictional issues. Think about US v. Microsoft. You can ask me about that if you really care. Asymmetric threats, like interference in elections, and the benefit to authoritarian regimes. So that's risk number whatever: foreign relations impact.

Then we have the arms race. The arms race. Here I would say that there is a community of AI think tanks that in great, good, and honest earnestness, hope that we will not be in an AI arms race. One of the Asilomar principles-- that's one of the sets of ethics principles on AI-- is in fact-- let me see if I have it here. Doo-bee-do. Ah, I do.

The principle is that an arms race in lethal autonomous weapons should be avoided. That is the principle. Goodbye. We are in an arms race, right? That's the national security imperative. The AI think tanks still wish and talk about a Baruch plan.

The Baruch Plan was Bernard Baruch's proposal at the end of World War II. The theory was to turn over nuclear weapons capacity to the United Nations, and have it control it.

You are not going to have a Baruch plan in AI, because there is too much to be gained by it, too much security advantage. And who, by the way, is going to trust Vladimir Putin, who thinks that whoever controls AI will control the world?

So we are in an arms race. And there are all sorts of aspects about arms races that we will need to address. More on that in a moment, if we have time and you wish.

And then there's the existential risk, and this is where a lot of people have spent their time. And this is where Elon Musk spends his time. And I am not going to spend my time on it, but let me just tell you there's three camps here.

And this is the part that people love to yabber about. And this is, when you get to super artificial intelligence, will it be the end of humanity or not, right? This is the paperclip monster issue.

And there are three camps, and the first thing you should know is, there are generally three camps here. Camp one is represented by James Barrat, who wrote the book, Our Final Invention. It's our final invention because you're gone afterwards.

The institute at Oxford that studies AI is called The Future of Humanity Institute. That's the stake they think is in the game. So you have the camp that says, not only is this a bad thing, it will be the end of humanity.

I can talk about this, but I'm not going to for a reason I'll tell you in a second. Second camp. Well, we're not quite sure. It could be friendly, or it could be unfriendly. This is the fork-in-the-road camp. This is where Stephen Hawking is.

And the technologists say things like, we'll figure that out. Don't worry. We'll fork right, or left, whichever way we want to go at the time. And then there is the third camp-- these are my camp notations-- which is the stay-calm-and-carry-on camp, right?

And that's largely your technologists. That's everybody at MIT, probably, and that's the people who have the optimism that this is all going to work out, and that's because they're going to write the algorithm at the last minute that will make it work out.

But one thing I picked up on, partly because I'm a lawyer and a judge, and I kind of notice funny words sticking out-- here are some of the leading statements from the stay-calm camp.

This is the Stanford 100-year study on AI. "While the study panel does not consider it likely that near-term AI systems will autonomously choose to inflict harm on people, it will be possible for people to use AI-based systems for harmful, as well as helpful, purposes."

"Contrary to the more fantastic predictions for AI"-- Elon Musk-- "in the popular press, the study panel found no cause for concern that AI is an imminent threat to humankind." I'm like, wait. It's not an imminent threat? So I can go to work tomorrow?

And then back to my friend, Ryan Calo. "My own view is that AI does not present an existential threat to humanity,"-- awesome-- "at least not in anything like the foreseeable future." So that's the optimistic camp.

But I don't worry about the existential threat because I think we're going to do ourselves in before that, because we're going to mishandle AI as a national security tool and instrument. So I really don't worry about getting to the existential threat piece.

So now this gets to us. How should we respond to these risks? Are you happy with the timing? Should I talk faster?

PRESENTER: We'll have 30 minutes at the end.

JAMES E. BAKER: Yeah, we will. That's about where we'll end up. OK.

OK. So how should we respond? And here I have a helpful 157-part program I'd like to present to you, but instead I've decided to go with two themes, because my goal here is to get your attention, not necessarily satisfy it, because that would just take too long.

So how should we respond? First, we should start making purposeful decisions. National security policy should not be made by Google. That's wrong, as a matter of democracy. It's wrong as a matter of how the equities are balanced.

And by the way, I don't think national security policy should be made by the FBI either, which is my next point, which is we should not make national security policy by litigation, which is where we're headed.

If you want to know what will happen if we don't fill the void on AI national security space with law, you will get endless iterations of the FBI Apple litigation from 2015, over the San Bernardino shooter's phones.

And litigation is a terrible way to make informed policy decisions. Yes, it can serve as a forcing mechanism-- it can make you make decisions you don't want to make-- but here are some of the problems with litigation.

First, the Apple FBI thing should also tell you that where national security is concerned, the government will make novel uses of the law. Do you remember what law was at stake in the Apple FBI dispute? If you get the answer, something very good happens to you. Ken will buy you dinner.

AUDIENCE: The All Writs Act.

JAMES E. BAKER: You've got dinner right there. The All Writs Act. You've got to finish the sentence though.

AUDIENCE: No, that's all I got.

JAMES E. BAKER: That's all you got. All right, and you did not give the complete and full name. What you meant to say was the Judiciary Act of 1789. Get that? The Judiciary Act of 1789, probably not intended to address an iPhone dispute in 2015.

But that was the law that was at stake in that case. Part of the Judiciary Act is the All Writs section which gives the courts the power to issue the writs necessary to enforce their jurisdiction.

So litigation, there will be litigation, right, because unlike Apple FBI-- Apple FBI is Little League, compared to the litigation that will occur in AI, where the stakes are so much higher.

Think about the amount of money Google has invested in it, Microsoft has invested in it, and so on. Litigation accents the interests and voices of a few parties, not society at large.

And the government process for responding to litigation is very different. The government tends to-- it gives you lawyers making arguments that will win the case, rather than lawyers making arguments that will serve society best, going forward. And even in the FBI Apple case, FBI did not necessarily reflect the views of the United States government.

The largest critics, the most vocal critics, of the FBI position, in the end, turned out to be members of the intelligence community, who disagreed with FBI's perspective on encryption. And I think it's safe to say, or we can stipulate for our purposes, that Silicon Valley, as a metaphor, and the government work least well at moments of crisis and litigation.

It's called the adversarial process for a reason, and if you want informed and sound policy, you need to do it in calm moments, not in litigation. Ethics is not enough. There's some ethical codes out there. I will leave it to the MIT professors in the room to tell you whether they feel constrained by the professional ethical codes that bind them.

But I read to you one of the principal statements of ethics from the Asilomar principles, and that's not going to do the trick. And I know, from my own Model Rules of Professional Conduct for lawyers, that they define the basement of conduct. Basically, don't steal from your client. And that's not going to do the trick.

Law serves three purposes. It provides essential values, national security as well as legal. It provides essential process, and it provides the substantive authority to act, as well as the left and right boundaries of action.

I'm going to give you an example of each, which is about all I can do here in the time we have. But this is where I want you to start thinking about how the law can serve to guide, in a proper way, in a wise way, in an ethical way, the use of AI in national security space.

Starting with values, the most important national security and legal values in this area come from the Constitution itself. The values that will be debated, and litigated, and tested, are the values in the First, Fourth, and Fifth Amendments, right? We all know that. Perhaps you know that.

The First Amendment, why the First Amendment? Think about analyzing Facebook postings for Russian interference. The government analyzing Facebook postings for Russian interference, how that might lead to a First Amendment challenge, that the government is impeding, restricting, or chilling your voice.

Think about the AI researcher who's traveling to the United States to work at MIT, and can't get in because there's a travel ban. And think about disputes over funding. I'm going to give you your government funding, or I am not, depending on what you're doing with AI. All have First Amendment dimensions.

The Fourth Amendment, right? Everybody familiar with the Fourth Amendment? In essence-- so that was silence-- how many of you could recite the Fourth Amendment? If you're working in this space, you ought to know the Fourth Amendment by heart.

The summary of it is, the Fourth Amendment essentially protects you from unreasonable searches and seizures by the government, and in many cases, but not all, requires the government to have a warrant before it engages in a search or a seizure.

Critically, it protects you from unreasonable. So the whole debate here is, well, what's reasonable? And since 1979, the answer to that question has come in the form of the third-party doctrine. Just by chance, has anybody in this room ever heard of the third-party doctrine? For real. You're not just playing it safe? I won't call you.

So the third-party doctrine-- and this is critical, because think about all the data, all the information out there on which AI depends. It depends, in part-- depending on how we interpret the Fourth Amendment-- on the third-party doctrine, which posits that if you share information with a third party, you do not have a reasonable expectation of privacy in it.

It is like, but not identical to, the attorney-client privilege. If you have a privilege between you and your attorney, but you tell someone else the information, you are said to have waived that privilege. But the third-party doctrine is very different.

It says, if I have a cell phone, and I call someone, and the cell phone company needs the information-- obviously the numbers-- to put the call through, I've surrendered that information voluntarily to a third party, the phone company. I have now lost my reasonable expectation of privacy in it, from a privacy standpoint and a Fourth Amendment standpoint.

Never mind that the case this all depends on dates to 1979. It's a purse-snatching case, and the piece of information they were garnering there was information about who had called the witness. All it was was a trap and trace on the phone call. It wasn't content. It wasn't all this data. It was just, was the guy who was accused of the crime calling the witness to cause trouble?

And the Supreme Court said, nope, that gets in. That evidence is permitted because you had no reasonable expectation of privacy in it. That is still the law. That's good law today, and it covers everything that you share. So that's point one.

Point two, the US government generally takes the view, and correctly so in my view, that if it lawfully comes upon your information, it may use it as it sees fit, right? So once it has the information-- perhaps it lawfully obtained it this way-- it now can use it for other purposes as well, provided two other things have not happened.

First, provided the government has the authority to use the information in the first place, which is not a Fourth Amendment question, but an authority question, and provided that Congress-- that the law-- has not otherwise regulated that information.

So, for example, you have no Fourth Amendment privacy protection in that third-party caller information, or in the data you send through an ISP, because you've shared it with the company. But Congress, in 1986, passed the Stored Communications Act, which gives you an additional right, a statutory right, to privacy.

And that's currently under dispute right now before the Supreme Court in the US v. Microsoft case, involving the storage of data in Ireland. Why does all this matter to you? Because the whole AI system is predicated on data and information that you're currently sharing with private parties, and with the government.

And if the United States Supreme Court, which is hearing a case right now called Carpenter, which could put this in play-- it shouldn't, but it will because they're very nervous about the aggregation of information-- it's just not the right case to do it. All of this could change the AI playing field.

The other thing is, technology could solve some of the privacy problems, and more on that in a second, or not. OK. And then the Fifth Amendment. The Fifth Amendment, for those who have not studied it recently, the Fifth Amendment essentially provides for due process of law, and it has the Takings Clause. How might both of those come into play?

Due process of law, essentially, the theory behind it is, the government cannot deprive you of your liberty, your liberty interests, or financial interests without due process of law, an opportunity to be heard and to make your case. Fair enough?

Part of the AI algorithms, with deep machine learning, is based on the concept of the black box. And the black box, it turns out, is the neural networks within the computer. So you put the input in, and you get the output, and somewhere between the input and the output, which predicts whether you have the tumorous cancer or not, the algorithms weight and classify the information in the pictures, thousands of times through the neural networks.

And so I'm thinking, OK, I get it. I don't understand a thing about this. Let me go ask this smart person how that works. And they say, oh, it's the black box. I say, this is really interesting, but how does it work? We don't know. We just know that the output is accurate 93.2% of the time.
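
A hedged sketch of that gap, using entirely invented, synthetic data: a small neural network that scores well on held-out examples, while the "explanation" it can offer is just arrays of learned weights, which give a judge, or anyone else, no human-readable reason for any individual answer.

```python
# Hypothetical illustration of the "black box": decent accuracy,
# no human-readable explanation. All data is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
# Invented ground-truth rule the network must rediscover.
y = ((X[:, 0] * X[:, 1] + X[:, 2]) > 0).astype(int)

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Typically well above chance on data the model has never seen...
print("held-out accuracy:", model.score(X_test, y_test))
# ...but the only "explanation" on offer is layers of numeric weights.
print([w.shape for w in model.coefs_])  # e.g. [(10, 32), (32, 32), (32, 1)]
```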

Try explaining that to a judge. That's not going to work, and that might violate your due process. So that's where that comes in. Takings Clause. Everybody familiar with the Takings Clause? The Takings Clause says the government can seize private property for the public good. There's no question they can do that. That's how they build highways.

But they have to pay just compensation for it. How might that come up in the AI context? What if you're a company in the United States, where most of the research is occurring, and you invent a great AI algorithm that can do great things that have national security value? Maybe the government will want to have access to that. OK, so you can see where that comes in.

Process. I'm running out of time, so let me-- my point is, the second purpose of law is to provide essential process. You can't have process without structure and organization. And here, if I were to pass one law right now, it wouldn't be a substantive law. It'd be a structure for the United States government to organize itself around AI.

Right now, there is no structure. The lead bill in Congress creates an advisory committee to advise on what the structure should be. This train has left. Time to get organized and actually make decisions. I've lived through this with the cyber thing. In 1996, we were debating who should be the lead agency in the government for cyber.

What authority should they have? And these are the same debates we're having now. Skip the debate part. Get to the decision part. And the question is how to do it. And if we don't do it, we're going to have a DoD-centric AI policy, which is great if you believe that national security should drive national AI policy.

But if you believe that all the equities of the United States should come to bear into the AI equation, you want a more unified decision mechanism. Who's going to go out to Silicon Valley and talk through these issues, and do so in a credible manner? My first pick would not be the Director of National Intelligence or the Secretary of Defense. It'd be someone who could speak for all of the United States government. That's my point.

So I have some suggestions here how to do that, if you're interested, or not. So then substantive authority to act, right and left boundaries. The lead law right here is the Stored Communications Act from 1986. I'll give you a hint. It's not up to date, and it will not be up to date for AI. Don't respond in this area with substance.

Respond with process, because law will always chase technology. The law never can keep up with Moore's law, for example. Note the use of law-- that was very clever. That's an MIT joke, and I'm very pleased with myself.

What statutory law provides the substantive authority? I'm talking about the Defense Production Act. How many of you have heard of that act? Good. You are now one of the leading experts in the United States on AI law, because that's the essential act, in some regards.

Why? Because partly that's where you get CFIUS from, the Committee on Foreign Investment in the United States. As you read in the paper today, one of the issues is whether a Singapore-based chip company can acquire a US-based chip company, and whether there are national security impediments to doing so.

That's CFIUS. That's the Defense Production Act, and I can talk about that all day long if you'd like, but the act has not been updated to address AI. And if I wanted something from the technologists in this room, among other things, I would want to know, what should we be worried about in the AI sphere?

I get it, if the foreign company wants to come in and build a factory next to the DoD base. I get it, if they want to buy Northrop Grumman's aviation component. I need to know what I should be worried about in AI.

So I have a whole section on arms control, but I'm going to skip that so that we can get to Q&A. And then I want to leave you with a couple of thoughts. The reason this presentation, in theory, was called A Dangerous Nonchalance is because here we are, with what the Belfer study said was probably the most transformative military technology, as much as aviation, and as much as nuclear weapons.

And it's not what we're studying and talking about. I can guarantee you it's not what people are learning about and studying in national security law courses in law school. And if you will recall a time during the Cold War-- and I grew up in Cambridge. You couldn't graduate from fourth grade to fifth grade without being able to talk nuclear doctrine and throw-weight, right?

Everybody knew it. This is what you grew up with. Nobody's talking about AI doctrine, and such. We should be. When will AI exceed human performance? Remember, this is the artificial general intelligence. A group out of the Future of Humanity Institute did a poll and asked that question of AI researchers, and industry people and academics, two years ago.

The median answer for AI-ologists from China was 28 years. The median answer for American specialists in AI was 76 years. They're optimistic about something that we're not as optimistic about, but everyone's optimistic in the field about getting to this artificial general intelligence.

We need to talk and bridge now, government to industry to academics, technology policy to law, and we need to do so in plain English. I asked someone at the Future of Humanity Institute to explain to me how you could do a due process algorithm to make something fair. And they gave me a 40-page article with math equations. I can't explain that to anybody, including myself.

If you're a technologist and you actually want to influence policy, and shape it, you need to speak and write in plain English. So that's one thing. We need to make purposeful decisions and we need to ask the big questions now.

One of the big questions, for example, is what is the responsibility and role of a US corporation? There is no common understanding, as there once was. I'm not saying there's a correct answer. I'm saying, right now the answer appears to be default, company by company, and commercial interests.

But we can't have that fight now, every time we have a national security San Bernardino iPhone issue. Do corporations have a duty, a higher duty, in the United States, to national security and to the common good? The answer might be no, no, no, it's all about shopping algorithms, and that's just fine. But we ought to purposely make that decision and make that an informed choice.

So that's it for now. I do want to leave time for Q&A, or at least one Q and maybe an A. So thank you very much for your time and attention. I hope that's not your question. OK. All right. Are there any questions? Yes, please.

AUDIENCE: So Jamie, you began your talk by noting that AI was going to have effects--

JAMES E. BAKER: Hang on. You violated Rule 9, name, rank, and serial number.

AUDIENCE: Kenneth Oye, Program on Emerging Technologies, MIT. So Jamie, you began your talk--

JAMES E. BAKER: Trying to distract them right now. OK. That's chaff.

AUDIENCE: You'd expect that AI would have effects that were on a par with, or even greater than, long-range aircraft and nuclear weapons.

JAMES E. BAKER: I said the Belfer study concluded that that would happen.

AUDIENCE: Ah, notice the careful distancing of the good word. If you go back to 1945--

JAMES E. BAKER: Right. One of the things I'm trying to do-- I want this debate-- so I don't really care what the answer is. I care that we have an informed debate about the answer. And if the only thing out there is the Belfer study, and everybody is like, OK, thanks, let's move on to shopping algorithms, not a good result. OK. So go ahead.

AUDIENCE: If you go back to 1945, expectations on the effects of those technologies were really bad. That would be talking about being eaten, or eating. And, consequently also, in domestic politics, so you had McCarthyism and other fear-driven domestic politics taking place.

What happened was more benign, over the long term, with the emergence of a period of-- I wouldn't call it peace, but the absence of central systemic war, in part by virtue of the technologies and the doctrines associated with them.

JAMES E. BAKER: OK. Question now.

AUDIENCE: Question. Can you give us ways, or discuss how we might avoid potential bad things, recognize on certain wrong predictions, and develop those doctrines that we're talking about?

JAMES E. BAKER: Sure. So thank you. One thing, or five things, I didn't say by skipping the arms control section was, one, the doctrine that we now-- we'll just stipulate. You don't have to agree with this-- but the doctrine that at least provided stability, and if not having nuclear war is the measure of success, was successful, known by some as mutual assured destruction, and by others as assured destruction.

By the way, the Chinese never bought into either concept. They simply said, we have enough missiles to make you think and know that you will get hammered. That's enough. And the number of ICBMs they've had-- obviously, they're not in the business of telling you exactly how many, where, or when-- was, at the beginning, in the 20s and 30s, and then at most in the 200s, based on open sources-- I don't know anything, and you don't know.

So you don't have to buy into complete doctrinal unification, but here's what I'd say about the doctrine piece. At the end of the Cold War, we sort of accepted and assumed no one would do first use. Even if you had a declared first-use policy or you didn't have it, it was sort of understood you wouldn't.

Of course, we didn't know about the Soviet Dead Hand at the time, which sort of undermined the whole concept of mutual assured destruction. And secondly, if you go back and read the development of the doctrine, and here, I encourage you to read Fred Kaplan's wonderful book, The Wizards of Armageddon, we didn't wake up in 1947 with mutual assured destruction.

We woke up in 1947 with Curtis LeMay, and what did you get with Curtis LeMay? You got a policy of first use, as in, nuclear weapons weren't nuclear weapons. They were a better weapon, a more powerful weapon. And secondly, Robert McNamara, not everybody's favorite secretary of defense-- got that-- but when he came in as secretary of defense, he asked to see what was then SIOP-62, the Single Integrated Operational Plan, which is the nuclear response plan.

And he was stunned to find a couple of things. One, he found it was a plan with essentially one option. Like, pick from the following options: A. And he was like, A? And what was A? A was, nuke the Soviet Union, and as an added bonus, all of China too. And he was in the Tank.

And he said, China? And David Shoup, the commandant of the Marine Corps, was the only of the chiefs who said, what's China have to do with a Soviet first strike? We're going to kill 150 million Chinese that have no connection at all to the Soviet action?

That's what was in the plan. So doctrine looks OK, maybe possibly good, at the end. But doctrine's not born correct. So one of my things is, if you're writing doctrine when you deploy the system, whatever the system is, you got a problem. You got to think that through now, and you got to go through your 'let's nuke China' moment without it being on the table.

I'm joking. That's a joke for the record. That was national security humor. I do not want a repeat of the SIOP-62 briefing to McNamara. I want it to be nuanced and informed, and AI is going to speed everything up. So doctrine is going to be even more important.

At least with nukes, you had 22 minutes to have a very thoughtful and deliberative process. Not necessarily so with AI. So one doctrinal development-- and this is something you can do here-- you can build in, and again, I'm out of my league, but I would want to know whether there are technological ways to build in fail-safes and circuit breakers, so that we can choose to have them or choose not to have them.

But right now, I can describe that concept. One of the key concepts in the area of AI is man in the loop. And everybody is going, oh, you need to make sure there's a man in the loop. Well, that doesn't mean anything. Try putting that into an arms control treaty. There shall be a man in the loop.

They had years. It took them years to negotiate what a delivery system was in the context of arms control. That's easy, right? It's a bomb, or it's an ICBM, or it's a Polaris missile off a sub. That took three years?

Try defining man in the loop. That's the sort of thing we should be doing now. What does it look like and what are the options, so that when we agree that the law should say there shall be a man in the loop, or the two countries or the five countries agree, there should be a man in the loop, we know exactly what we're talking about. Is this sort of responsive?
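[Editor's note: to make the definitional problem concrete, here is one purely illustrative reading of "man in the loop," sketched in Python. Nothing below comes from the talk, and every name in it is hypothetical; the premise is simply that an automated system may recommend an action, a named human must affirmatively approve it, and anything short of approval defaults to inaction. Even a toy version like this surfaces the questions raised above: who counts as the human, what they must be shown, and whether defaulting to inaction is workable at machine speed.]

```python
# Purely illustrative sketch: one possible reading of "man in the loop."
# The automated system may only recommend; a named human must type an
# explicit approval, and anything else defaults to "do not act."
from dataclasses import dataclass


@dataclass
class Recommendation:
    action: str        # what the automated system proposes
    confidence: float  # the system's own confidence score, 0.0 to 1.0
    rationale: str     # explanation shown to the human reviewer


def require_human_approval(rec: Recommendation, approver: str) -> bool:
    """Return True only on an explicit 'YES'; anything else blocks the action."""
    print(f"[{approver}] Proposed action: {rec.action}")
    print(f"  Confidence: {rec.confidence:.0%}  Rationale: {rec.rationale}")
    answer = input("Approve? Type YES to proceed: ")
    return answer.strip() == "YES"  # the default is refusal, not action


if __name__ == "__main__":
    rec = Recommendation("flag shipment for inspection", 0.87,
                         "pattern match on manifest data")
    if require_human_approval(rec, approver="duty officer"):
        print("Action executed.")
    else:
        print("Action blocked: no affirmative human approval.")
```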

AUDIENCE: Yes.

JAMES E. BAKER: And I had some other really interesting point, but I lost it.

AUDIENCE: An accentuating defense of the [INAUDIBLE].

JAMES E. BAKER: Yeah, but hard to do.

AUDIENCE: I understand.

JAMES E. BAKER: Yeah, hard to do. Yes, sir?

AUDIENCE: My name is [INAUDIBLE]. I'm from the law school.

JAMES E. BAKER: MIT has a law school?

AUDIENCE: No, the other one, the--

JAMES E. BAKER: Oh, BC?

AUDIENCE: The other end of the room. The question is perhaps blasphemous in this context of American national security people, but I come from Europe. And there is some discussion in Europe-- it's a broad question-- that the United States is not as reliable anymore as it used to be. And there's also now a question about the very companies and the very technology that you outlined as strategic and important, and whether the Europeans, first of all, need to compete more effectively, which is, I think, a lost cause, but also that they are now restricting the transmission of data to Europe.

And I'd be interested in whether the [INAUDIBLE] protection regulation that's now being discussed in the European Union, and these issues with the European Union more generally, inform your thinking on this at all.

JAMES E. BAKER: Yes. Next question, please. OK, fair enough. Let me just add the one thing I just remembered. One of the areas we can develop doctrine in is the principle of command responsibility. So the law of armed-- I'm not ducking your question. I'm just, I want to clean up my mess over here.

The law of armed conflict provides some parallels which, I think, could usefully be applied to AI-- not only because AI could be a weapon system, but to control it, to give it structure and governance.

So the doctrine of command responsibility, among other things, holds commanders responsible for things their units do that they should have known they were doing, whether they intended them to be done or not. This is the Yamashita principle from the post-World War II war crimes trials. It's one of the fundamental principles to come out of the World War II-era law of armed conflict, which is, you are accountable for what your troops do and fail to do, whether you intended it or not.

And there are other aspects to it. There's a component of civilian command responsibility as well, which came out of the Balkan conflicts. So the point is not that it applies directly-- you would have to apply it mutatis mutandis, with the relevant parts adjusted, to make it work-- but there's a principle there that could be applied.

There's also a principle in the law of armed conflict that requires-- and 100 out of 100 lawyers would agree to this, and 100 lawyers don't agree on anything-- that any new weapon has to be reviewed for compliance with the law, and that review covers not just the weapon itself but also the means and methods by which it will be used.

Why not apply and adopt that, in some manner, to other aspects of AI? It could be done. And then the law of armed conflict also requires training, and this is, in fact, done, and you can say, is it done successfully? That's a different question. In the US, generally so, but you could pass principles and rules that required training on AI if you're going to use it.

Again, it doesn't fit perfectly, but those are the kinds of things that we can talk about and develop now, and that you cannot do in the heat of the moment. So, on data: one of the drivers of AI is, of course, data. And that has a lot to do with machine learning, and also pattern recognition, and I'm not wonderful at math, but obviously, the more data you have, the more accurate your predictions are going to be, right?

If you only have one person who's bought underwear and socks together, the algorithm isn't going to push underwear and socks to you every time you buy t-shirts. But if that's the pattern across millions of sales, then your algorithm is going to predict it.
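[Editor's note: the co-purchase point can be made concrete with a short, purely illustrative sketch that is not from the talk. It counts how often items are bought together across many orders and recommends the most frequent companions; with a single order the counts are noise, but across millions of orders the same counts become a usable prediction.]

```python
# Purely illustrative: count item co-purchases across orders and recommend
# the items most often bought alongside a given item.
from collections import Counter, defaultdict
from itertools import combinations


def build_copurchase_counts(orders):
    """orders: iterable of item lists, one list per sale."""
    counts = defaultdict(Counter)
    for order in orders:
        for a, b in combinations(set(order), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    return counts


def recommend(counts, item, k=2):
    """Top-k items most frequently bought together with `item`."""
    return [other for other, _ in counts[item].most_common(k)]


if __name__ == "__main__":
    orders = [
        ["t-shirts", "underwear", "socks"],
        ["t-shirts", "socks"],
        ["t-shirts", "underwear"],
        ["jeans", "belt"],
    ]
    counts = build_copurchase_counts(orders)
    print(recommend(counts, "t-shirts"))  # e.g. ['socks', 'underwear']
```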

With Facebook's algorithms-- likes and dislikes-- there have been studies indicating they're better at predicting your social behavior and your purchases than even your spouse is, and that's based on the volume of data. And the Chinese have regulated their data, to a certain extent.

Now, do you want me to talk about US v Microsoft? Because there are all sorts of places to go with this.

AUDIENCE: You're the boss.

JAMES E. BAKER: I'm the boss. I mean, you ask. What do you want me to talk about? Because Europe and the US have very different concepts of privacy. And when I talked about the Fourth Amendment as a value, and what was reasonable and unreasonable-- and I'm going to grossly simplify, so for the record, I'm aware I'm grossly simplifying-- US concepts of privacy generally run us versus the government, right?

The privacy line is between what the government knows or has on you and what everybody else does. That might have made sense when Katz v. United States was a case about putting suction cups on a telephone booth and listening, right? But it doesn't make as much sense when people are sharing everything they're sharing with Google, or whatever-- fill in your favorite place.

And using link analysis and algorithms, you can figure out where you worship, what you had for dinner, who you're married to-- virtually everything about you-- and you're worried that the government might get that to prevent terrorism? I mean, come on.

The European model, grossly simplified, is that privacy is about your individual right vis-a-vis the rest of the world, not just vis-a-vis the government. So, for example, the right to be forgotten-- to have something erased from the internet. It's not the government's database you want it out of; it's the internet itself.

And if you talk to the people at Facebook, for example, who handle the legal interactions around the world: in certain states, they have to allow certain information out, and in certain states they have to block it.

And this gets to the corporate responsibility thing, which is, and not to put too fine a point on it, if China is doing a social accountability program, do you have an obligation, because you want to have access to 800 million consumers, to give them all the information they want to get?

And should you? And in Europe, they would never allow that kind of personally identifying information to be shared in that manner. So these are apples-and-oranges concepts. Both have the concept of privacy, and, to a certain extent, of government regulation and control.

And then it emerges in the US v Microsoft case, which is currently pending-- oral argument was last week in the Supreme Court, and could I just generally say, and I think I'm entitled to say this as a former judge, we did a lot of Fourth Amendment law.

Why did we do a lot of Fourth Amendment law in my court, probably more than any other court in the United States? Because of child pornography. Who looks at child pornography? 18 to 40-year-old men. Where have we collected 18 to 40-year-old men in one place and given them access to computers and the internet? The military. And in the military, you'll get caught.

And then there's a cat and mouse game, right? You get caught once looking at it on your work computer-- and that's the lance corporal you don't want anyway, because they're so dumb they'll fire the weapon the wrong way-- so everybody is then figuring out ways to hide the child porn in slack space on computers, all this unallocated space.

And so, how to say this politely with great respect for our judicial system? A 57-year-old judge is not going to be on the cutting edge of social media knowledge and technology. QED-- that's an MIT thing, right?

And so I cannot impress on you enough the importance, if you're in the government or in academia, of articulating to a larger audience, in plain English, why principles like the third-party doctrine do or do not apply to new technology. Because if you don't do that, the Supreme Court will, or my court will, and common law courts know to walk in baby steps, as Learned Hand said.

But supreme courts don't because they're supreme. And I don't know this, but I can be pretty confident, when they're deliberating US v Microsoft, they're not talking about-- but what about machine learning and the future of AI? Actually, the US v Microsoft case is not going to be that exciting, I would project, because the real principle in that case is, does the Stored Communications Act-- so I'll give you two seconds on the facts.

Drug investigation in the United States, a drug dealer, alleged drug dealer, who's being investigated, has e-mails and other data on his computer. But the data is stored by Microsoft, not in the United States, but in Ireland. And this is normal process, because you go where the computing capability is. This is cloud computing, right?

And so you can bet a lot of it's in Iceland, for energy reasons, and so on. And now you can also bet that companies understand that if they move their data all over the world-- and I think Google actually splits it up. Microsoft, at least, puts the whole message in Dublin, but Google is like brzzt!

And so if you're investigating a drug, a narcotics thing, and you want the guy's email, which is normal-- there's a search warrant. There's probable cause. So this is not a question about whether there's a Fourth Amendment issue-- they then go to Microsoft and say, congratulations, you have a subpoena. Please produce these things.

And the question is, is that in the United States or is it in Ireland? And the normal process by which you get evidence overseas is through the mutual legal assistance treaty mechanism. One of my first jobs at the State Department, among others, was to be the MLAT guy, and it literally involves bows-- like tying ribbons on packages. It takes forever.

It is a lousy way to go about getting evidence, but it is the agreed-upon mechanism. Why? Because MLATs are based on treaties, and when you agree to the treaty, you can agree that that country has a court system that is fair enough, and close enough to what we would expect, that we would share information with them.

So we wouldn't share certain information with the Soviet Union-- I mean that historically-- because we knew it would be misused, and so on. So what happened in the US v Microsoft case is, Microsoft came back and said-- well, first of all, the district court upheld the magistrate's order and said, produce the information.

The Second Circuit then split 4-4 on whether they had to produce the information, and the case is pending now. It was argued before the Supreme Court. Why do I say 4-4, and why is that important?

This is what happens when you use litigation to decide policy. You get 4-4 splits, and it's very, very hard to have a national policy if the Second Circuit has one view of the law, and the First Circuit another, and so on. And so the Second Circuit essentially punted, and then moved it up to the Supreme Court, but the issue there is really whether the Stored Communications Act has extraterritorial effect.

So people are all very excited about what's going to be decided here, as if it's going to resolve all future data issues. And the Supreme Court will likely just say: the general principle involved with extraterritoriality is that unless Congress has expressly stated that something applies extraterritorially, the presumption is that it does not, because you do not want to inadvertently create foreign relations friction unless Congress has intended that you do so.

So the Supreme Court could duck the issue, and it's not really a duck. It's answering the legal question presented, which is, does the Stored Communications Act apply? That was an example of a law that requires a certain mechanism and allows for the subpoena of particular evidence, and the question is, does it apply overseas?

So the Supreme Court could come back and say, nope, it doesn't, but Congress could certainly pass that law. And there is, as I'm sure you know, a bill pending which would deal with the issue by shortcutting the MLAT process. Ireland and the UK, for example, could go directly to a US carrier, in certain situations, and not have to go through the MLAT process.

You see how complicated this stuff is? And why you can't decide it at the last minute. And you don't want it decided in litigation. Yes, sir?

AUDIENCE: I'd be interested in your opinion on the patent office playing a collective role in the extraterritorial use of patents that are granted in the artificial intelligence world.

JAMES E. BAKER: All right. It's fair to say I do not have a particularly informed view. You can guide me, if you would like. There is something called the Invention Secrecy Act, which is a Cold War-era act that I have not yet looked at with great care, but intend to.

But the premise behind the Invention Secrecy Act-- and a lot of these laws, the Defense Production Act, the Invention Secrecy Act, were passed in a Cold War context; the DPA, the Defense Production Act, was passed in 1950-- was a unity of role, a shared understanding of what a corporation in the United States ought to do, which probably doesn't exist now.

The DPA is renewed every five years, so there's a vehicle to address these issues, but we choose not to use it. Now, the Invention Secrecy Act says that if a request for a patent is filed, and it is one that has national security impact, and it is a patent that the United States government does not want to see on the market, or that it believes needs to be subject to security regulation--

So let's say it's a patent to build-- I'm being silly, but this is what they had in mind-- a patent to make a better nuclear weapon, right? You don't want to let someone publish a patent that says, forget plutonium, forget uranium-- it turns out you can do it with apples.

That's a patent you don't want out there. Great, you're going to publish that now? The Invention Secrecy Act says, no you're not. And the Takings Clause says, you will now get just compensation for us taking your patent and sitting on it. Thank you so much for bringing it to our attention.

And what I don't know-- and thank you for the question-- what I don't know, and it's a knowable fact, I just don't know it, is how often, if at all, the Invention Secrecy Act has been invoked, and for what.

AUDIENCE: About 6,000.

JAMES E. BAKER: 6,000 what?

AUDIENCE: About 6,000.

JAMES E. BAKER: Covered patents? I've done detailed research on this topic. And it turns out that there are about 6,000 so covered patents. To be more precise, 5,000.

AUDIENCE: Thank you.

AUDIENCE: What? Wow.

AUDIENCE: Only five [INAUDIBLE].

JAMES E. BAKER: Well, that's one of the issues with the Takings Clause, and this comes up in eminent domain, which is, what is just compensation? Guaranteed that comes up, and so, of course, the government always has the--

So here's how it would come up in this context. Here's this AI thing, and I'm the inventor, and I say, this is going to be the best algorithm in the history of the world, and it will make $700 billion.

And the government says, this is an invention that is really an intelligence aggregation device, and it's going to make nothing, and it never did sell anything. Prove to us that you would have made any money-- and you can't prove it. You're like, well, I never sold it to Microsoft, and I never sold it to Amazon. So that's worth $10.

And so there's a compensation issue, just like when they take your back yard to put the highway in. You say, nobody's going to ever buy my house. The house was worth a million dollars, and it's in Cambridge, so it's worth $10 million. And you took my backyard, so now it's worth nothing, because nobody wants to live on a highway.

And the government says, no, no. That's one-eighth of an acre and one-eighth of an acre in Cambridge is worth $7. It's not the house we're taking. It's just this one-- so you get into all these just compensation issues. Believe me you'll get into them with an invention secrecy issue, because it's prospective.

So that's all I got for you on that, because it's a fair question, and I will continue my research offline. Yes, ma'am? And then over here.

AUDIENCE: I want to ask you a question about your last statement, in view of your previous answer to the question from the law school. It's not a great example, but it's one that you keep hearing: the people who build nuclear power plants are not responsible for nuclear weapons, and so forth.

In your social responsibility, why do you feel that AI companies have more responsibility than nuclear companies?

JAMES E. BAKER: I don't. My goal here was to identify it as an issue, not to answer it. So the question is, should an AI company have more social responsibility as a corporation than a nuclear energy company, or than any other kind of company? And I didn't come here today to espouse a particular view, other than that I don't want to resolve that in the context of the FBI-Apple litigation.

I want to resolve that in the context of dialogue between Silicon Valley and the government, and academia, in the calmness of a private room, without posturing, and without litigation at stake. That's all. I want to hear the arguments. I switched my view on encryption, for example, by listening to the arguments.

And if I'd just come out swinging in litigation, I would have come out with one view on encryption, but having listened to the arguments, and done so out in San Francisco, I came away in a very different place on encryption. And that's what I'm asking for: not a particular outcome, but a purposeful and informed outcome.

So we have a question here, and then a question there. I'm happy to stay. I once did five straight hours of Q&A, so you're not going to wear me out, but there may be rules. So yes, Perry.

AUDIENCE: I'm Perry [? Shtotson. ?] I'm an educator, and so I consider this kind of the NPR question, meaning, could you-- for the intelligent normal person driving in their car trying to understand all of this-- say, at this juncture, and it seems very urgent and critical, given that-- if you could go to Washington-- and you're very experienced in Washington-- and you could get, around this table, the people from government, and industry, and perhaps some ethicists and humanitarian people interested in the well-being of the planet, who would you have at the table? And what would you tell them, and what would you do? Who would you want to discuss this?

JAMES E. BAKER: Right, right.

AUDIENCE: Is that too general, James?

JAMES E. BAKER: No, it's fairly general, but ultimately, this is a conversation that needs to occur. If it's going to find its way into US law and policy, it has to be a conversation that includes Congress. It has to be a conversation that includes the President of the United States, which is a complex thing, I understand. And it's hard to avoid it, but this is not the time to say-- it's an awkward time to say, we need the government involved.

But we need the government involved. The Obama administration was on the path toward creating a structure, but did not complete the process. So I'd have the chairmen of relevant committees, and the problem is, this is not one of those things. It's like trying to explain the law of armed conflict to a commander at 2:00 AM when you have 17 seconds to make the strike call. You're not going to get the opportunity to say what you need to say. You have to have done it ahead of time.

And so my sense of urgency is not one of imminent urgency. The imminence is trying to trigger the process, not resolve it, because heck, we have somewhere between 26 years and 74 years, depending on how you look at it. Now I'm joking, but so the president, members of Congress.

Silicon Valley is a metaphor. I understand Microsoft is not in Silicon Valley. I get it. A lot of Google's work on AI is happening in Pittsburgh. That's where the driverless car program is. So those are the people. And then academics, and especially academics with oversized voices, whether they should have oversized voices or not.

If you're from MIT, you have a louder voice than if you're from some other school, to be perfectly honest. I don't know how to say that other than to say it that way. And here's an example of what happens when you don't have this conversation. When you don't have this conversation, all of a sudden, you're looking at a tax bill that doesn't understand the importance of fundamental research.

And there's nobody better at articulating that than the president of MIT, but it's too late to articulate it the night before the bill's going to get passed. That is not how policy works. You've got to be relentless, and say it often-- often and early, like Curley.

And so that's the third prong of the conversation. There are people who are looking at this. It's a fascinating topic for me, because it's more complex than any topic the government has addressed before. It's much more complex than climate change. It's more complex than Ebola. It's more complex than Deepwater Horizon, all crises the government was not organized or prepared for, and that were anomalies, or black swans as they say.

This is the deluxe. This has everything, which is both why it's interesting and why it's so hard. It has everything, starting with an inability to even say what it is. And then secondly, it cuts across horizontal organization. It cuts across vertical organization. It has an academic component, like others do not have. And so it is that much harder.

So I don't know if that's a full answer, but it's a fascinating problem, and it's got to be someone who can speak with the authority of the President of the United States, because if you go to Silicon Valley and say, here's the deal we want to cut, or here's our concern, or trust me, this really isn't about blah, it's useless if that has no standing.