The Pentagon Wants to Streamline Security Clearances by Using A.I. That’s a Dangerous Idea.

John Bowers:

This piece was originally published on Just Security, an online forum for analysis of U.S. national security law and policy.

In June, the White House announced that the government’s security clearance program, including for individuals in civilian roles, would be consolidated under the Department of Defense.

This reorganization, largely motivated by an enormous backlog of clearance investigations, is aimed at streamlining the clearance process, and in particular the “reinvestigation” of individuals with clearances that require periodic review. At the core of these new efficiencies, the DOD claims, will be a “continuous evaluation” system that autonomously analyzes applicants’ behavior—drawing on data sources such as court records, purchase histories, and credit profiles—to proactively identify security risks. The rollout is already underway: The DOD had enrolled upward of 1.2 million people in continuous evaluation as of November. But the program is far from uncontroversial, raising credible privacy concerns and drawing objections from organizations including the Consumer Financial Protection Bureau. As the DOD takes over millions of new civilian clearances, these worries will find a broader audience.

And, thanks to machine learning—a family of techniques that allows an A.I. system to learn from examples rather than being explicitly programmed—it seems that things may soon get a lot more complicated.