Van Hollen Joins Markey, Wyden in Demanding Information on Government’s Use of AI and Other Technology to Label People as National Security Risks

Today, Senator Chris Van Hollen (D-Md.) joined Senators Edward J. Markey (D-Mass.), a member of the Commerce, Science, and Transportation Committee, and Ron Wyden (D-Ore.) and their colleagues in sending two letters about the government’s use of artificial intelligence (AI) and other technologies to determine whether an individual poses a national security risk.

In one letter, the lawmakers write to Secretary of State Marco Rubio and Secretary of Homeland Security Kristi Noem, urging the Trump administration to reverse its decision to expand its social media screening of visa applicants. Those policy changes appear intended to chill dissent, discriminate against particular viewpoints, and punish individuals for speech the Administration finds objectionable. In a second letter, the lawmakers ask the Government Accountability Office to investigate how the Department of Homeland Security and the Department of Justice are using AI technologies to label individuals as potential threats to the public, including through automated analysis of content people post online. The letters were also signed by Senators Cory Booker (D-N.J.) and Peter Welch (D-Vt.).

In their letter to Secretaries Rubio and Noem, the lawmakers write, “Although the national security benefits of social media screening may be unproven, the costs are very real. The wide-scale collection of social media information violates the free expression rights of foreigners and American citizens, infringes on applicants’ personal privacy, creates unnecessary processing delays, and creates risks of abuse and discrimination. … Even in an administration intending to conduct social media screening in a fair and unbiased manner, the risks of mistakes are high. In an administration with malign intentions, these social media screening tools guarantee abuse.”

The lawmakers continued, “We are deeply concerned that State and DHS’s respective new policies around social media screening are a thinly veiled effort to discriminate against visa applicants and other noncitizens seeking to pursue their studies or obtain asylum or lawful residence in the United States.”

The lawmakers request answers by July 9, 2025, to questions including:

  1. Please provide any studies, analyses, audits, or other examination of the social media collection, screening, and vetting programs at State or DHS conducted between December 15, 2015, and the date of this letter.
  2. Is the State Department or DHS using artificial intelligence (AI) or any other automated system to collect, process, analyze, or otherwise review information collected from social media accounts of visa applicants and applicants for an immigration benefit?
  3. How many visa applicants or individuals seeking an immigration benefit have had their application denied solely or primarily due to the social media screening and vetting process, including those denied for failing to provide a social media identifier? 
  4. Please provide any State Department and DHS memos, guidance documents, or other written policies intended to guide career staff in interpreting social media indicia for a visa applicant or applicant for an immigration benefit.
  5. Has the State Department, DHS, or any other agency or component conducted any legal analysis or First Amendment review of the March 25 State Department memo or the April 9 DHS announcement?
  6. What safeguards, if any, are in place to ensure that personal bias, political viewpoints, or cultural misunderstandings do not influence visa adjudications or immigration benefit decisions based on social media content?
  7. Did the State Department’s Office of Civil Rights or DHS’s Office for Civil Rights and Civil Liberties or Privacy Office review the respective policies before their implementation?

In their letter to the Government Accountability Office (GAO), the lawmakers raise serious concerns about DHS and DOJ’s use of “technologies that make dubious automated inferences about individuals’ emotions, attitudes, and intentions,” including the administration’s deployment of “AI to scan the social media accounts of tens of thousands of student visa holders and flag some as supposedly supporting terrorist organizations.”

The GAO letter is also cosigned by Representatives Bennie Thompson (D-Miss.) and Pramila Jayapal (D-Wash.).

The lawmakers write: “It is particularly dangerous to use AI for inferring mental states in law enforcement contexts, where false positives can subject individuals to baseless investigation and detention. Furthermore, since many criminal statutes require proof of intent or other state of mind, using AI in this way could lead prosecutors to bring more severe charges against individuals on the basis of pseudoscientific evidence. This technology is also ripe for deliberate abuse, providing a pretext for government officials to target groups they disfavor.”

The lawmakers request that GAO produce a report that addresses questions including the following:

  1. How many people have been the subject of an automated analysis conducted by DOJ or DHS personnel using AI technologies that infer people’s emotions, attitudes, or intentions?
  2. What kinds of law enforcement actions have been guided by DOJ and DHS personnel’s use of these technologies?
  3. What tests of these technologies did DOJ and DHS conduct before using them for law enforcement purposes?
  4. What DOJ and DHS policies govern the uses of these AI technologies to prevent violations of due process, freedom of expression, equal protection, and other constitutional rights?

The full text of the letter to State and DHS is available here and below:

Dear Secretary Rubio and Secretary Noem,

Recently, the State Department and Department of Homeland Security (DHS) have sought to expand the role of social media screening in consular and immigration decisions. This policy change threatens to weaponize online speech against immigrants and foreign nationals, granting government officials broad and ill-defined authority to penalize individuals for their expression. Social media posts — often taken out of context and stripped of nuance — are an unreliable basis for such high-stakes determinations. Moreover, the government has never identified any evidence that these screenings help protect national security. In practice, this enhanced social media review appears designed to chill dissent, discriminate against particular viewpoints, and punish individuals for speech the Administration finds objectionable. We urge you both to immediately reverse this latest Trump administration attack on visitors to the United States and immigrants.

Over the past decade, State and DHS have increasingly experimented with using information gleaned from social media in consular and immigration decisions. In December 2015, DHS launched a task force to assess its social media policies and capabilities. Over the following year, U.S. Citizenship and Immigration Services (USCIS), U.S. Customs and Border Protection (CBP), and U.S. Immigration and Customs Enforcement (ICE) completed at least seven pilot programs to review DHS’s ability to conduct large-scale social media screening.

This use of social media information accelerated during the first Trump administration, with the rollout of its “extreme vetting” program. In May 2019, the State Department began requiring almost all visa applicants to provide their social media identifiers, and in September 2019, DHS proposed collecting social media information from applicants for immigration benefits, a major expansion of its social media vetting program. Although the Biden administration declined to proceed with DHS’s 2019 proposal, it maintained the State Department’s social media screening requirements. Most recently, on March 5, 2025, the Trump administration picked up where it left off, with USCIS seeking to expand the collection of social media identifiers on immigration forms. Consequently, with a few exceptions, the past four administrations have seen a steady increase in social media surveillance at State and DHS.

The federal government, however, has provided no evidence that wide-scale social media screening improves national security. As far as we know, neither State nor DHS has released any report or analysis proving the effectiveness of social media screening. In fact, the little public information available — obtained through Freedom of Information Act (FOIA) requests by civil society organizations — suggests that social media screening is ineffective. For example, in a 2016 transition memo, USCIS acknowledged that, in its pilot programs, social media vetting had not been used “solely or primarily” to deny any immigration benefits and that “authenticity, veracity, social context, and whether the content evidences indicators of fraud, public safety, or national security concern are often difficult to determine with any level of certainty.” Additionally, USCIS concluded that social media screening and vetting was “labor intensive” and “divert[ed] [USCIS personnel] away from conducting the more targeted enhanced vetting they are well trained and equipped to do.” Another FOIA release, obtained in October 2023, included an undated assessment by the National Counterterrorism Center acknowledging that social media screening had “very little impact” on screening accuracy. And in a New York Times report on that FOIA release, an unnamed senior administration official “agreed that collecting social media data had yet to help identify terrorists among visa applicants.”

Although the national security benefits of social media screening may be unproven, the costs are very real. The wide-scale collection of social media information violates the free expression rights of foreigners and American citizens, infringes on applicants’ personal privacy, creates unnecessary processing delays, and creates risks of abuse and discrimination. For example, a lawsuit filed against the State Department in 2019 documents how foreign filmmakers have limited their speech on social media or declined to seek a U.S. visa due to the social media screening requirement. This chilling effect also impacts Americans, who are unable to communicate with foreign friends and family who withdraw from social media and whose own communications with foreign visa applicants could be swept up in the screening and vetting process. Additionally, because content on social media is context- and relationship-dependent, it can easily be misinterpreted, creating significant risks of bias or discrimination. Even in an administration intending to conduct social media screening in a fair and unbiased manner, the risks of mistakes are high. In an administration with malign intentions, these social media screening tools guarantee abuse.

Based on its actions over its first few months, the Trump administration clearly falls into the latter category. On March 10, the U.S. government wrongfully removed Kilmar Abrego Garcia from the United States to a notorious prison in El Salvador. Despite the Supreme Court’s upholding a lower court order requiring the Administration to “facilitate” his return, the Trump administration refused to do so for months. Ten days later, six plainclothes ICE officials detained Tufts University student Rümeysa Öztürk and transferred her to a Louisiana detention facility, even though the State Department had determined — days before her detention — that it lacked evidence to revoke her visa. On April 14, ICE agents detained a lawful permanent resident of ten years at what he thought was a naturalization appointment. The same day, President Trump called for deporting American citizens to El Salvador. These are actions of an authoritarian government, not a constitutional democracy.

For that reason, we are deeply concerned that State and DHS’s respective new policies around social media screening are a thinly veiled effort to discriminate against visa applicants and other noncitizens seeking to pursue their studies or obtain asylum or lawful residence in the United States. On March 25, the State Department issued a memo with new policies governing “Enhanced Screening and Social Media Vetting for Visa Applicants.” Under those new policies, State officials are required to review the social media posts of all applicants granted a U.S. student visa between October 7, 2023 and August 31, 2024. Additionally, the memo warns that “conduct that bears a hostile attitude towards U.S. citizens or U.S. culture (including government, institutions, or founding principles)” may be evidence that an applicant advocates for terrorism and therefore is ineligible for a U.S. visa. A few days later — on April 9 — DHS announced that USCIS would begin screening the social media accounts of individuals applying for an immigration benefit for “content that indicates an alien endorsing, espousing, promoting, or supporting antisemitic terrorism, antisemitic terrorist organizations, or other antisemitic activity.” DHS has not provided any additional information about how it intends to conduct this social media screening.

The vague language in these new policies gives unchecked discretion to State and DHS officials, creating serious risks of abuse and discrimination. The State policy says nothing about the type of content that could demonstrate a “hostile attitude” towards U.S. culture or founding institutions, terms that are hotly disputed. That language gives a consular employee nearly carte blanche to reject a visa application. DHS’s press release is similarly vague. Far from providing State and DHS career staff with clear guidelines and metrics for implementing social media screening policies, these policies are ambiguous and unbounded. Moreover, although we strongly oppose antisemitism in all forms, the Administration’s heavy focus on antisemitism on college campuses may create implicit pressure on career employees to reject any student visa applicant who has posted any pro-Palestinian content on social media. In so doing, the directives seem designed to punish speech that the Administration dislikes and create fertile ground for abuse and discrimination.

We urge you to immediately reverse these policies. To the extent that State and DHS intend to continue conducting social media screening and vetting, we urge you to establish concrete and definite guidelines for the use of social media indicia in visa and immigration decisions. To help us better understand the Administration’s plans for the implementation of these new policies, we request written responses to the following questions by July 9, 2025. 

1. Please provide any studies, analyses, audits, or other examination of the social media collection, screening, and vetting programs at State or DHS conducted between December 15, 2015, and the date of this letter. This should include:

a. Any studies, analyses, audits, or other examination of the social media screening and vetting programs at State or DHS conducted in connection with the review undertaken during the Biden administration pursuant to Section 3(d) of Proclamation No. 10141.

b. Any legal analysis of social media screening efforts proposed in connection with President Trump’s “extreme vetting” program. 

2. Is the State Department or DHS using artificial intelligence (AI) or any other automated system to collect, process, analyze, or otherwise review information collected from social media accounts of visa applicants and applicants for an immigration benefit?

a. If so, please describe those systems and describe any processes and rules to ensure those systems are free of bias and discrimination.

b. Will AI or an automated system ever be the sole decision-maker in a visa application or application for an immigration benefit?

3. How many visa applicants or individuals seeking an immigration benefit have had their application denied solely or primarily due to the social media screening and vetting process, including those denied for failing to provide a social media identifier? Please provide the information from December 15, 2015 through the date of this letter and identify by type of applicant and year.

4. Please provide any State Department and DHS memos, guidance documents, or other written policies intended to guide career staff in interpreting social media indicia for a visa applicant or applicant for an immigration benefit.

5. Has the State Department, DHS, or any other agency or component conducted any legal analysis or First Amendment review of the March 25 State Department memo or the April 9 DHS announcement? If so, please provide that analysis.

6. What safeguards, if any, are in place to ensure that personal bias, political viewpoints, or cultural misunderstandings do not influence visa adjudications or immigration benefit decisions based on social media content?

7. Did the State Department’s Office of Civil Rights or DHS’s Office for Civil Rights and Civil Liberties or Privacy Office review the respective policies before their implementation?

a. If so, did either office raise concerns about the respective policy changes?

b. If so, please share any related documents, emails, or memos.

Thank you for your attention to this serious issue.

Sincerely, 

The full text of the letter to GAO is available here and below:

Mr. Dodaro:

We write to request an investigation of federal law enforcement agencies’ use of automated technologies to label individuals as potential threats on the basis of their facial expressions, their body movements, or the content of their speech. We are concerned that such technologies are ineffective as a means of investigating criminal activity, threaten due process and freedom of expression, and pose particular risk to marginalized and vulnerable communities.

The Government Accountability Office (GAO) has been effective in investigating new developments in the federal government’s use of surveillance technologies. In 2021, the GAO revealed that the government used facial recognition technology to identify individuals protesting the murder of George Floyd. In 2024, the GAO found serious privacy, civil rights, and effectiveness concerns with police use of biometric identification technologies.

Also in 2024, the GAO recommended that the Department of Homeland Security (DHS) assess and mitigate bias risks prior to its component agencies’ use of detection, observation, and monitoring technologies. We have learned that the Department of Justice (DOJ) and DHS are utilizing technologies that make dubious automated inferences about individuals’ emotions, attitudes, and intentions. The developers of these technologies claim they can make such determinations on the basis of physical measurements, such as facial expressions, eye movements, or gait, or from content that individuals create and share online, such as text or images. These technologies are based on controversial applications of methods from the artificial intelligence (AI) fields known as affective computing, emotion recognition, sentiment analysis, and deception detection.

For example, in March of this year, Axios reported on the “Catch and Revoke” initiative, in which DHS and DOJ are helping the Department of State use AI to scan the social media accounts of tens of thousands of student visa holders and flag some as supposedly supporting terrorist organizations. However, as the Foundation for Individual Rights and Expression commented, AI “cannot be relied on to parse the nuances of expression about complex and contested matters.” We fear that AI’s inability to make such determinations accurately might actually be the reason the administration is using it in this case. Invoking AI in this way lends a facade of objectivity to what is in fact a sweeping attempt to punish the expression of views the administration dislikes. In fact, the administration has not presented evidence that the students it has targeted for deportation actually advocated terrorism.

Numerous empirical studies have cast doubt on the reliability of AI methods for inferring individuals’ intentions, emotions, or other mental states from external signals such as their facial movements or decontextualized online speech. For example, a review of psychological research on facial expressions concluded that “emotion categories are NOT expressed with facial movements that are sufficiently reliable and specific across contexts, individuals, and cultures to be considered diagnostic displays of any emotional state.” Similarly, psychologists have observed “ubiquitous problems in current research into AI-based deception detection” including “the underlying assumption that it is possible to identify a unique cue or combination of cues that is indicative of deception.” Some companies market technologies that they claim detect deception with high accuracy, based on physiological activity such as eye movements or changes in voice pitch, claims which either lack independent and replicable evidence or have been contradicted by multiple empirical studies. At the same time, researchers have found evidence suggesting racial, gender, and other demographic biases in the kinds of AI models used for affective computing, lie detection, and sentiment analysis.

It is particularly dangerous to use AI for inferring mental states in law enforcement contexts, where false positives can subject individuals to baseless investigation and detention. Furthermore, since many criminal statutes require proof of intent or other state of mind, using AI in this way could lead prosecutors to bring more severe charges against individuals on the basis of pseudoscientific evidence. This technology is also ripe for deliberate abuse, providing a pretext for government officials to target groups they disfavor.

Unfortunately, this is not the first time the federal government has applied AI to classify individuals as potential security threats. In the last several years, agencies in both DHS and DOJ have contracted with private vendors to use automated technologies that collect and analyze social media posts to make predictions about individuals’ attitudes, character, and intentions. For example: 

  • From 2019 to 2024, U.S. Customs and Border Protection (CBP) used the ONYX AI software from the company Fivecast to analyze online information in order to obtain “insights on potential threats and risks,” according to the DHS AI use-case inventory. CBP has shared little information on how it was using ONYX to assess risks to the United States. However, as 404 Media reported in 2023, Fivecast’s marketing materials claim their technology can search social media for information about specific individuals, “groups,” or “events,” including posts related to user-specified ideological movements, and can label the “sentiment and emotion” of posted content.
  • The Washington Post reported in 2022 that the FBI signed a five-year contract to use Babel X, an AI-based platform for searching and analyzing social media content. This contract fulfills a request for proposals for systems to search social media based on user location or “demographic information” and conduct “sentiment analysis” on posts to “provide analysis on emotion and likely attitudes” of users and “predictive analytics … that point towards possible actions of a subject or group.” VICE reported that CBP also began using Babel X in 2019 to analyze online content in order to identify travelers for enhanced security screening, including U.S. citizens and permanent residents. CBP noted in internal documents that Babel X helps its agents identify “threats to CBP and national security” with capabilities that include filtering posts by hashtags, “events,” and “known terms used by bad actors,” as well as analyzing the “sentiment” of posts.
  • As part of its Extreme Vetting Initiative, U.S. Immigration and Customs Enforcement (ICE) solicited proposals in 2017 from companies for systems that would use AI to analyze social media posts of visa applicants to predict their likelihood of committing crimes or of being “positively contributing member[s] of society.” The following year ICE concluded that no existing product was up to the task.
  • Since 2020, ICE has paid the contractor Barbaricum to conduct automated searches of social media platforms for posts threatening ICE. In its original request for bids, ICE was vague about how it defined threatening posts, but it specified that the contractor should provide “monitoring and analysis of behavioral and social media sentiment” and regular reports on the “number of negative references to ICE found in social media during monitoring.”

In addition to these concerning uses of sentiment analysis for law enforcement purposes, federal agencies have also shown interest in affective computing and deception detection technologies that purportedly infer individuals’ mental states from measures of their facial expressions, body language, or physiological activity. In 2011, DHS field-tested its Future Attribute Screening Technology (FAST), which DHS claimed could screen travelers at security checkpoints by detecting “deception” and “intent to cause harm” based on “physiological and behavioral cues,” such as travelers’ eye movements, facial expressions, heart rates, and breathing patterns. In 2011-2012, researchers tested a DHS-funded AI lie-detector based on similar measurements, called the Automated Virtual Agent for Truth Assessment in Real Time (AVATAR), at a border checkpoint.

Private companies also market similar deception detection products to law enforcement agencies, and count federal agencies among their customers. Converus sells a system called EyeDetect, which it claims can spot liars with high accuracy by measuring eye movements and pupil dilation. The State Department’s Bureau of International Narcotics Control and Law Enforcement Affairs (INL) currently has a contract with Converus for EyeDetect equipment. In addition, the Defense Department’s Defense Counterintelligence and Security Agency is funding research at the US-Mexico border to test EyeDetect’s usefulness for law enforcement. The CEO of Converus claims to have also demonstrated EyeDetect to the FBI and DHS.

Similarly, a company called NITV Federal Services sells a Computer Voice Stress Analyzer (CVSA). NITV claims CVSA uses machine learning to detect deception from changes in voice pitch, and sells this product to law enforcement and government agencies, including the Department of the Interior.

To aid in our evaluation of the potential threats these technologies pose to the effectiveness of law enforcement investigations, as well as to due process, freedom of expression, and civil rights, we request that GAO produce a report that answers the following questions about how DOJ and DHS are using AI technologies to infer people’s emotions, attitudes, or intentions:

  • To what extent are the Department of Justice and the Department of Homeland Security using such AI technologies for law enforcement purposes? In particular:
    • Approximately how many people have been the subject of an automated analysis conducted by DOJ or DHS personnel using these technologies?
    • What kinds of law enforcement actions have been guided by DOJ and DHS personnel’s use of these technologies? 
  • How have DOJ and DHS acquired these AI technologies?
  • What tests of these technologies did DOJ and DHS conduct or review before using them for law enforcement purposes?
  • How have DOJ and DHS assessed the costs and benefits of using these technologies?
  • What DOJ and DHS policies govern the uses of these AI technologies and what guidance or training do they provide to personnel in order to prevent violations of due process, freedom of expression, equal protection, and other constitutional rights?
  • How do DOJ and DHS ensure the AI technologies are used in accordance with agency policies? 

Thank you for your attention to this matter.

Sincerely,