A new phase of the Trump administration’s efforts to downsize the civil service is under way, focusing on the use of generative AI to automate work tasks previously handled by federal employees. The General Services Administration (GSA) is testing a chatbot with 1,500 staff members and may expand its use to over 10,000 workers managing more than $100 billion in contracts and services, according to Wired.
GSA tests AI chatbot to automate federal tasks amid downsizing
The GSA leadership frames the bot as a productivity tool. Thomas Shedd, director of the Technology Transformation Services (TTS) at GSA, said at a recent all-hands meeting that the agency is pursuing an “AI-first strategy.” He stated, “as we decrease [the] overall size of the federal government… there’s still a ton of programs that need to exist, which is a huge opportunity for technology and automation to come in full force.” Shedd also raised the possibility of “coding agents” being introduced across government, referring to AI programs capable of writing and deploying code.
The chatbot initiative began during President Joe Biden’s term with a small GSA technology team called 10x, which initially designed it as an AI testing platform, not a productivity tool. However, allies of DOGE, the Department of Government Efficiency, have pushed for its expedited development and deployment amid mass federal layoffs. The program, previously known as “GSAi” and now called “GSA Chat,” is designed to assist with drafting emails, writing code, and other functions. It will ultimately be available to other government agencies, potentially under the name “AI.gov.”
Zach Whitman, GSA’s chief AI officer, described the chatbot as a way for federal employees to work “more effectively and efficiently.” The interface resembles that of ChatGPT: users submit prompts and receive responses. The system currently uses models licensed from Meta and Anthropic. Document uploads are not yet permitted, though the feature may be added in the future. GSA employees foresee using the chatbot for a range of tasks, from large-scale project planning to analyzing federal data repositories.
GSA spokesperson Will Powell confirmed the agency is reviewing its IT resources and running tests to verify the effectiveness and reliability of available tools. Experts suggest, however, that the chatbot matters less for how it will change government operations than for the broader trend it reflects. Reports have surfaced that DOGE advisers have used AI to scrutinize agency spending for cuts and to help decide which federal employees keep their jobs.
At a recent TTS meeting, Shedd said he expected the division to shrink by “at least 50 percent” in the near future. Potential uses of AI also extend to more sensitive applications, such as the State Department’s reported effort to screen social media posts of student-visa holders for links to designated terror groups.
Implementing generative AI carries well-documented risks, including bias, factual inaccuracies, high costs, and opaque inner workings. GSA acknowledged these issues when it began work on the chatbot last summer, establishing the “10x AI Sandbox” to explore AI applications securely and cost-effectively. The push to release the chatbot quickly, however, risks sidelining concerns about how it is applied. A former GSA employee noted the risks of relying on AI for fraud analysis, pointing to the potential for false positives. A help page for early users warns about issues such as “hallucination” and biased responses, but it does not specify how those warnings will be enforced.
Federal agencies have been testing generative AI for months; GSA had previously contracted Google to assess AI’s potential for improving productivity and collaboration. Other agencies, including the Departments of Homeland Security, Health and Human Services, and Veterans Affairs, were also exploring AI tools before the inauguration. Even so, the chatbot’s deployment in its current form may not have been inevitable.
The Biden administration had pursued a cautious approach to AI, emphasizing rigorous testing and transparency through an executive order. The repeal of that order on Trump’s first day in office marked a shift in regulatory posture, potentially clearing the way for federal agencies to pursue more speculative applications of AI.
Featured image credit: Katie Moum/Unsplash