The Potential Role of AI in Legislative Research and Drafting

George Mason University, FUSE building, Mason Square, Arlington, VA. Photo by Ron Aira/Creative Services/George Mason University.

Writing legislation is no easy task. Policymakers must translate big ideas, such as addressing climate change or regulating emerging technology, into detailed, legally precise language. As policymakers strive to work smarter and faster, the use of AI to support policymaking has begun to grow. To what extent can artificial intelligence (AI) help?

A recent white paper from the Greg and Camille Baroni Center for Government Contracting, co-authored by Richard Beutel and Art Nicewick, explores whether two large language models (LLMs), ChatGPT and Grok, could assist with legislative research and drafting. The experiment focused on the FoRGED Act, a major reform bill introduced by Senator Roger Wicker to overhaul the Department of Defense’s acquisition system.

The researchers prompted the two AI models to analyze the FoRGED Act’s complex legal provisions and suggest improvements, asking them to examine past defense reforms, predict legal challenges, synthesize research, and draft notes.

The results were promising. The AI models produced detailed notes, historical comparisons, and plain-language summaries. In one example, AI explored how the FoRGED Act’s acquisition reforms might affect deterrence against China, drawing on unclassified threat assessments to frame the analysis. 

However, there were also limitations. An expert panel, including members of the Senate Armed Services Committee, reviewed the AI’s performance. They found that the models often provided helpful summaries and comparisons but lacked the persuasive finesse that human drafters bring. The AI was also less adept at understanding the nuanced stakeholder dynamics of politics. And while the models processed massive amounts of information quickly, they could not reliably flag what they did not know. Human oversight therefore remains critical.

The panel also raised important ethical questions. AI systems rely on existing data, which may embed biases or outdated assumptions. The panel further warned against over-reliance on AI, which could lead to “path dependency”: because AI excels at refining what already exists, staff may favor incremental tweaks over bold reforms, limiting policy innovation over time.

Still, the findings showed how AI could reduce the time, cognitive load, and cost for legislative staff. AI helped identify gaps, anticipate implementation challenges, and process vast datasets—work that would otherwise require many hours of staff time. Rather than replacing human judgment, the goal is to give policymakers better tools.

When using AI models for legislative purposes, the panel recommends prioritizing questions the AI is likely to have sufficient information to answer well. This targeted approach reduces the risk of misleading or overconfident responses. Transparent caveats are also essential: policymakers using AI should clearly acknowledge the limits of what the models know.

As defense and technology policies continue to evolve, this pilot marks an early but important step. AI won’t write the laws of the future alone, but it might help policymakers write them better.
