(The Center Square) – Virginia lawmakers are considering two artificial intelligence bills during the General Assembly session that would set new limits on how AI tools are used in schools and establish broader oversight standards for certain chatbot technologies.
The proposals, House Bill 1186 and House Bill 635, build on AI-related laws Virginia has passed in recent years. Those laws already cover areas such as AI-generated child sexual abuse material, the creation of nonconsensual AI-altered intimate images, limits on how automated systems can be used in parts of the criminal justice system, and consumer data protections enforced under the Virginia Consumer Data Protection Act.
House Bill 1186 would require every local school board in Virginia to adopt a policy restricting how artificial intelligence chatbots may be used for certain instructional purposes. Under the bill, students could not be required, encouraged, or permitted to use AI chatbots for schoolwork.
A fiscal impact statement prepared by the Department of Planning and Budget states that the bill is not expected to have a state fiscal impact. The analysis notes, however, that the measure places new requirements on local school boards and that any costs at the local level are unknown at this time.
House Bill 635 takes a broader approach. The bill would create the Artificial Intelligence Chatbots Act and set statewide rules for companies that operate AI chatbots in Virginia. Among other provisions, it would require chatbots to clearly disclose that they are not human, set standards for how they interact with users, and establish penalties for prohibited practices.
The legislation is being considered as federal officials continue to shape a national approach to artificial intelligence policy.
In a December executive order, President Donald Trump directed agencies to work toward a more uniform federal approach to artificial intelligence and warned that differing state regulations could create burdens for companies and users. The order also instructed federal agencies to review state AI laws to identify those that could conflict with federal priorities, including how they relate to programs such as broadband deployment.
Virginia lawmakers debated similar issues last year, when then-Gov. Glenn Youngkin vetoed a broader artificial intelligence bill that would have created a regulatory framework for high-risk AI systems enforced by the attorney general’s office.
In his veto message, Youngkin said the role of government in safeguarding AI practices “should be one that enables and empowers innovators to create and grow, not one that stifles progress and places onerous burdens on our Commonwealth’s many business owners.”