Governments, including Washington state, making policy for use of generative AI



(The Center Square) – Increasingly, government agencies are developing policies to guide employees in the use of generative artificial intelligence while considering the legal and ethical issues and potential cyber threats surrounding “AI.”

Last week, Washington state’s Office of the Chief Information Officer adopted interim guidelines for “purposeful and responsible use of generative artificial intelligence in Washington State Government.”

The office, which sets information technology policy and direction for the state and is a member of the governor’s executive cabinet, adopted the AI guidelines on Aug. 8.

The information technology office noted the rapid advancement of AI “has the potential to transform government business processes, changing how state employees perform their work and ultimately improving government efficiency.”

But AI also poses “new and challenging considerations,” and the policy advises state agencies and employees to “foster public trust, support business outcomes, and ensure ethical, transparent, accountable, and responsible implementation of this technology.”

When prompted by a user, generative AI can create content (text, images, audio and video) that normally requires human intelligence. Systems such as ChatGPT, Google AI, Microsoft Azure and IBM Watson scan massive amounts of online data to learn patterns and relationships, then generate new content that may be similar, but not identical, to the underlying data. The technology is already being used in search engines and other online tools.

AI has caught the attention of a wide audience, from teachers questioning the originality of student assignments and worrying about plagiarism to members of Congress funding its development while weighing future regulation.

Last week, U.S. Sen. Patty Murray, D-Washington, visited the University of Washington's Paul G. Allen School of Computer Science & Engineering to discuss AI development with researchers.

Murray, who helped to secure $20 million in federal funding to establish the National Science Foundation AI Institute for Dynamic Systems at UW, issued a press release saying, “Artificial intelligence brings with it immense opportunities, but also serious challenges and threats.”

“I secured this funding because I know how essential it is that this technology be developed responsibly and ethically. Leading on AI will be important if we want Washington state to remain at the forefront of innovation, research, and scientific achievement.”

Washington state’s newly established policy says it will follow principles established in the federal government’s National Institute of Standards and Technology AI Risk Management Framework.

“All content generated by AI should be reviewed and fact-checked, especially if used in public communication or decision-making,” the state’s policy says, adding, “… be mindful of the potential biases and inaccuracies that may be present.”

Additionally, any AI-generated content used in an official capacity “should be clearly labeled as such.” That includes details of its review and editing process, to provide “transparent authorship and responsible content evaluation.”

This spring, the City of Seattle was among government entities across the country that began implementing interim AI policies.

“The field is emergent and rapidly evolving, and the potential policy impacts and risks to the City are not fully understood,” Jim Loter, Seattle’s interim chief technology officer, wrote in an April 18 memo.

“Use of generative AI systems within the City of Seattle, therefore, can have unanticipated and unmitigated impacts,” said Loter.

He referenced such issues as the acquisition or use of intellectual property subject to copyright or trademark, guarding against disclosure of confidential or personally identifiable data about members of the public, awareness that AI may produce materials subject to public disclosure, and the need for attribution and accountability.

Seattle’s interim policy extends to Oct. 31 while city officials continue to examine implications for government use of generative AI.

AI can draft communications, conduct research, summarize content, and generate software code for employees, among other applications. But there are legal questions about accuracy, the production of discriminatory or offensive content, potential disclosure of collective bargaining or contractual matters, and vulnerability to data breaches.

In June, the State of Maine placed a six-month moratorium on the use of generative AI by state agencies, saying the rapidly evolving cyber threat landscape poses “significant risks to … sensitive and confidential data that we are entrusted to protect for our citizens.”
