Artificial Intelligence

Maryland lawmakers set sights on addressing artificial intelligence, including government use – Baltimore Sun


As lawmakers across the country start to grapple with the transformative potential of artificial intelligence, Maryland officials said Monday the state must either begin focusing on the emerging technology or risk falling behind.

“We cannot afford to be stuck with a system that is 10 years out of date,” Gov. Wes Moore said before signing an executive order that sets some guiding principles for the state as it implements AI.

The order, which broadly describes a “principled yet adaptable, pragmatic path forward so that the technology’s benefits can be confidently harnessed,” was among a few steps that Moore metaphorically called a “software update” for the state.

The Democratic governor’s plans to begin coordinating state agency efforts on AI capabilities are among the few policy priorities he’s broadly outlined for his second year in office, and they follow Democratic President Joe Biden’s steps in recent months to guide the development of the technology amid concerns about its effects.

They also point to a new, potentially larger effort among Maryland lawmakers around addressing AI during the annual 90-day legislative session that begins Wednesday in Annapolis.

“I’m convinced that, like electricity or other powerful forms of energy, there’s huge benefits and huge risks,” said Sen. Katie Fry Hester, a Howard County Democrat who’s preparing five bills dealing with AI for the session.

Among her bills, she said, will be ideas to foster productive uses for AI in the education system — think private tutors for students — and proactively address the technology’s pitfalls, like when deep-fake technology is used in revenge porn or when generative AI is used in election campaigning.

One of her bills will also be aimed at supplementing the plans Moore and his cabinet announced Monday about coordinating and tracking the use of AI in government.

Maryland Department of Information Technology Secretary Katie Savage outlined the administration’s four-pronged approach to what she described as “the starting line of [the state government’s] AI journey.”

The first element of that plan, Moore’s executive order, establishes a set of “principles and values” and a commitment to studying how the technology will affect areas like cybersecurity and workforce development, as well as potential ways to pilot the technology in government.

Two other steps are aimed at access for Marylanders using government services. The creation of an inter-governmental group called Maryland Digital Service will work with state agencies to “create consistent and intuitive digital experiences” that are focused on users and accessible to everyone. And a new policy on digital accessibility will require work with the Department of Disabilities to guide those decisions to make sure people are able to utilize services “regardless of their abilities,” Savage said.

Savage said those efforts will identify an “accessibility liaison” at each state agency to make sure services, for example, are provided in multiple languages and are accessible for the visually impaired.

A final immediate step will be the creation of a Maryland Cybersecurity Task Force, which will partner with technology and emergency management departments to enhance the state’s cybersecurity capabilities.

“The words AI and cyber can make some people scared,” Moore said. “Here’s the thing. This technology is already here. The only question is whether we are going to be reactive or proactive in this moment.”

Nishant Shah, the governor’s senior adviser for responsible artificial intelligence, joined the administration in a first-of-its-kind position in August after working on AI products at Meta, the parent company of Facebook.

He said in an interview that it’s important for state government to be building the framework for “accountable mechanisms” around AI and figuring out where the “low risk, high value” areas are to implement and learn from it — like “building our AI muscle,” he said.

“This is a technology that’s moving really, really fast. It’s tough to express how fast it’s moving. So as a state, we need to understand how we’re going to be approaching it,” Shah said. “There’s a lot of possibility, but it’s a double-edged sword as most any new platform technology is.”

Part of the process, he said, will be creating an “AI inventory to be very clear on what is actually in use,” and then making that public so there’s proper oversight of the systems that already use AI.

Hester, who has worked frequently on cybersecurity and technology issues in the legislature, said she fully supports the administration’s moves, but one of her bills would cement the plan to build the AI inventory into law. It would also require officials to study the impacts of specific AI uses in major areas of government, like education or the judicial system, she said.

While Savage, the information technology secretary, said the administration’s new efforts could be carried out within its existing budget, Hester’s bill would also ensure there are at least a few dedicated staff members and resources available to them. The legislation would also establish a group of experts from outside the government to advise the administration.

“It’s important that we have a broader set of people offering some guidance for agencies who want it. We don’t want to be talking to ourselves,” Hester said.

Her separate bill addressing generative AI in revenge porn would alter the legal definition of revenge porn to effectively prohibit deep-fake technology from being used to place a person in an image of a sexual nature without their consent, and it would give the person targeted a right to sue.

Another bill would give the State Board of Elections more authority to require campaign advertisements or other literature to specifically disclose their use of deep-fake or other AI technology.

“We’re living in an era where digital misinformation can spread rapidly and it can be really hard to tell sometimes what’s true and what’s not true,” Hester said, noting that some of the biggest risks around AI in 2024 will be election misinformation and security.
