Maryland lawmakers set their sights on tackling artificial intelligence, including for government use – Baltimore Sun

As lawmakers across the country begin to grapple with the transformative potential of artificial intelligence, Maryland officials said Monday that the state must either start focusing on the emerging technology or risk being left behind.

“We cannot stay stuck in a system that is 10 years old,” Gov. Wes Moore said before signing an executive order setting some guidelines for the state as it implements artificial intelligence.

The order, which broadly describes “a practical, principled and adaptable path forward so that the benefits of technology can be harnessed with confidence,” was among several steps that Moore collectively called the state’s “software modernization.”

The Democratic governor’s plans to begin coordinating government agencies’ efforts on artificial intelligence capabilities are among the few policy priorities he has broadly outlined in his second year in office, and they follow Democratic President Joe Biden’s steps in recent months to guide development of the technology amid concerns about its impacts.

They also point to a potentially larger new effort among Maryland lawmakers on tackling artificial intelligence during the annual 90-day legislative session that begins Wednesday in Annapolis.

“I am convinced that, like electricity or other powerful forms of energy, there are tremendous benefits and tremendous risks,” said Sen. Katie Fry Hester, a Democrat from Howard County, who is preparing five bills addressing artificial intelligence this session.

Among the bills she will introduce are ideas to boost productive uses of artificial intelligence in the education system — think tutoring for students — and to proactively address the technology’s risks, such as deepfake technology being used in revenge porn or generative AI being used in election propaganda, she said.

One of the bills also aims to complement plans announced Monday by Moore and his administration to coordinate and track the use of artificial intelligence in government.

Maryland IT Secretary Katie Savage outlined the administration’s four-pronged approach in what she described as “the starting line of (state government’s) AI journey.”

The first element of that plan, Moore’s executive order, outlines a set of “principles and values” and a commitment to studying how technology will impact areas such as cybersecurity and workforce development, as well as potential ways to pilot technology in government.

Two other steps aim to improve Marylanders’ access to government services. An intergovernmental group called the Maryland Digital Service will work with state agencies to “create consistent, intuitive digital experiences” that are user-centered and accessible to everyone. A new policy on digital accessibility will require working with the Department of Disabilities to guide those decisions and ensure people are able to benefit from services “regardless of their abilities,” Savage said.

The effort will identify an “accessibility liaison” at each state agency to ensure that services, for example, are provided in multiple languages and accessible to the visually impaired, Savage said.

The final immediate step will be the creation of a Maryland Cybersecurity Task Force, which will partner with the technology and emergency management departments to strengthen the state’s cybersecurity capabilities.

“The words artificial intelligence and internet can make some people afraid,” Moore said. “But here’s the need: This technology is already here. The only question is whether we will be reactive or proactive in this moment.”

Nishant Shah, the governor’s senior advisor for responsible artificial intelligence, joined the administration in a first-of-its-kind position in August after working on AI products at Meta, Facebook’s parent company.

He said in an interview that it was important for the state government to build a framework of “accountability mechanisms” around AI and to identify “low-risk, high-value” areas where it can implement the technology and learn from it — “building our AI muscle,” he said.

“It’s a technology that’s moving very, very quickly. It’s hard to put into words how fast it’s moving. So, as a country, we need to understand how we’re going to deal with this,” Shah said. “There are a lot of possibilities, but it’s a double-edged sword like most new platform technologies.”

Part of the process, he said, will be to create an “AI inventory to be very clear on what is actually in use,” and then make that public so that there is proper oversight of the systems that are actually using AI.

Hester, who has frequently worked on cybersecurity and technology issues in the Legislature, said she fully supports the administration’s moves, but one of her bills would codify the plan to build the AI inventory into law. It would also require officials to study the impacts of specific uses of AI in key areas of government, such as education or the judicial system, she said.

While Savage, the information technology secretary, said the administration’s new efforts can be done within its current budget, Hester’s bill would also ensure there are at least a few dedicated employees and resources available to them. The legislation would also create a group of experts from outside government to advise the administration.

“It’s important that we have a broader group of people who provide some guidance to agencies that want this. We don’t want to talk to ourselves,” Hester said.

Her separate bill addressing generative AI in revenge porn would change the legal definition of revenge porn to prohibit the use of deepfake technology to place a person in an image of a sexual nature without their consent, and would give the targeted person the right to file a lawsuit.

Another bill would give the state Board of Elections more power to require campaign ads or other literature to specifically disclose their use of deepfakes or other artificial intelligence technologies.

“We live in an age where digital misinformation can spread quickly, and it can be really difficult to know what is true and what is not,” Hester said, noting that some of the biggest risks surrounding AI in 2024 will be election-related misinformation and threats to election security.
