How AI is poised to impact government – and vice versa
The future is now – and the public and private sector are both working together and at cross-purposes to meet the moment
Since OpenAI launched its chatbot in November 2022, artificial intelligence has seemed to shift from a far-fetched sci-fi premise – think HAL 9000 in “2001: A Space Odyssey” or the childlike android in “A.I. Artificial Intelligence” – to an inescapable, everyday reality. ChatGPT and other AI-powered tools have sprouted up overnight, spreading to search engines, social media platforms, virtual assistants and beyond.
The advent of the AI age holds plenty of promise, even as it threatens to disrupt entire professions, undermine and exploit the creative work of artists and writers, and unleash a flood of deceptive audio, videos and other imagery. Government officials are scrambling to draft legislation and regulations to rein in the risks, balanced against AI’s transformative potential to make work more efficient and improve our way of life.
Of course, some AI tools have been in use for years – and not only by tech startups, but just as often in the public sector. State and local governments have already been employing AI to complete a range of tasks more efficiently and effectively: sifting through vast amounts of data, automating bureaucratic processes, and identifying and responding more quickly to public safety threats.
Now, as the underlying technology continues to improve and its applications multiply, the question isn’t just how exactly governments will regulate artificial intelligence – it’s also how exactly governments will deploy it.
Governmental bodies use AI to help prevent cyberattacks and detect public health threats. AI is deployed by law enforcement to identify potential offenders and by local and state agencies as a screening and predictive tool – although there’s sharp debate about whether such efforts do more harm than good. And even as they work out their kinks, chatbots are being used to serve constituents, companies and other stakeholders that interact directly with government offices.
The domain where government use of artificial intelligence has perhaps generated the most alarm is law enforcement, amid fears of ever-expanding surveillance capabilities. The maker of the New York City Police Department’s “Digidog” robots touts its AI capabilities, and the Metropolitan Transportation Authority recently began using AI-powered surveillance to track fare evasion in the city’s subways. Florida recently contracted with a California tech company to transcribe inmates’ phone conversations in the state’s prison system.
Police use of facial recognition has also prompted an outcry in many jurisdictions. In Pittsburgh, police used facial recognition technology during the 2020 Black Lives Matter protests without the knowledge or permission of police command staff, PublicSource reported in 2021. The episode proved controversial on several fronts. In 2020, the Pittsburgh City Council voted to regulate city entities’ use of facial recognition and predictive policing technologies, mandating that council members sign off on any use of facial recognition. Critics charged that the software violated people’s privacy because its database was built from images collected without subjects’ consent – and that it could surface photos of people that they had never posted online themselves.
Today, facial recognition technology remains restricted in Pittsburgh. In July 2022, Carnegie Mellon University floated a draft video surveillance policy that would have allowed the university to use facial recognition during investigations, but it spurred a backlash from students and alumni alike, who argued it would invade students’ privacy and normalize surveillance. The university ultimately nixed the policy in response to community concerns and said that the Carnegie Mellon University Police Department had never used facial recognition technology.
“Based on feedback received from the community to the first draft of a video security policy that would have allowed for the potential use of facial recognition tools in criminal investigations, we have decided not to move forward with further consideration of this policy document,” the school said in a statement at the time.
One application that remains accessible to law enforcement throughout Pennsylvania is JNET, a database with facial recognition capabilities. In Allegheny County, the Office of the District Attorney announced in August 2022 that it would use NICE Justice, an AI-powered tool that facilitates the management, sharing and use of digital evidence, with photos, videos, PDFs and documents uploaded to a single portal.
According to the district attorney’s office, its 35,000 annual cases are beyond the capacity of the 127 attorneys it has on staff, and the software can improve efficiency and reduce delays. “We have a sizable number of cases that were postponed due to delays in processing digital evidence,” Rebecca D. Spangler, first assistant district attorney for the Allegheny County District Attorney’s Office, said in a statement. “With NICE Justice, we’ll be able to streamline the entire process of managing digital evidence, from intake to discovery. When we’re able to eliminate postponements by making the system more efficient as a whole, that’s good for everyone.”
NICE General Manager for Public Safety John Rennie said one consequence of the coronavirus pandemic was heightened demand for expediency. “COVID helped a lot of people realize that doing things slowly in an office was also not always the best way of doing things,” Rennie said. “The more technology-oriented DAs and the more progressive DAs in particular very much recognize that their staff needs these tools to be able to self-serve instead of having to send off to other people or do in different programs.”
Meanwhile, elected officials have been advancing measures to regulate artificial intelligence. New York City’s AI bias law, which requires employers using AI tools for hiring to perform annual audits of those tools, went into effect this summer, and the Adams administration’s new AI action plan envisions the creation of a “framework for AI governance” and the formation of an advisory group.
In Pennsylvania, Gov. Josh Shapiro issued an executive order in September to establish standards and a governance framework for generative artificial intelligence use by state agencies and employees. The order creates an AI governing board and lays out core values for generative AI use, among them accuracy, employee empowerment and equity.
“We can’t ignore new technology – we have to educate ourselves and be proactive to minimize the risks and maximize the benefits of innovation and that’s the approach my administration is taking here in Pennsylvania,” Shapiro said.
In July, U.S. Sen. Bob Casey of Pennsylvania introduced a bill that would restrict employers’ use of AI in hiring. Pennsylvania state Reps. Robert Merski and Chris Pielli have introduced legislation to make it a misdemeanor to use AI to impersonate a loved one without consent. In May, U.S. Rep. Joe Morelle of New York introduced legislation banning the nonconsensual sharing of intimate deepfake images, something domestic violence advocates have voiced concerns about.
State Sen. Jay Costa of Pennsylvania, who has been a vocal advocate of AI and the technology’s integration into society, has sponsored legislation to spur more study of the technology. He has teamed up with state Sen. Jimmy Dillon and other lawmakers on legislation establishing an advisory committee to conduct a study on AI, including how to distinguish between AI- and human-generated content.
“AI is not just about the technology; it’s going to impact aspects of our daily lives that we can’t even imagine right now,” Dillon said. “We need to really concentrate on and address the ethics and the regulations and the workforce concerns and take a comprehensive approach toward it.”
Yet advocates for responsible AI use said transparency is often easier said than done. In the Pittsburgh area, the Allegheny County Department of Human Services has used the Allegheny Family Screening Tool since 2016 to help social workers predict when children may face harm, ideally as a way to check the individual biases of staffers. But the practice has drawn scrutiny from the U.S. Department of Justice following complaints that the technology could unfairly target parents with disabilities or mental health disorders. The Associated Press reported that the tool draws on Supplemental Security Income data as well as records of diagnoses such as schizophrenia or mood disorders. Disability rights advocates argued that this unfairly targets parents with disabilities and mental illness and punishes them for accessing county resources.
The parents at the center of a case that resulted in a Justice Department investigation weren’t allowed to see their risk score after their daughter was removed from their care. An Allegheny County spokesperson said via email that the county doesn’t consider the tool to be AI – but not everyone sees it that way.
Experts said access to information about what data goes into training an AI model is crucial. But Julia Stoyanovich, director of the Center for Responsible AI at New York University, said regulating AI can be tricky because entities – including local and state governments – can be cagey about disclosing when they’re using it.
“We do need to think about how we can create an environment where people can have a positive conversation where it’s not all just sticks, but also carrots,” Stoyanovich said, where “government representatives are able to come to the table and actually use the expertise of external folks to try and help them do things better rather than being worried about negative publicity.”
U.S. Senate Majority Leader Chuck Schumer has made headlines for exploring increased oversight of artificial intelligence, recently meeting with tech leaders to discuss how to regulate it. But Stoyanovich lamented that the first meetings were with industry leaders rather than a wider range of constituents and stakeholder groups.
“Essentially, we’re going to be relying on industry to say that they’re not going to be evil,” Stoyanovich said. “I think that’s really a mistake … because industry incentives do not align very often with societal incentives. And we do absolutely need legal and regulatory instruments here that are going to protect citizens in terms of both what data about them is being used, how it’s collected, and also the decisions that we’re subjected to.”
Atiya Irvin-Mitchell is the lead reporter in Pittsburgh for Technical.ly, a news organization for technologists and entrepreneurs, by way of Report for America.