
Tech official says AI in executive branch is ‘walled-off enterprise’
BOSTON, May 7, 2026…..Sen. Barry Finegold thinks the technological revolution is different this time around.
“Twenty-plus years ago, there was this cute little company started in Harvard called Facebook. We all thought it was nice, how these kids played Facebook with their friends at Harvard and other colleges, and we didn’t understand how big this would be,” Finegold said, reflecting on the introduction of the social media platform.
“I think this is different. I think we do understand how big AI is, how big it’s going to be,” the co-chair of the Committee on Economic Development and Emerging Technologies said at the Massachusetts Continuing Legal Education Conference Center in downtown Boston at a State House News Service/MASSterList event titled, “The AI Revolution in Massachusetts: Disruption, Risk, Opportunity.”
While he thinks lawmakers are “a little ahead of the curve” two decades later, he acknowledged: “No one really knows how far this is going to go.”
Massachusetts lawmakers have not passed comprehensive legislation regulating artificial intelligence. The House last year passed a bill regulating AI in campaign advertisements, and the Senate passed a data privacy bill. Both branches have passed school cellphone-related legislation, and the House bill attempts to regulate the use of social media by young people.
A Finegold bill (S 2630) would establish guardrails for developing and deploying AI models, which are already widespread. The bill received a favorable vote from the Committee on Advanced Information Technology, the Internet and Cybersecurity and has been in Senate Ways and Means since October.
“We’ve had conversations with California, New York. We’ve been talking to the legislators there. What we’ve been trying to do is have bills that are very similar, so it’s really what we hope would be a framework for the federal government to adopt,” Finegold said.
Executive Branch AI Deployment
The Healey administration in February launched a ChatGPT-powered AI assistant under a contract with OpenAI.

“In terms of the executive branch, I would say that we are keeping pace with AI, at least at this moment,” said Thomas Myers, general counsel and chief privacy officer at the Executive Office of Technology Services and Security. He called Massachusetts “one of the first states” to come out with a generative AI policy establishing guardrails to monitor its use and development.
At the time of the rollout, the National Association of Government Employees, which represents about 15,000 state employees, said the administration was “rushing” the introduction of AI, adding that some of its workers were concerned AI could take away job duties.
A 2026 Tufts University Fletcher School of Law and Diplomacy study projected that Massachusetts is among the states most vulnerable to AI, based on the proportion of jobs here at risk of displacement.
“We’re not exactly sure what’s going to be added. I think we’re very concerned about what’s going to be subtracted. But if you look at the history of technology, most times, we end up adding jobs, not losing jobs,” Finegold said of workforce displacement concerns.
“One challenge we’re having is how quickly AI is moving,” Finegold added. “I do think we can retrain people. We’re just going to need the time and resources.”
Endicott College associate professor Allan Glass called the findings “sobering,” and said he didn’t think they should be “softened.”
“The reality of the concentration of financial services, legal supports, health care administration in Massachusetts is going to lead to massive disruption,” Glass said. “Equally so, though, is the trade-off. We should be asking ourselves the question, how do we benefit from this technology to make businesses or organizations within the Massachusetts border more productive overall?”
Trust, Risk, and Ethical Concerns
Whether people trust the technology being introduced is a big part of the conversation, panelists agreed.

“If people do not trust the technology, if people do not trust how it’s developed, if people don’t trust who’s developing it, then you’re going to have the problem of deployment and adoption. And that becomes very critical when we think about governments that are deploying this,” said Reverend Chris Hope, a minister and technologist at The Hope Group.
Rep. Francisco Paulino said he’s worried not that AI will create mass displacement of employees, but about the “way AI can standardize discrimination,” and Hope suggested that regulators need to figure out how to regulate potential harm categories.
“When we talk about discrimination, when we talk about manipulation, when we talk about displacement — that is, to me, the priority and focus, because it’s really more of a moral question as well,” Hope said.
There’s tension in how regulators balance protecting users with advancing technology, said Kevin Bolen, head of AI transformation, strategy and investments at KPMG.
“The reality is, we’re not going to bend through this inflection curve right now between the technology and the adoption if we don’t close that trust gap,” Bolen added. “I think anytime you have kind of the explosion of technology that we’ve seen, it’s up to the regulators to not only look at the impacts to today, but to forecast where it’s going to go.”
AI’s Impact on the Workforce and the Economic Risk
The slower legislative pace is “incongruous” with the speed of technological change, Bolen said. “So you can’t legislate for today. You have to legislate for where it’s going to be three-to-five years from now. And that’s really been hard to predict with the pace of AI’s transformation.”
Myers said the state isn’t replacing workers with AI, and that its intention is to “support” existing workers with tools and training that make them more efficient.
“Specifically for the ChatGPT enterprise, we have our own walled-off enterprise that is not connected to any other model or any other enterprise, it is not training on any models,” Myers explained.
In addition to an acceptable use policy addressing parameters for usage, there’s also a data protection agreement in place, he said. Prior to implementing the contract, the state also did a risk review and privacy impact assessment.