
Includes formation of MassCompute to create and implement regulations related to AI and its usage
Senators voted on a bipartisan basis to nudge forward legislation that seeks to install some of the first legal guardrails for artificial intelligence technology in Massachusetts, including by creating a state board to develop and deploy AI regulations and by narrowing how companies can use electronic monitoring tools to track workers.
Sen. Michael Moore, Senate chairman of the Advanced Information Technology, the Internet and Cybersecurity Committee, said there are “very few legal limitations for what individuals, corporations, and developers can do with artificial intelligence” in Massachusetts, and even fewer legal protections for people whose data is used to train AI algorithms.
“This regulatory void leads to a lot of uncertainty for everyone, whether you use AI or not,” the Millbury Democrat said. “The five bills we are advancing out of Committee … represent a step toward reasonable, common-sense regulations that will protect Bay Staters from the negative effects of AI while maintaining Massachusetts’ status as one of the most innovative economies in the world. Striking this balance is critically important.”
Two of the bills that advanced last week deal broadly with AI and similar technologies. S 2630 would create an entity called MassCompute modeled after a similar program in California. That organization would be made up of public and private sector experts and would partner with the attorney general’s office to “create and implement regulations related to AI and its usage,” Moore’s office said. The bill also creates an Artificial Intelligence Innovation Trust Fund to support MassTech’s AI Hub.
The committee also advanced S 35, dealing with electronic monitoring tools in the workplace — devices or systems that collect data related to worker activity or communication. The bill creates guidelines for how such tools can be used, limiting their use to ensuring the quality of goods or services, assessing worker performance, ensuring compliance with labor laws, protecting worker health and safety, and administering wages. Companies could not primarily rely on data from those systems when making hiring, promotion or disciplinary decisions.
The committee also promoted S 2631, which would prohibit a person or political committee from “maliciously” distributing deceptive election-related information — generated by AI or otherwise — with the intent to mislead voters within 90 days of an election. The committee’s S 2632 would impose limits on the use of AI in health care decision-making and would clarify that therapy or psychotherapy services can only be conducted by a licensed professional (while allowing AI to be used in some cases as “supplementary support”). And S 2633 would expand existing laws against the creation and distribution of child sexual abuse material to apply to content created in whole or in part through AI.