The Federal Communications Commission will pursue an inquiry into the impact of artificial intelligence on robocalls and robotexts, in a move that agency Chair Jessica Rosenworcel said would ideally result in getting “this junk off the line.”
The FCC’s unanimous 5-0 vote Wednesday approved a Notice of Inquiry seeking comment on how AI can, for better or worse, affect the prevalence of illegal and unsolicited text messages and phone calls under the Telephone Consumer Protection Act.
The agency will now engage in information-gathering on the topic and better prepare for changes to calls and texts brought on by AI-fueled tech. The FCC will also seek comment on how to define AI within the context of robocalls and robotexts, assess how emerging AI systems will affect consumer privacy rights under the TCPA and ultimately determine whether the agency will pursue additional steps to tackle the issue.
“I think we make a mistake if we only focus on the potential for harm,” Rosenworcel said. “We need to equally focus on how artificial intelligence can radically improve the tools we have today to block unwanted robocalls and robotexts.”
Rosenworcel spent much of her time raising concerns about voice cloning scams that test “our ability to separate vocal fact from fiction in order to commit fraud.” She pointed to a video circulating on the internet that uses the voice of Tom Hanks to “hawk dental plans.” More alarmingly, there’s been a proliferation of scams using the AI-generated voices of family members. The FCC will be tasked with learning how AI can battle back against those schemes.
“We are talking about technology that can see patterns in our network traffic unlike anything we have today,” Rosenworcel said. “They can lead to the development of analytic tools that are exponentially better at finding fraud before it reaches us at home. Used at scale, we can not only stop this junk, we can use it to increase trust.”
The risks involved with voice cloning also keep Democratic Commissioner Geoffrey Starks up at night. He said it’s critical that the FCC use its authority to address “malicious uses of AI.”
So far, Starks said he’s appreciated the whole-of-government approach to AI regulation, as well as engagement with sectors ranging from health care to homeland security. That “intersectionality” will be crucial as regulators zero in on AI’s benefits and harms.
The White House’s recently released AI executive order calls on the FCC to pursue rulemaking that supports efforts both to combat AI-facilitated robocalls and robotexts and to deploy AI systems that block such unwanted messages.
Republican Commissioner Brendan Carr said he’s worried that the Biden administration’s approach to AI regulation “is going to be overly prescriptive,” comparing the “lengthy” executive order to tech regulatory approaches he’s seen in Brussels.
“They have a very different mindset in the European regulatory circles when it comes to new technologies,” Carr said. “They study it, they have salons about it, they want to set very concrete rules on the front end. And you don’t see a tremendous amount of tech innovation taking place right now in Europe.”
Despite those misgivings, Carr said he was in favor of putting “some commonsense guardrails in place” when it comes to AI.
The FCC has felt some heat in recent weeks over its response to robocall scams. After the agency identified 20 submissions to its robocall mitigation database that were not compliant with its rules, members of the Senate Commerce, Science and Transportation Subcommittee on Communications, Media and Broadband said in a hearing that the FCC was not doing enough to protect consumers.