Four United States senators wrote to Federal Trade Commission (FTC) Chair Lina Khan requesting information about the FTC's efforts to track the use of artificial intelligence (AI) in defrauding older Americans.
In a letter addressed to Khan, US Senators Robert Casey, Richard Blumenthal, John Fetterman and Kirsten Gillibrand highlighted the need to respond effectively to AI-enabled fraud and deception.
Underscoring the importance of understanding the extent of the threat in order to counter it, they wrote:
“We ask the FTC how it is working to collect data on the use of AI in scams and ensure this is accurately reflected in its Consumer Sentinel Network (Sentinel) database.”
The Consumer Sentinel Network is the FTC’s investigative cyber tool used by federal, state and local law enforcement agencies, which includes reports about various scams. The senators asked the FTC chair four questions about its AI scam data collection practices.
The senators wanted to know whether the FTC has the ability to identify AI-powered scams and tag them accordingly in Sentinel. Additionally, the commission was asked whether it could identify generative AI scams that went unnoticed by victims.
The lawmakers also requested an analysis of Sentinel’s data to identify the prevalence and success rates of each type of scam. The final question asked whether the FTC itself uses AI to process the data collected by Sentinel.
Senator Casey is also chairman of the Senate Special Committee on Aging, which studies issues related to older Americans.
On November 27, the United States, the United Kingdom, Australia and 15 other countries jointly released global guidelines to help protect artificial intelligence (AI) models from tampering, urging companies to make their models “secure by design.”
Exciting news! We supported @NCSC and 21 international partners to develop the “Guidelines for Secure AI System Development”! This is operational collaboration in action for secure AI in the digital age: https://t.co/DimUhZGW4R #AISafety #SecureByDesign pic.twitter.com/e0sv5ACiC3
— Cybersecurity and Infrastructure Security Agency (@CISAgov) November 27, 2023
The guidelines mainly recommend maintaining tight control over the infrastructure of AI models, monitoring for any tampering with models before and after release, and training staff on cybersecurity risks.
However, the guidelines did not address the use of image-generating models and deepfakes, nor data collection methods and possible controls around their use in training models.