The US House of Representatives has barred congressional staff from using Copilot, Microsoft's AI-powered assistant, amid a dispute over the technology's security. Officials are concerned that the assistant, which routes code and text suggestions through cloud services outside government control, could expose sensitive information.
The risk of confidential data leaking through unapproved cloud services prompted officials to block Copilot across House systems. Microsoft has acknowledged these concerns, announcing plans to release AI tools designed to comply with the stricter regulations and security standards that government agencies require. It expects those tools to become available over the course of 2024.
The House is not alone in exercising caution: other AI tools have faced similar restrictions, reflecting widespread concern about the potential for data leaks. Microsoft, for its part, is working to address these challenges. Its latest guidance for Copilot Studio, a tool for building customized AI assistants, limits government institutions' exposure to non-government infrastructure by confining processing to data stored in the US under strict access controls.
If nothing else, this episode underscores that AI systems are growing more powerful and that the accompanying risks deserve close attention. The White House's directive requiring every federal agency to appoint its own chief AI officer is further evidence of a cautious but vigorous strategy for bringing AI and government together.