How the public sector is seeking safe use of neural networks
The introduction of artificial intelligence (AI) into public administration is proceeding along two parallel tracks: while work on the "Concept for the Regulation of AI until 2030" is being finalized, the government has already begun deploying AI in administration, the economy and the social sphere. Universal trusted AI-based solutions are being developed for federal and regional agencies. Meanwhile, cybersecurity experts warn that of the 43% of companies that have adopted neural networks, only 36% have put protections in place for them.
Work on the "Concept for the Regulation of AI until 2030" is nearly complete at the Ministry of Digital Development (Mintsifry), Kommersant was told at the ministry. "Finalization of the document is almost finished; in the near future we will coordinate it with other agencies," they said. Particular attention is paid to ethics and to human oversight of AI systems, depending on their tasks. The ministry added that modern AI systems can perform tasks they were not programmed for, which increases risks. In addition, at the end of May, closed and public discussions of future AI regulation were held within the framework of the St. Petersburg International Legal Forum. As Kommersant learned, representatives of the regulators, including Mintsifry, the Ministry of Justice and the Presidential Administration, discussed the final approaches to the "Concept for the Regulation of AI until 2030."
Meanwhile, the government has already begun introducing AI technology into administration, the economy and the social sphere. Standard solutions based on trusted AI are now being created for deployment by federal and regional agencies, Kommersant was told in the office of Deputy Prime Minister Dmitry Grigorenko. "The solutions will be universal, that is, not built for a single local task," they said. To define the goals and objectives of these solutions, the regions were surveyed, and work is proceeding on that basis. In early May, it became known that the government plans to create a Center for the Development of Artificial Intelligence on the basis of its Analytical Center. The unit is to coordinate the interaction of federal and regional authorities and business in carrying out specialized tasks (see Kommersant of May 12).
The safe operation and deployment of AI in Russia concerns not only regulators but also business. At the Trusted Technologies 2025 forum on May 20, Anton Basharin, Senior Managing Director of Appsec Solutions, said that AI has already become a new attack vector for cybercriminals. "Already 43% of Russian companies use AI in their processes, but only 36% of them have at least a minimal security policy. This makes the technology a convenient target for attacks," the company notes.
Especially vulnerable are AI assistants built into corporate infrastructure. They gain access to customer databases, internal documents and management systems, and can become a source of leaks, Appsec Solutions says.
In some cases, AI systems are trained on poisoned datasets containing backdoors (which allow an attacker to gain access to the system) and distorted data. Such models are compared to Trojan horses. In addition, malicious requests can overload AI systems, which risks halting processes and forcing investment in infrastructure, Anton Basharin said. Experts emphasize that modern AI models can not only process data but also independently collect, store and transmit information about users, often without proper restrictions.