THE COMMUNITY FOR SECURITY COUNCIL, APPOINTED BY CLUSIT, FOCUSES ON AI AND THE METAVERSE FROM A CYBERSECURITY PERSPECTIVE
On 3 March, SCAI Partners participated in a conference organised by the Community for Security Council of CLUSIT (the Italian Association for Information Security), which provided an excellent platform for cyber security and privacy professionals to discuss current issues.
One major topic of discussion was the Metaverse, an immersive environment where users interact through avatars. It is also a virtual space built on multiple devices and technologies, which hackers can exploit to target users, exposing companies to new risks such as spyware and avatar scams.
The Metaverse handles a wide range of data, including personal, sensitive, biometric, geolocation, and tracking data, which creates a fertile environment for cyber attacks. It is therefore vital that companies protect their digital identities. This can be achieved by adopting privacy and security principles by design, promoting greater awareness and training, and cooperating with the relevant authorities.
The Council also discussed data protection compliance in the development, production, use, and deployment of Artificial Intelligence (AI) solutions, which is further complicated by the absence of a universally recognised definition of AI. The technology is constantly evolving and its global impact on society is becoming increasingly significant.
Given the dynamic nature of the field, Europe is progressing towards the release of its first Artificial Intelligence Regulation (AI Act). Similar to the General Data Protection Regulation (GDPR), this legislation will require companies engaged in the use, production, procurement, and development of Artificial Intelligence to be prepared and equipped. It will also establish the fundamental aspects required for regulating AI-related scenarios.
Even when AI systems adhere to the core principles of the GDPR – such as lawfulness, fairness, transparency, integrity, confidentiality, and accountability – their use can still pose a significant risk to users, particularly when personal data is involved.
To reduce the risk of privacy violations, it is essential to adopt specific techniques, including privacy-enhancing technologies (PETs), from the earliest stages of privacy by design. According to the OECD, PETs are a collection of digital technologies and methods that facilitate the collection, processing, analysis, and sharing of information while safeguarding the confidentiality of personal data. In particular, they allow a relatively high level of utility to be obtained from data while minimising the need for collection and processing. These technologies include data perturbation, synthetic data, federated learning, and the use of data formats that are less “human readable”.
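As a concrete illustration of data perturbation, the sketch below shows one common approach: adding Laplace-distributed noise to an aggregate statistic before release, as used in differential privacy. This is a minimal, hypothetical example (the function names and the epsilon value are our own, not drawn from the article or the OECD definition), not a production-ready PET.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    if u == -0.5:  # avoid log(0); probability ~2**-53
        u = 0.0
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def perturb_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    # For a counting query (sensitivity 1), Laplace noise with
    # scale 1/epsilon provides epsilon-differential privacy:
    # smaller epsilon means more noise and stronger privacy.
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

The perturbed count remains useful in aggregate (noisy values average out to the true count over many queries) while any single released value no longer reveals the exact underlying figure.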
Wherever Artificial Intelligence interacts with data protection, we recommend exercising caution and scrutinising assumptions concerning automated decision-making processes. These processes refer to the use of tools such as machines or algorithms to make decisions, with or without human intervention, which can affect those involved in several ways. To guarantee sufficient protection, companies need to adopt best practices that ensure effective review and human involvement in decision-making. This can be achieved by employing suitable user interfaces, providing incentives, and supporting employees.