EDITORIAL
Between dystopian fantasy and techno-bliss, it is hard to determine where artificial intelligence (AI) will lead humanity. One thing is certain: nothing will ever be the same again.
Our professional activities, our personal lives, our ways of thinking, deciding, acting, consuming, producing and caring for ourselves are all going to be permanently disrupted. But tomorrow's world will depend above all on what we collectively decide to do, or not to do, with artificial intelligence, and on the "trust" we manage to build "in" and "through" these technologies. To be "trusted", AI must, according to the European
Commission, respect seven principles, including technical robustness and security, and respect for privacy and data governance. The widespread use of these technologies will introduce new risks, not only because of the increased attack surface, but also because of their intrinsic vulnerabilities (poisoning attacks, adversarial attacks, etc.). And since technological progress is ambivalent by nature, AI is also used by attackers all along the "kill chain": to carry out reconnaissance on a target, bypass protection mechanisms, build "deepfakes", automate attacks, and so on. The first challenge is therefore to secure not only artificial intelligence itself, but also the data it absorbs and produces.
Data and artificial intelligence are inextricably linked. And the figures are staggering: every second, 7 megabytes of data are created for each person, pushing the global volume of data to an expected 181 zettabytes in 2025, up from 2 zettabytes in 2010. Fortunately, AI is also revolutionizing the "art" of cybersecurity, enabling us to improve authentication, data security, threat detection, code analysis, orchestration, incident response and other capabilities.
This technological leap should also lead us to radically reinvent our policies, organizations and skills, so that they are "AI-ready".