Artificial intelligence (AI) skills: new cybersecurity skills to protect citizens

“…AI measures could support the normative system by realizing a procedural rights design that protects the attacked victims from the abuse of technologies…”

Artificial Intelligence as Tool for Victims’ Protection against Cybercrime. Authored by Prof. Dr. Rolf H. WEBER

Artificial intelligence (AI). Photo credits: Pixabay

The inherent complexities of artificial intelligence (AI) and algorithms require a prudent assessment of these new technological advances and their consequences for existing socio-ethical and legal systems as a whole. Some substantive principles need to be kept in mind when AI is used, particularly in personally sensitive areas; examples are (i) transparency in respect of data provenance (the background of data collection and processing), (ii) the creation of awareness (including about possible biases involved in the design and implementation of AI), (iii) the existence of accountability mechanisms (related to the «production» of AI results) and (iv) the availability of auditing models.

At first sight, artificial intelligence is a technology; nevertheless, apart from data security and data accuracy, human agency also plays a role, i.e. social systems cannot be run by machines alone. The legal framework enshrines fundamental rights; as a consequence, the ongoing dialogue on the ethics of AI should expand to consider the human rights implications of these technologies, including the risks of discrimination. Ethics and law can thereby act in a complementary way and inspire each other; efforts must be strengthened to develop a comprehensive approach to the socio-ethical and legal dimensions of AI value conceptualizations, moving in the direction of a potentially symbiotic relationship.

Committing cybercrime is ethically and legally unacceptable behavior. Victims of cybercrime are particularly vulnerable persons, since defense instruments cannot easily be implemented. Therefore, the protection of victims by States in combatting cybercrime appears to be a valuable governmental objective. Indeed, AI measures could support the normative system by realizing a procedural rights design that protects the attacked victims from the abuse of technologies (and unjustified power). AI and automation tools are able to increase the efficiency and fairness of defense actions. In particular, risk assessment tools have the potential to exercise a significant positive impact on the rights of individuals.

A key parameter of such risk assessment tools is their predictive accuracy. The collection of past data and, subsequently, its appropriate processing and scoring are of utmost importance. In view of the fast pace of technological advances, new variables appearing within short time intervals are an unavoidable factor. Therefore, risk elements need to be continually rebalanced in response to changed inputs. If the tools are developed and applied by private enterprises (rather than governmental agencies), precautionary measures are necessary in order to find an adequate balance between business secrecy needs, the exercise of private power and public interests. Furthermore, it must be ensured that risk assessment tools do not have a negative impact on the individual rights to equality and non-discrimination.
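The idea of a scoring tool whose risk elements are continually rebalanced as new variables appear can be illustrated with a minimal sketch. All feature names and weights below are hypothetical, invented purely for illustration; real risk assessment tools are far more sophisticated.

```python
# Illustrative sketch only: a toy risk scorer whose feature weights
# can be rebalanced when new variables emerge. The features and
# weights are hypothetical, not taken from any real tool.

class RiskScorer:
    def __init__(self, weights):
        # weights: mapping from feature name to its relative importance
        self.weights = dict(weights)

    def score(self, case):
        # Weighted sum over the features observed in this case;
        # features without a known weight contribute nothing.
        return sum(self.weights.get(f, 0.0) * v for f, v in case.items())

    def rebalance(self, new_weights):
        # Fold in newly observed variables and adjust existing weights
        # in response to changed inputs.
        self.weights.update(new_weights)


scorer = RiskScorer({"phishing_exposure": 0.6, "weak_credentials": 0.4})
baseline = scorer.score({"phishing_exposure": 1.0, "weak_credentials": 1.0})

# A new attack vector appears; weights are rebalanced accordingly.
scorer.rebalance({"deepfake_fraud": 0.5, "phishing_exposure": 0.4})
updated = scorer.score({"phishing_exposure": 1.0, "deepfake_fraud": 1.0})
```

The sketch also shows why transparency matters: whoever sets and updates the weights (a private enterprise or a governmental agency) effectively decides which risks count.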

Artificial intelligence, at least partly based on big data analytics, does not mainly follow traditional proofs of causality but puts the focus on correlations. As a consequence, the protection of vulnerable persons such as victims of cybercrime depends on further knowledge creation that can develop an ethical and legal environment in a non-discriminatory way. Apart from the mentioned principles of accountability and auditability, mechanisms of collective action that are in a position to enlarge the required knowledge might have to be broadened. The respective tasks are not easy to fulfil, but AI at least offers chances that should be seized so as not to miss positive opportunities.
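The shift from causality to correlation can be made concrete with a short sketch: two series that move together yield a high correlation coefficient, yet the number itself says nothing about which (if either) causes the other. The data below are invented solely for illustration.

```python
# Minimal sketch of correlation-based analysis (Pearson's r).
# A value near 1 indicates the series move together; it does NOT
# establish causation. All numbers here are invented.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical monthly figures: login attempts vs. cybercrime reports.
logins = [10, 20, 30, 40, 50]
reports = [1.0, 2.1, 2.9, 4.2, 5.0]
r = pearson(logins, reports)  # close to 1: strongly correlated
```

Precisely because such tools surface correlations rather than causes, the accountability and auditing principles mentioned above are needed to keep correlation-driven conclusions from producing discriminatory outcomes.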

The author, Prof. Dr. Rolf H. Weber, is Professor of international business law at Zurich University, where he acts as co-director of the Research Program on Financial Market Regulation, the Center for Information Technology, Society, and Law and the Blockchain Center. Furthermore, he was Visiting Professor at Hong Kong University and is a practicing attorney-at-law in Zurich. Prof. Weber is a member of the editorial boards of several Swiss and international legal periodicals and frequently publishes on issues of global law. His main fields of research and practice are IT and internet law, international trade and finance as well as competition law.

Bringing the voice of youth on the digital world to you from 35+ countries. Policies and governance: online safety, cybersecurity & the hottest internet issues.