Assessing the trustworthiness of artificial intelligence

Roberto V Zicari

INTRODUCTION

This chapter illustrates how to evaluate whether an artificial intelligence (AI) system, used, for example, as a decision-making tool, is trustworthy. Using machines to make decisions raises several ethical considerations for society. AI ethics has increasingly focused on converting abstract principles into practical action. This chapter describes a new process for assessing trustworthy AI in practice, called “Z-inspection”.1

1 Z-inspection® is a registered trademark. Z-inspection® is open access and distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license. For reasons of textual clarity, it is not the publisher’s house style to use symbols such as ®. However, the absence of such symbols should not be taken to indicate absence of trademark protection; anyone wishing to use product names in the public domain should first clear such use with the product owner.

AI,2

2 Artificial intelligence refers to systems that display intelligent behaviour by analysing their environment and taking actions (with some degree of autonomy) to achieve specific goals. AI-based systems can be purely software-based
