A deeper dive into machine learning methods: their opportunities, limitations, risks and uncertainties

Devajyoti Ghose and George Soulellis

There is … a longer term existential threat that will arise when we create digital beings that are more intelligent than ourselves. We have no idea whether we can stay in control. But we now have evidence that if they are created by companies motivated by short-term profits, our safety will not be the top priority. We urgently need research on how to prevent these new beings from wanting to take control. They are no longer science fiction.

Geoffrey Hinton, Nobel Banquet speech, 2024

9.1 INTRODUCTION

When Alan Turing asked “can machines think?” and postulated the imitation game in 1950, he did not interpret the question literally. Instead, he interpreted the question as “can what machines do be called thinking?”. In the early days of artificial intelligence (AI), when Deep Blue beat Garry Kasparov in a chess match in 1997 (an impressive feat at the time), machines were trained to think “like” humans (that is, loosely speaking, trained to observe and act as a human would act in a similar situation). But, in the imitation game, Turing did not imply that machines think because they can imitate how humans think; he only required that a machine be able to successfully fool a human observer.