AI Ethics Debate: The little matter of right and wrong. Will an ethical framework limit AI’s potential or make sure it’s fit to fly?
Prof Nigel Crook, Associate Dean: Research and Knowledge Exchange, Faculty of Technology, Design and Environment, Oxford Brookes University
AI is arguably not just the biggest technological advance of our time, but also the biggest commercial opportunity, promising unprecedented gains in innovation, transformation and productivity. However, as society prepares for large-scale adoption, there are global calls for an ethical framework to govern the way AI applications are developed and applied. Some fear this will clip AI’s wings, slow innovation and dilute commercial returns. Others insist AI’s long-term benefits can only be assured by a considered approach that puts humans above technology. Who’s right, what can we do about it, and how can we make sure AI achieves its potential without threatening our lives?
Nigel has more than 30 years’ experience as a lecturer in computer science and researcher in AI. He currently leads research into cognitive robots at Oxford Brookes and is an expert reviewer and evaluator for the European Commission. His research interests include biologically inspired machine learning, embodied conversational agents, social robotics and human-robot interaction. He graduated in computing and philosophy from Lancaster University and has a PhD in medical expert systems.