The Law Society held a conference last year on the legal regulation of artificial intelligence (AI) and the use of AI in legal practice. You can read my summary of it below and watch the recording of the conference on YouTube.
Jonathan Smithers - President, Law Society
Jonathan touched upon the thought-provoking and abstract concept of super-humanism. As technology becomes integral not only to our daily lives but to how we function, are we becoming more than human? He traced the evolution of the concept of a legal person, which has expanded from men only to include companies, animals, rivers and even holy books. The European Parliament has asked: "should there be a new category to describe robots, similar to other legal categories such as natural persons, legal persons, animals, or objects?"

Under pressure from shareholders and consumers, developers are working at an increasing rate, and the risks that come with this pace need to be understood. Jonathan gave a very relevant example: Tay, the chatbot released by Microsoft, which, within hours of going live on Twitter, posted antisemitic, racist and sexist comments after learning from internet trolls. Jonathan asked three questions on the liability of AI:
What legal responsibility arises from a robot's harmful actions?
What will be the effect of a robot developing autonomous and cognitive features?
How can a machine be held responsible, wholly or partly, for its acts or omissions?
He concluded that unless action is taken now, the legal system may be unable to keep pace with future technological developments.