- RSA hosts London Market Broker Roundtable – changing needs of customers highlighted
- Lockton report highlights that UK companies are underestimating the impact of a data breach
- Standard & Poor's warns on some potential reinsurer exposures in 2017
- Standard Life and Aberdeen Asset Management merger completed
- Prudential sells US broker-dealer network
- Admiral turnover and customer numbers up in first half – "marginal increase in profitability and more material increase in underlying dividend" says CEO Stevens
- Talanx net income up 14.9% in "pleasing" first half
- Bongi appointed EVP of Professional Lines at Brit Global Specialty USA (BGSU)
- Charles Taylor TPA appoints Director of Strategy and Performance
- Davies Group acquires Claims Management Services (CMSL)
- Fidelis appoints CRO Mathias to Board
- Schinnerer Group to acquire ICAT
9th August 2017
Clyde & Co seminar panel discusses the difference between autonomous and assisted driving and the issues arising after a claim
‘Assisted’ and ‘autonomous’ driving may seem similar, but the difference between them could prove to be a challenging battleground for the insurance industry in the development of self-driving vehicles. This was the view of a panel of experts speaking at global law firm Clyde & Co’s seminar on the future of autonomous vehicles last month.
The panel, comprising motor experts and Clyde & Co partners from around the world, said that there was a legal grey area between a car fully under its own control and a car which was nominally driving under its own control but still required driver supervision–even though the driver was not playing an active role.
Clyde & Co partner Mark Hemsted highlighted the potential legal issues that could be caused by an advanced car switching between different driving modes. He said: “Imagine a car driving in autonomous mode on a motorway and then switching to assisted mode when it turns onto a trunk road as conditions change. If the change was triggered in error, this could lead to product liability issues if an accident resulted. There’s also the question of whether the driver is aware of the change. The same could be the case with switching between assisted and manual modes.”
Mark Wing, a Clyde & Co partner, noted the difficulty insurers would face when challenging motor manufacturers over possible faults in their intelligent vehicles. He said: “Experience shows that motor manufacturers do not readily acknowledge design flaws in their products. They also know the vehicle far better than the insurer, which leads to the question of how any insurer will be able to prove fault.”
He also pointed out that product liability claims focusing on the car’s artificial intelligence (AI) system would be particularly problematic. He said: “A fully autonomous car is dependent on its sensors and AI. That AI is likely to be a connected service rather than a product built into the vehicle. What happens if the AI itself goes wrong? How can we understand the decisions made by an incredibly complex AI system? We may have to revisit the systems we have for assigning liability.”
The Automated and Electric Vehicles Bill, announced during the recent Queen's Speech, places liability for accidents involving autonomous vehicles on the motor insurer, with the possibility of recovery from the manufacturer through subrogation. Panel members speculated that manufacturers may provide their own insurance for drivers if the traditional motor insurance market failed to respond favourably.
Hemsted observed that data storage in autonomous vehicles would bring about major changes for the management of insurance claims. He said: “Automation is clearly going to affect evidence. Autonomous vehicles bristle with sensors so will record any accident. That’s going to provide a perfect factual history of the collision. Specifically, that data will show what role, if any, the driver played, which will be vital to these types of claims. If autonomous cars become the perfect witness in an accident, traditional methods of accident investigation and gathering physical evidence will no longer be necessary.”
To illustrate the issues surrounding assisted and autonomous driving modes, the panel highlighted the death in May last year of the driver of a Tesla car in Florida. The car’s Autopilot system requires driver supervision. Data recovered from the vehicle revealed that the car had warned its driver seven times to place his hands on the wheel. The data also showed that during the 37 minutes the car had driven in this assisted mode, the driver’s hands had been on the wheel for just 25 seconds. The panel felt that, at present, there was a risk that some assisted driving systems–modes in which cars effectively drive themselves but under the supervision of the human occupant–were so reliable they could lull drivers into a false sense of security.
The panel also felt that the brand names of assisted driving technologies, such as Tesla’s Autopilot, could confuse motorists as to the scope of their capabilities. This April, Tesla owners in the US filed a class action against the manufacturer for allegedly mischaracterising the capabilities of its new Autopilot 2 feature to consumers.