The WS Society

The Future Legal Implications of Automated Vehicles

By Victoria Hayward, Madeleine Chambers and Fraser Meighan.

Victoria, Madeleine and Fraser were law student scholars, participating in the WS Society Summer Scholarship programme during July and August 2021. This article summarises their research and presentation.

Introduction 

The introduction of autonomous vehicles in the UK has numerous legal implications. Currently, provisions and guidance in most areas are vague, unclear and not fit for purpose. In their first consultation paper, the Scottish Law Commission and the Law Commission of England and Wales highlighted three requirements that are crucial for the introduction of autonomous vehicles in the UK: first, that the use of autonomous vehicles is lawful; secondly, that appropriate mechanisms and monitoring are in place to ensure adequate safety; and finally, that there are appropriate mechanisms for attributing civil and criminal liability for compensation purposes. Meeting these standards will require significant consideration and review of current legislation and guidance.

The legal implications of ‘self-driving’ vehicles 

The UK Government has indicated that ‘self-driving’ vehicles could be allowed on British roads by the end of 2021. 

Currently, self-driving vehicles are regulated by the Automated and Electric Vehicles Act 2018 (‘the 2018 Act’). The 2018 Act created a new form of civil liability where vehicles are driving themselves. The key provisions for determining whether a vehicle is self-driving are sections 1 and 8 of the 2018 Act. Section 1(1)(a) provides that the Secretary of State must prepare a list of all motor vehicles that are, in the Secretary of State’s opinion, “designed or adapted to be capable, at least in some circumstances or situations, of safely driving themselves”. The crucial phrase here is “driving themselves”: under section 8(1)(a), a vehicle is driving itself when it is “operating in a mode in which it is not being controlled, and does not need to be monitored, by an individual”.

Why does the definition of ‘self-driving’ matter? The Scottish Law Commission and the Law Commission of England and Wales are working to create a legal framework for automated vehicles in preparation for their future deployment on our roads. Under their provisional proposals, if a vehicle is classified as capable of safely driving itself and the automated system is engaged, the person in the driving seat will be a ‘user-in-charge’ rather than a driver. This will have important legal consequences. Firstly, the user-in-charge will be permitted to undertake activities which drivers of conventional vehicles are prohibited from doing, such as reading emails or watching a film. Secondly, if an accident is caused by a self-driving vehicle, the insurer will be liable to compensate the victim irrespective of any fault on the part of the user-in-charge. Thirdly, the user-in-charge could not be prosecuted for a wide range of criminal offences, such as careless or dangerous driving.

There are a number of issues that must be considered before the responsibilities of drivers change. One of the most significant is which vehicles would comply with the 2018 Act’s definition of self-driving. Currently, no vehicle has been listed as an automated vehicle within the meaning of the Act. It therefore remains uncertain which vehicles would be lawful or safe for these purposes, and indeed to which vehicles the 2018 Act would apply.

Similarly, it is imperative that the law is clear about which vehicles do not meet the test of self-driving. Motor vehicles are becoming increasingly technology-based, with driving assistance features taking over much of the task of driving. Tesla, along with companies such as Uber and Waymo, has developed vehicles that are capable of driving themselves most of the time but still require a driver to monitor the environment and intervene when necessary.

The legislation only requires listed vehicles to be capable of driving themselves in “some circumstances or situations”. This raises important questions which go to the meaning of self-driving. Where do we draw the line on activities that the user-in-charge is allowed to undertake? If the user-in-charge has diverted their attention and relaxed into other activities, can they be relied on to intervene when things go wrong? And if so, when and how should they be called on to take over driving? 

Following a recent consultation on the safe use of ‘Automated Lane Keeping Systems’ (ALKS) on motorways, the UK Government has indicated that vehicles with ALKS technology would meet the test of self-driving. Thatcham Research and the Association of British Insurers have urged caution over defining such systems as self-driving. Matthew Avery, Director of Research at Thatcham, explained:

The Government’s plan threatens road safety. Motorists could feasibly watch television in their car from early next year because they believe their Automated Lane Keeping System can be completely trusted to do the job of a human driver. But that’s not the reality. The limitations of the technology mean it should be classified as ‘Assisted Driving’ because the driver must be engaged, ready to take over.

It is hoped that clear guidance will be provided in the near future. There are still many unanswered questions following the 2018 Act, but as the UK prepares for the introduction of self-driving vehicles on our roads, these questions are now becoming urgent. 

Civil Liability and the Product Liability Framework 

When considering civil liability in the context of autonomous vehicles, section 2(1) of the 2018 Act imposes a new form of liability directly on insurers. Section 5 allows the insurer to bring an action against “any other person liable” once it has settled its liability in the first instance. At this stage, it is likely to be insurers who wish to bring claims under the Consumer Protection Act 1987 (‘the 1987 Act’) against manufacturers, producers and suppliers. However, there is uncertainty as to whether, and if so when, the 1987 Act can be relied upon in the context of defective software in automated vehicles. This issue is not confined to autonomous vehicles: the question of whether “over-the-air” software can be deemed a defective product under the 1987 Act affects numerous industries and technologies.

The term “product” is defined in section 1(2) of the 1987 Act, but it is unclear whether software updates fall within that definition. In St Albans City and DC v International Computers Ltd [1996], it was held that software must have a physical medium to be considered a product under the Act. On that view, software supplied with the original vehicle would be covered, but software updates would not. The complex neural networks used in automated vehicles would also create difficulties in ascertaining which software caused the defect, as it would be crucial to establish whether the fault lay in the original software or in a subsequent update. Nevertheless, in UsedSoft GmbH v Oracle International Corporation (C-128/11) it was suggested that “the on-line transmission method” could be seen as the functional equivalent of supply on a physical medium for the purposes of product liability. This would seemingly allow software updates to fall within the definition of “product”, but the point has not been confirmed. The uncertainty in this area creates a legal framework that is unpredictable and cannot be relied upon by insurers in secondary claims.

Under the 1987 Act, a product is defective if “the safety of the product is not such as persons generally are entitled to expect”. We must then consider when, under the current product liability framework, software would be deemed defective. Responding to the first consultation paper, the European Commission Expert Group on Liability and New Technologies asked whether “unpredictable deviations in the decision-making path” could be considered a defect in the context of self-learning AI systems, which make decisions based on learned knowledge. As a result of that learning, the decision a system makes today may not be the same decision it makes later.

Fitting this spectrum of decision-making into the binary categories of defective and non-defective would require examination of the algorithm behind the decision. This is particularly difficult given the amount of data retention that would be required to do so. Clearer provisions would improve legal certainty in this area, but they may also force this spectrum into a binary model. At this point it is difficult to know which would be more harmful.

We should also consider who would be liable under the 1987 Act, and when that liability would cease. A point of contention is whether these parties should retain liability for the lifetime of the vehicle. Software inevitably ages over time, and that ageing can result in flaws. It is unlikely that the 1987 Act was intended to amount to a “lifelong warranty” over software and its general wear and tear. However, clearer guidance is required on how we establish when liability for the software ceases.

It is evident that the current product liability framework lacks legal certainty. A review of the provisions is inevitable. However, whether this review should be of the 1987 Act generally, or specifically for automated vehicle software is unclear. 

Automated Vehicles and Criminal Liability 

There are a number of aspects that must be given due consideration when analysing the potential criminal liabilities arising from automated vehicles. Firstly, current road traffic legislation in the UK is not equipped to deal with an increase in vehicle automation on our roads, and the criminal law must be updated to remove deficiencies and gaps. We have already seen examples of the law being revised: in 2017, concerns were raised about remote parking and the laws prohibiting mobile phone usage whilst driving, and following a review of the legislation, amendments allowed remote parking in certain circumstances. Nevertheless, there are still many instances where a review of the relevant legislation is needed to create a framework that is fit for purpose.

Secondly, it is unlikely that the general public would be inclined to buy or use an automated vehicle if they could be held criminally responsible for an accident caused wholly by the vehicle itself rather than by their own error.

Thirdly, a new system for dealing with infractions caused by automated vehicles must be introduced, with the aim of improving the operational performance and safety of automated vehicles on our roads. Ultimately, criminal sanctions may not be the most appropriate response to the issues presented by automated vehicles. If an automated vehicle were found to have a fault or defect, and the relevant manufacturer faced criminal repercussions resulting in an economic sanction, the outcome would do little to improve the safety of automated vehicles. As proposed jointly by the Scottish Law Commission and the Law Commission of England and Wales, a regulatory body charged with the supervision of these vehicles could be a highly effective alternative. Criminal sanctions carry a strong societal stigma, and their proper use allocates blame and punishes prohibited behaviour. If the goal is to make automated vehicles as safe as possible, rather than to express moral disapprobation for wrongdoing, then regulatory sanctions would be far more effective. A system allowing constant review and monitoring of automated vehicles could ensure consistency across all vehicles and aim to prevent serious failings.

As with any emerging area, vehicle automation and its role in society present challenges and change for the law. Understanding the potential implications for criminal liability goes hand in hand with understanding the regulatory and safety aspects of automated vehicles as well as the civil liability framework. 

Conclusion 

An increase in vehicle automation on our roads is a probability that cannot be ignored, and with this increasing presence come new challenges that pose real questions and consequences under the law. The legal issues discussed in this article are only a small proportion of what would need to be reviewed before these vehicles are introduced to the UK market. The UK and Scottish Governments, as well as many private organisations, have invested heavily in the research and development of autonomous vehicles. However, significant public and private funding is still required to launch autonomous vehicles successfully in the UK. We cannot ignore the possibility that, despite the research currently being done in this area, many factors could cause the introduction to be unsuccessful.