Strict-ish liability? An experiment in the law as algorithm



Some researchers in the US recently conducted an ‘experiment in the law as algorithm’. (One of the researchers involved with the project was interviewed by Ars Technica, here.) At first glance, this seems like quite a simple undertaking for someone with knowledge of a particular law and mathematical proficiency: laws are clearly defined rules, which can be broken in clearly defined ways. This is most true for strict liability offences, which require no proof of a mental element of the offence (the mens rea). An individual can commit a strict liability offence even if she had no knowledge that her act was criminal and had no intention to commit the crime. All that is required under strict liability statutes is that the act itself (the actus reus) is voluntary. Essentially: if you did it, you’re liable – it doesn’t matter why or how. So, for strict liability offences such as speeding it would seem straightforward enough to create an algorithm that could compare actual driving speed with the legal speed limit, and adjudicate liability accordingly.
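Stated naively, the strict liability check really is a one-line comparison. A minimal sketch (the speeds and limit below are illustrative, not taken from the study):

```python
def is_liable(measured_speed_mph: float, speed_limit_mph: float) -> bool:
    """Strict liability: the act alone decides; intent and knowledge are irrelevant."""
    return measured_speed_mph > speed_limit_mph

# Illustrative values, not from the study:
print(is_liable(57.0, 55.0))  # True: any excess, however small, is an offence
print(is_liable(55.0, 55.0))  # False: driving at the limit is not over it
```

As the experiment went on to show, the difficulty is not this comparison itself but everything around it: how often to sample, how to group samples into offences, and what tolerance (if any) to allow.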

This possibility of law as algorithm is what the US researchers aimed to test with their experiment. They imagined the future possibility of automated law enforcement, especially for simple laws like those governing driving. To conduct their experiment, the researchers assigned a group of 52 programmers the task of automating the enforcement of driving speed limits. A late-model vehicle was equipped with a sensor that collected actual vehicle speed over an hour-long commute. Working independently, each programmer wrote a program that computed the number of speed limit violations and issued mock traffic tickets.

Despite the seemingly clear-cut nature of what it means to break the speed limit, the experiment demonstrated that even relatively narrow and straightforward ‘rules’ can be problematically indeterminate in practice. Even though the programmers worked with quantitative data for both vehicle speed and the speed limit, the number of tickets issued varied from none to one per sensor sample above the speed limit. The results demonstrated significant deviation in number and type of tickets issued during the course of the commute, based on legal interpretations and assumptions made by programmers untrained in the law.

It is perhaps surprising that assumptions would bias an algorithm designed to indicate the frequency and magnitude of speeding offences. What assumptions could be involved when deciding whether the actual driving speed X is greater than the limit of Y? However, the researchers point out that laws were not created with automated enforcement in mind, and that even seemingly simple laws have subtle features that require programmers to make assumptions about how to encode them. For example:

“An automated system […] could maintain a continuous flow of samples based on driving behavior and thus issue tickets accordingly. This level of resolution is not possible in manual law enforcement. In our experiment, the programmers were faced with the choice of how to treat many continuous samples all showing speeding behavior. Should each instance of speeding (e.g. a single sample) be treated as a separate offense, or should all consecutive speeding samples be treated as a single offense? Should the duration of time exceeding the speed limit be considered in the severity of the offense? [p.11]”
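The two readings the researchers describe diverge sharply in how many tickets they produce from the same data. A sketch of both interpretations, run over invented one-per-second sensor samples (the trace and the 55 mph limit are made up for illustration):

```python
def tickets_per_sample(samples, limit):
    """Reading 1: one ticket for every sensor sample above the limit."""
    return sum(1 for s in samples if s > limit)

def tickets_per_episode(samples, limit):
    """Reading 2: one ticket per run of consecutive over-limit samples."""
    tickets, speeding = 0, False
    for s in samples:
        if s > limit and not speeding:
            tickets += 1  # a new speeding episode begins
        speeding = s > limit
    return tickets

# Invented per-second speed samples around a 55 mph limit:
trace = [54, 56, 57, 58, 54, 53, 56, 56, 54]
print(tickets_per_sample(trace, 55))   # 5 tickets: one per over-limit sample
print(tickets_per_episode(trace, 55))  # 2 tickets: two distinct speeding episodes
```

Both programs are faithful to the statute as written; the statute simply does not say which one is right, and nothing in the sensor data can settle it.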

When we manually enforce laws relating to speeding – or even when we use speed cameras – we know that these mechanisms capture only a fraction of the total number of instances of speeding. There is also usually a ‘buffer zone’ of a few miles per hour within which a driver might technically be speeding but would not get picked up. Particularly when police officers use speed guns to measure drivers’ speeds, there is room for discretion which cannot be built into an algorithm. As the researchers say, bias can be encoded into the system but, once encoded, the code is unbiased in its execution. The researchers conclude that discretion after the fact may actually be important even for the simplest of offences, like speeding. Offences requiring the mental element in addition to commission of the prohibited act are likely to be even harder to effectively encode ex ante:
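The buffer zone is exactly the kind of parameter a programmer must invent: the law does not specify it, yet any encoded version of enforcement must pick a number. A sketch of an enforcement rule with a tolerance built in (the 5 mph figure is an assumption for illustration, not a legal threshold):

```python
BUFFER_MPH = 5.0  # assumed tolerance; real-world enforcement margins vary by jurisdiction

def enforceable(measured_speed, limit, buffer=BUFFER_MPH):
    """Ticket only when the excess clears the buffer zone.

    A driver inside the buffer is still technically speeding, but the
    encoded 'discretion' lets it pass - once chosen, the code applies
    this margin uniformly, with no officer judgment after the fact."""
    return measured_speed > limit + buffer

print(enforceable(58.0, 55.0))  # False: within the buffer, no ticket
print(enforceable(62.0, 55.0))  # True: clearly over, ticket issued
```

Note that whatever value is chosen, the buffer becomes a rigid rule rather than discretion: the system is exactly as lenient at 59.9 mph as it is unforgiving at 60.1.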

“The question arises, then: What is the societal cost of automated law enforcement, particularly when involving artificially-intelligent robotic systems unmediated by human judgment? Our tradition of jurisprudence rests, in large part, on the indispensable notion of human observation and consideration of those attendant circumstances that might call—or even mandate—mitigation, extenuation, or aggravation. When robots mediate in our stead either on the side of law enforcement or the defendant, whether for reasons of frugality, impartiality, or convenience—an essential component of our judicial system is, in essence, stymied. Synecdochically embodied by the judge, the jury, the court functionary, etc., the human component provides that necessary element of sensibility and empathy for a system that always, unfortunately, carries with it the potential of rote application, a lady justice whose blindfold ensures not noble objectivity but compassionless indifference. [p. 28]”

This, perhaps, is an unsurprising view when considering complex offences that require that the offender acted with intention or knowledge or recklessness. But it also raises interesting questions for strict liability. Might it be the case that strict liability statutes are enacted not only under the assumption, but perhaps even in the hope, that not all violations will be picked up? Is the lower resolution of manual law enforcement actually preferable for less serious offences? The answer to this will depend in part on the seriousness of the offence in question and the justifications for the attendant sanctions: Deterrence? Retribution? Generation of revenue?

There is, of course, an important difference between seeing the algorithm as inadequate because it gets something factually wrong and seeing it as inadequate because some discretion might be preferable. For example, the discretion involved in deciding how offences should be delineated as a driver meanders above and below the speed limit is something we might wish to preserve. Further, the experiment demonstrated that hilly terrain caused the vehicle to exceed the speed limit despite the cruise control being set at the speed limit. A driver’s inability to control her speed with precision provides justification for a buffer zone. Thus, despite the conceptual simplicity of what it means to break the speed limit, the experiment in law as algorithm at least raises the possibility that, in some cases, strict-ish liability is actually what we optimally want.