Canada's own Cold War

BY PHILIP GIRARD

As the Korean situation seems increasingly to recall the Cold War era, some legal history from our own Cold War was featured last year in a rather disturbing media story.

Researcher Dennis Molinaro was trying to find out more about a secret wiretapping program run by the RCMP in the 1950s called Project PICNIC. Denied access at first to the secret Order in Council authorizing the program, he uncovered many more secrets.

A million pages of documents on this and related security issues had never been transferred to Library and Archives Canada by the Privy Council Office. In the memorable image provided by a clever reporter, the material would produce a stack of paper as high as the CN Tower.

The CBC's story on Molinaro's quest elicited the release of the Order in Council the next day. He recently elaborated on Project PICNIC in an article in the Canadian Historical Review.

The scheme was authorized under the Emergency Powers Act, a War Measures Act 'lite' passed in March 1951 to deal with the Korean crisis. In July, the secret Order in Council PC 3486 permitted the RCMP to engage in covert phone tapping against individuals, organizations and embassies on Canadian soil.

While the existence of the order was revealed in Parliament, its content was not: An Order in Council of the previous day had amended the Regulations Act to permit the non-publication of certain orders.

While such measures might arguably be justified in wartime, the government wanted to keep this form of intelligence-gathering going after the end of the hostilities in Korea in July 1953. Continuing PICNIC could not be done under the expired EPA, so the government decided to use the Official Secrets Act to achieve similar ends.

Section 11 of the OSA permitted a justice of the peace to issue a search warrant and to seize any "evidence" that indicated an offence under the Act had been committed or was about to be committed. The beauty of this route was that RCMP officers could be named as justices of the peace under the RCMP Act, and the OSA also gave RCMP officers of the rank of superintendent and above the power to issue such warrants in cases of emergency.

The Department of Justice position was that "evidence" could include oral communications and — voilà — s. 11 authorized phone tapping.

Thus, the RCMP continued to listen in on the conversations of Canadian citizens suspected of disloyalty during peacetime, for an open-ended period and with essentially no safeguards. Move over, Edward Snowden! This was a mass surveillance scheme in the making, long before the internet, accomplished without the knowledge or oversight of Parliament.

Two of the many interesting aspects of this story are as follows.

First, Molinaro proves that, in carrying out these activities, the RCMP were not going rogue. Rather, their actions were carried out under duly enacted legal authority. Government lawyers and officials found the mode of proceeding distasteful, but they rationalized it without too much difficulty. Canada's allies supposedly did it, and the end — averting serious threats to national security — justified the means.

Second, deals regarding these activities had to be negotiated (secretly, of course) with the telephone companies. Bell Canada, in particular, did not just roll over.
It wanted assurances that no law was being broken, and it sought confirmation of the program's legality from the deputy minister of justice; it also refused to accept warrants signed by anyone other than the commissioner of the RCMP or, if unavailable, their deputy. Other phone companies complied without complaint.

It is not clear (yet) how many warrants were issued, what kind of information was acquired pursuant to the program or what was done with it. The RCMP continued its phone-tapping activities into the 1970s under the aegis of the OSA.

Presumably, in the pre-computer era, sheer logistics prevented the monitoring and maintenance of data on huge numbers of people, compared to what is possible now. Future research may enlighten us on this point, but the actual number of people subject to surveillance is not the point. The lack of transparency and the potential for abuse are what matter.

In reading the Molinaro article, one imagines that similar conversations are probably going on within the security establishment right now. He has used the techniques of legal history to provide a valuable window on how government actors conceptualize security measures when they think no one is watching.

Philip Girard is a legal historian and professor at Osgoode Hall Law School. He's also associate editor at the Osgoode Society for Canadian Legal History. His email address is pgirard@osgoode.yorku.ca.

AI needs debate about potential bias

BY CAROLE PIOVESAN

Artificial intelligence — or, at a high level, computer systems that are self-learning and self-executing — is introducing profound efficiencies to the legal industry by automating an increasing number of tasks traditionally performed by legal professionals.

In addition to being applied in the daily practice of law — due diligence, case law research and document review, to name a few examples — AI is also being used to assist with judicial decision-making. A good example of this is in criminal law.

In certain jurisdictions across the U.S., judges are employing AI to conduct risk assessments of defendants in sentencing and parole decisions. These AI-based assessments analyze a comprehensive set of data (such as the defendant's age, prior criminal record, postal code and employment history) to predict recidivism rates. The idea, in part, is that AI systems can promote fairness by reducing the influence of human bias.

However, as Kathryn Hume, vice president of product and strategy for integrate.ai, recently stated, "[A]s algorithms play an increasingly widespread role in society, automating — or at least influencing — decisions that impact whether someone gets a job or how someone perceives her identity, some researchers and product developers are raising alarms that data-powered products are not nearly as neutral as scientific rhetoric leads us to believe."

Hume is reflecting upon two issues associated with the mainstream use of AI. First is the issue of bias. The output of an AI system depends on the data set available to and analyzed by that system. Limits to the data set will influence the system's decisions.

Second is the issue of transparency. Whether algorithms should be transparent (or at least have an explainable analytical process) remains controversial. Many of these systems are proprietary and protected as trade secrets.
In addition, some of these systems are so complex that even open systems can be next to impossible to understand, in some cases requiring another AI to interpret the analytical process.

The well-known case of Wisconsin v. Loomis illustrates the practical effects of both these issues. Eric Loomis, 35, was arrested for his involvement in an attempted drive-by shooting. At the sentencing hearing, the presiding Justice Scott Horne rejected a plea deal and sentenced Loomis to six years' imprisonment, explaining that "[t]he risk assessment tools that have been utilized suggest that you're extremely high risk to reoffend."

The tool referred to by Horne is a system called COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions. It is used in several U.S. jurisdictions to predict pre-trial recidivism, general recidivism and violent recidivism using individual and group data.

Loomis challenged the court's reliance on COMPAS on the basis that, among other reasons, the proprietary software violated his right to due process. The appellate court affirmed the lower court's reliance on COMPAS, stating that COMPAS is simply one tool available to the judge to assist in determining an appropriate sentence. Human judgment is still applied to the sentence.

However, an in-depth study of the COMPAS system conducted by the non-profit investigative news organization ProPublica raised concerns about potential bias in the system's outputs. ProPublica reported that many of the questions asked by COMPAS contain statistical biases pointing to a targeted group.

ProPublica ran its own detailed statistical test to isolate the effect of race from criminal history and recidivism. When controlling for prior crimes, future recidivism, age and gender, African-American defendants were 77 per cent more likely to be assigned higher risk scores than white defendants. The study also found that female defendants were 19.4 per cent more likely to get a higher score than men. (A simplified sketch of this kind of test appears below.)

The Loomis decision also highlights the importance of transparency. ProPublica's methodology had to be extensive precisely because the COMPAS system itself is protected.

Algorithmic transparency in the judicial context should be given greater importance for at least two reasons.

First, it permits meaningful review of a system's decisions. A judge's reasons must permit meaningful appellate review and must explain how and why the judge came to a particular decision. Where the output of an AI system is a key feature in a judge's decision, the algorithmic scoring process should be open to court challenge.

Second, it enables public trust in the administration of justice. AI systems can have a positive impact on society. Staying with the same example, AI-based risk assessment systems are credited with slowing the growth of the prison population in Virginia to five per cent from 31 per cent in a decade. A better understanding of the outputs of AI systems in judicial decision-making can reduce skepticism and promote a perception of fairness.

AI is expected to help lawyers and judges focus on high-value work instead of more routine tasks and matters. But we need a broader discussion about the standards that should govern the transparency of these systems, particularly when they are used in the context of judicial decision-making.

Developers, policy-makers, lawyers, academics and enterprises each approach these issues from different perspectives.
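For readers who want to see the shape of the test described above, the following is a minimal sketch assuming a logistic-regression approach of the kind ProPublica describes. It is not ProPublica's code or data: the data are synthetic, every variable name and coefficient is hypothetical, and the point is only to show how a group effect on risk scores can be estimated while holding legitimate factors constant.

```python
# A minimal, self-contained sketch (synthetic data, illustrative only) of a
# covariate-controlled bias test: regress a binary "high risk score" outcome
# on a group indicator while controlling for prior crimes, future recidivism,
# age and gender, then read the group coefficient as an odds ratio.
# All variable names and numbers here are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Hypothetical defendant covariates.
priors = rng.poisson(2, n)            # number of prior offences
age = rng.integers(18, 70, n)         # age at assessment
female = rng.integers(0, 2, n)        # 1 = female
reoffended = rng.integers(0, 2, n)    # observed two-year recidivism
group = rng.integers(0, 2, n)         # 1 = group of interest

# Simulate a biased scoring tool: the score depends on legitimate factors
# but also, independently, on group membership (a log-odds bump of 0.57,
# i.e., an odds ratio of about 1.77).
logit = -2.0 + 0.4 * priors - 0.02 * age + 0.8 * reoffended + 0.57 * group
high_score = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit a logistic regression controlling for the legitimate factors.
X = sm.add_constant(np.column_stack([priors, age, female, reoffended, group]))
fit = sm.Logit(high_score.astype(float), X).fit(disp=False)

# The exponentiated coefficient on `group` estimates how much more likely
# (in odds) that group is to receive a high score, all else held equal.
print("odds ratio for group:", np.exp(fit.params[-1]))
```

On this synthetic data, the printed odds ratio should land near 1.77, which is the same "X per cent more likely" form in which findings of this kind are typically reported.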
Canada, as a leader in AI research and innovation, should also lead in tackling some of the thorny ethical and legal issues associated with the incorporation of AI into law.

Carole Piovesan is a litigator at McCarthy Tétrault LLP. She is the firm lead on AI for the cybersecurity, privacy and data management group and co-author of the firm's paper "From Chatbots to Self-Driving Cars: The Legal Risks of Adopting Artificial Intelligence in Your Business."