Colloquium: “A Panel Discussion on Algorithms and Society”

Tuesday, 1 December 2020, 3pm (London time), on Zoom.


Title: Mapping the Public Debate on Ethical Concerns: Algorithms in Mainstream Media

Speaker: Balbir S. Barn, Department of Computer Science, Middlesex University, London.

Abstract: Algorithms appear in the mainstream news media on an almost daily basis. Their context is invariably artificial intelligence (AI) and machine learning decision-making. In media articles, algorithms are described as powerful, autonomous actors capable of producing actions with consequences. Despite this tendency towards deification, the prevailing critique of algorithms focuses on the ethical concerns raised by decisions resulting from algorithmic processing. This paper reports results from the first systematic mapping study of articles appearing in leading UK national newspapers, analysed from the perspective of widely accepted ethical concerns such as inscrutable evidence, misguided evidence, unfair outcomes and transformative effects.

Bio: Balbir S. Barn, PhD, is Professor of Software Engineering at Middlesex University. Balbir has 15 years of industrial research experience working in research centres at Texas Instruments and JP Morgan Chase, as well as leading academically funded research (over £2.5 million). Balbir’s primary research area is model-driven software engineering. Currently, Balbir is conducting research on simulation environments for digital twins, focusing on the study, evaluation and advancement of value-sensitive design and on epistemic concerns in the use of simulation technology. Balbir has published over 125 peer-reviewed papers in leading international conferences and journals, and has recently completed an edited book, “Advanced Digital Architectures for Model-Driven Adaptive Enterprises”.

Title: Mastering the Algorithm and Mentoring the Human Decision-Maker

Speaker: Mandeep K. Dhami, Department of Psychology, Middlesex University, London.

Abstract: I present evidence for why algorithms should be preferred over unaided human judgment, and attempt to offer measured solutions to problems associated with so-called ‘algorithmic bias’. Although I will refer to the crime and justice domains, the issues I raise are applicable to other domains.

Bio: Mandeep K. Dhami, PhD, is Professor in Decision Psychology at Middlesex University, London. She previously held academic and research positions in the UK (University of Surrey and University of Cambridge), Canada (University of Victoria), the USA (University of Maryland) and Germany (Max Planck Institute for Human Development). Mandeep has also worked as a Principal Scientist for the UK Ministry of Defence and has work experience in two British prisons. Mandeep is an internationally recognised expert on human judgment and decision-making, risk perception, and uncertainty communication. She applies her expertise to solving problems in the criminal justice and intelligence analysis domains. Mandeep has authored 120 scholarly publications and is lead editor of ‘Judgment and Decision Making as a Skill’, published by Cambridge University Press. To date, she has obtained over £2 million in research funding, and her research has received several international awards, including from the European Association for Decision Making and the American Psychological Association (Division 9). Mandeep regularly advises government bodies nationally and internationally on evidence-based policy and practice. Most recently, she was a UK representative on the NATO SAS-114 research panel on ‘Assessment and Communication of Risk and Uncertainty to Support Decision Making’ and was awarded the 2020 NATO SAS Panel Excellence Award. Mandeep is currently co-editor of Judgment and Decision Making, the official journal of both the (US-based) Society for Judgment and Decision Making and the European Association for Decision Making.

Title: Regulating Machine Learning

Speaker: Carlisle George, Department of Computer Science, Middlesex University, London.

Abstract: The increasing applicability of AI and machine learning in all aspects of life has raised the important issue of how best to develop a governance framework to mitigate the many concerns related to their use. Some of these concerns relate to accountability, liability, privacy/data protection and bias/discrimination. Many of these challenges stem from a lack of algorithmic transparency, caused by constraints related to understandability (how ML models work before deployment), explainability (how ML models reach particular decisions after deployment) and autonomy (ML models can change, evolve and adapt during deployment). In this debate, I argue that, given the concerns associated with the use of machine learning, it is inevitable that regulatory measures will be adopted in the future. However, how this regulation develops, and by whom, is an issue that we need to consider carefully. As the effects of machine learning become the subject of litigation, case law will develop as judges make rulings on issues such as liability. Sector-specific regulators may need to develop requirements for the development, testing and deployment of ML technologies. Professional organisations and civil society will also have roles to play. As academics and researchers, we can play an integral role in shaping how regulation develops. What practical steps can we take to address transparency? Is it possible to develop ways of understanding ML models and explaining what they do? How do we address the training of ML algorithms so as not to replicate bias within the data itself? How can we develop methods to “appeal” against the decisions of seemingly “infallible” AI systems?

Bio: Dr Carlisle George is an Associate Professor at Middlesex University. Among other qualifications, he holds a PhD in Computer Science (University of London) and a Master’s degree (LLM) in IT and Communications Law (LSE). He is qualified as a barrister and was called to the Bar of England and Wales at Lincoln’s Inn, London, and to the Bar of the Eastern Caribbean Supreme Court (where he maintains his legal practice licence). Dr George has many years of experience working on legal/regulatory issues in EU projects as a senior legal expert/advisor, including an EAHC study, “An overview of the national laws on electronic health records in the EU Member States and their interaction with the provision of cross-border eHealth services”; a Directorate E study on the completeness and conformity of Member States’ measures to transpose the PNR Directive; and a Directorate C study on the legal and political context for setting up a European identity document. He has also been a legal researcher on EU-funded projects including VALCRI (Visual Analytics for Sense-Making in Criminal Intelligence Analysis) and SAMi2 (Semantics Analysis Monitor for Illegal Use of the Internet). His main areas of focus are the legal aspects of health IT, data protection/privacy, e-commerce, digital forensics and data science. He has published many academic articles, co-edited two books on eHealth and medical informatics, organised six international health IT workshops, and continues to be research active with doctoral students. He is also the convenor of the ALERT (Aspects of Law and Ethics Relating to Technology) research group at Middlesex.

Title: Human-Centred Algorithms – A Recap

Speaker: B.L. William Wong, Professor of Human-Computer Interaction, Department of Computer Science, Middlesex University, London, and Professor-in-Residence, Genetec, Inc.

Abstract: In this talk I will give a brief recap to provide continuity with last month’s opening CS Colloquium, where we discussed some lessons from the VALCRI project about designing ethical safeguards for using AI/ML in criminal intelligence and investigative analysis. These include (1) making ethical problems tractable, (2) designing so that humans can ascertain or challenge whether the outcomes or recommendations of AI/ML are sensible, and (3) recognising that the type of human-machine partnership will affect how the ethical problems manifest themselves. These problems were discussed in the context of human-centredness, in relation to the algorithmic transparency framework for black-box algorithms and a cognitive engineering-based approach to designing for visibility and transparency.

Bio: Professor Wong’s research is in cognitive engineering and the representation and interaction design of visual analytics user interfaces that enhance situation awareness, sense-making, reasoning, and decision making in dynamic environments. His current research focuses on designing for transparency in human-machine teams. In September 2020, Dr Wong returned from a two-year industrial sabbatical as Principal Scientist at Genetec Inc., where he and his team commercialised selected IP from the EU-funded VALCRI project. He has received over US$25.3 million in research grants and has published over 120 peer-reviewed scientific articles with his students and colleagues.

Zoom link: https://mdx-ac-uk.zoom.us/j/6684138396?pwd=d0I5V2JlTHVKbjlKWXZ2MW1RZ0ozQT09

Meeting ID: 668 413 8396

Passcode: mdx