

November 2022
Responsible AI LLC
Responsible AI LLC submitted comments (PDF) to New York City’s Department of Consumer and Worker Protection regarding the proposed Rules on Automated Employment Decision Tools.
Responsible AI LLC is now on Mastodon @responsibleai@mastodon.social.
This month in responsible AI
Business
Twitter is now under new ownership, and the new owner has made deep cuts to staffing. Unfortunately, those cuts eliminated most of the ML Ethics, Transparency and Accountability (META) team, formed just 1.5 years ago. Twitter joins the list of companies that have gutted their responsible AI capabilities, alongside Meta and Alphabet.
A class-action lawsuit has been filed against Microsoft and OpenAI over alleged license violations and copyright infringements in GitHub Copilot.
Policy and Government
United States
The White House Office of Science and Technology Policy published its Blueprint for an AI Bill of Rights; the accompanying Technical Companion is light on operational guidance, instead summarizing cases of concern and deferring to agency-specific efforts for more detailed follow-up.
NIST held its third workshop on AI Risk Management.
NIST plans to open an online Trustworthy and Responsible AI Resource Center.
FTC held PrivacyCon 2022, which included a panel on automated decision-making.
[RFC] FDA’s Computer Software Assurance (CSA) Draft Guidance is open for comment.
☛ Submit a comment on regulations.gov by 2022-11-14.
[FR] FTC has extended the deadline to comment on its Advance Notice of Proposed Rulemaking (ANPR) on commercial surveillance and lax data security practices.
☛ Submit a comment on regulations.gov by 2022-11-21.
Europe
The European Union has published the final texts of the Digital Services Act and the Digital Markets Act. The latter took effect on 2022-11-01 and specifically proscribes anti-competitive behavior on Big Tech platforms in areas such as app installation and data protection.
The European Parliament will hold a workshop on 2022-11-14 on public perspectives on AI.
United Kingdom
In an interview with the BBC, the UK's deputy information commissioner warns of biometric technologies that are “modern-day phrenology” and “junk science”.
BSI held a webinar on shaping global standards for Artificial Intelligence.
BSI have published a draft British Standard, Validation framework for the use of AI within healthcare, for public comment.
☛ Submit a comment to BSI by 2022-12-05.
The UK’s banking regulators have published a discussion paper on AI/ML in banking.
☛ Submit a comment to DP5_22@bankofengland.co.uk by 2023-02-10.
The Alan Turing Institute launched the AI Standards Hub, a clearinghouse for AI standards activity and a community platform for work on trustworthy AI. YouTube recordings of the launch are available.
Italy
Banca d’Italia published an Occasional Paper on AI in credit scoring in Italy: Emilia Bonaccorsi di Patti et al., Artificial intelligence in credit scoring: an analysis of some experiences in the Italian financial system.
Papers
The Trustworthy ML Initiative held their second annual symposium; the recording is on YouTube.
Call for papers. The AAAI R2HCAI workshop is soliciting submissions regarding human-centric AI, including topics relevant to responsible AI.
☛ Submit your paper by November 14, 2022 (AOE).
My reading list
[Twitter] Evani Radiya-Dixit, A Sociotechnical Audit: Assessing Police Use of Facial Recognition. Studies three UK deployments, uncovering deficiencies in regulatory oversight and in several aspects of responsible AI use. Recommends banning police use of facial recognition in public spaces.
[arXiv; demo] Raphael Tang et al., What the DAAM: Interpreting Stable Diffusion Using Cross Attention. Introduces a practical attribution method designed specifically for the U-Net denoising network in Stable Diffusion.
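The core idea is to aggregate the U-Net's cross-attention scores between prompt tokens and spatial positions into a per-word heatmap. Below is a minimal sketch of that aggregation step only, assuming you have already hooked the cross-attention layers and collected, for each layer and denoising timestep, a tensor of shape (heads, H, W, num_tokens); the authors' released implementation handles the hooking and adds details this sketch omits.

```python
import torch
import torch.nn.functional as F

def aggregate_heatmaps(attn_maps, token_index, out_size=64):
    """Sum one token's cross-attention over heads, layers, and timesteps,
    upsampling every layer's spatial map to a common resolution."""
    total = torch.zeros(out_size, out_size)
    for attn in attn_maps:                              # one tensor per (layer, timestep)
        per_token = attn[..., token_index].sum(dim=0)   # (H, W), summed over heads
        upsampled = F.interpolate(per_token[None, None], size=(out_size, out_size),
                                  mode="bilinear", align_corners=False)[0, 0]
        total += upsampled
    return total / total.max()                          # normalise to [0, 1] for display

# Toy usage with random tensors standing in for hooked attention scores.
fake_maps = [torch.rand(8, s, s, 77) for s in (16, 32, 64) for _ in range(3)]
heatmap = aggregate_heatmaps(fake_maps, token_index=5)
print(heatmap.shape)  # torch.Size([64, 64])
```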
[arXiv] Luyu Gao et al., Attributed Text Generation via Post-hoc Research and Revision. Addresses the problem of hallucinated content in generated text with an automated research-and-revise workflow.
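A schematic sketch of a research-and-revise loop of the kind the paper describes, not the authors' system: draft, query each claim, retrieve evidence, revise unsupported spans, and keep the evidence as an attribution report. Every function below is a hypothetical placeholder with canned toy outputs; in practice they would be backed by your own LM and retrieval calls.

```python
# Toy stand-ins for the LM and retrieval components (hypothetical, not from the paper).
def draft_text(prompt):                          return "Mount Everest is 9,000 metres tall."
def generate_queries(text):                      return ["How tall is Mount Everest?"]
def search_evidence(query):                      return ["Mount Everest is 8,849 metres tall."]
def claim_is_supported(text, query, evidence):   return False
def revise_span(text, query, evidence):          return "Mount Everest is 8,849 metres tall."

def attribute_and_revise(prompt):
    text = draft_text(prompt)                    # 1. generate an initial draft
    report = []
    for query in generate_queries(text):         # 2. ask questions about each claim
        evidence = search_evidence(query)        # 3. retrieve supporting documents
        if not claim_is_supported(text, query, evidence):
            text = revise_span(text, query, evidence)   # 4. edit unsupported content
        report.append((query, evidence))         # 5. keep evidence as attribution
    return text, report

print(attribute_and_revise("How tall is Mount Everest?"))
```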
[arXiv] David Gray Widder and Dawn Nafus, Dislocated Accountabilities in the AI Supply Chain: Modularity and Developers' Notions of Responsibility. Discusses how modularity, by virtue of hiding information and separating concerns, complicates the analysis of accountability, and argues that a systemic view beyond reductive checklists is necessary.
[PR; demo] Eleanor Drage and Kerry Mackereth, Does AI Debias Recruitment? Race, Gender, and AI’s “Eradication of Difference”, Philosophy & Technology. Argues that objectivity in employment AI systems is impossible without acknowledging the structures of power behind gender and race. The demo site shows a mock-up of a personality assessment based on a photo of a person’s face, and how quickly the assessment changes when the image is manipulated.
[arXiv] Michael D. Ekstrand and Maria Soledad Pera, Matching Consumer Fairness Objectives & Strategies for RecSys, FAccTRec 2022. This position paper describes different fairness goals in recommender systems and argues that the appropriate interventions to implement ought to be in the context of specific goals and applications.
[arXiv, Twitter] Joris Baan, Wilker Aziz, Barbara Plank, Raquel Fernández, Stop Measuring Calibration When Humans Disagree, EMNLP’22. Discusses a fundamental problem with calibration error metrics when expert labels are noisy and disagreement is resolved by majority voting: collapsing human variation to a single consensus label encourages overfitting to the majority vote, which then shows up as miscalibration. The authors propose measuring the distance to the full distribution of human labels instead.
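A toy numpy illustration of the point, not the authors' code. Three ways of looking at four binary items: a model that exactly mirrors the human label distribution scores a nonzero calibration error against majority-vote labels on the ambiguous items, while its distance to the human distribution is zero.

```python
import numpy as np

# Human label distributions for 4 binary items (fraction of annotators saying "1").
human = np.array([1.0, 0.9, 0.6, 0.5])        # the last two items are genuinely ambiguous
model = human.copy()                           # a model that mirrors human uncertainty
majority = (human > 0.5).astype(float)         # consensus labels after majority voting

# (a) Calibration against the majority label (single-bin ECE: |mean conf - mean acc|).
confidence = np.maximum(model, 1 - model)
accuracy = ((model > 0.5) == majority).astype(float)
ece_vs_majority = abs(confidence.mean() - accuracy.mean())

# (b) Distance to the human label distribution (mean total variation distance).
dist_to_humans = np.abs(model - human).mean()

print(f"ECE vs. majority vote:          {ece_vs_majority:.2f}")  # > 0: looks miscalibrated
print(f"Distance to human distribution: {dist_to_humans:.2f}")   # 0: matches humans exactly
```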
[PDF, Twitter] Michelle S. Lam et al., End-User Audits: A System Empowering Communities to Lead Large-Scale Investigations of Harmful Algorithmic Behavior, CSCW’22. Collaborative filtering of crowdsourced non-expert audits can surface concerns with a quality comparable to an expert audit.
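A minimal sketch of the collaborative-filtering intuition, assuming a small, partially observed matrix of auditor ratings (rows: auditors, columns: content items, unrated entries missing). A deliberately simple rank-k completion extrapolates one auditor's handful of labels across the whole corpus so that the items they would most likely flag can be surfaced first; the actual system's modelling choices are more involved than this.

```python
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.uniform(0, 1, size=(20, 50))        # ground-truth auditor opinions (toy data)
mask = rng.random(ratings.shape) < 0.2            # each auditor labels roughly 20% of items
observed = np.where(mask, ratings, np.nan)

# Rank-k completion: SVD on a mean-imputed matrix, a deliberately simple baseline.
filled = np.where(mask, observed, np.nanmean(observed))
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 3
predicted = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Surface, for auditor 0, the unlabeled items they would likely flag most strongly.
auditor = 0
unlabeled = np.where(~mask[auditor])[0]
flagged = unlabeled[np.argsort(-predicted[auditor, unlabeled])[:5]]
print("Items auditor 0 should review first:", flagged)
```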
[arXiv, Twitter] Artem Moskalev et al., LieGG: Studying Learned Lie Group Generators, NeurIPS’22. Presents a method for finding symmetries in neural networks by solving for elements of the Lie algebra transforming between different input features.
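A small numpy sketch of the underlying linear-algebra step: a generator G of a continuous symmetry satisfies grad_f(x)ᵀ G x = 0 for every input x, so stacking vec(grad_f(x) xᵀ) over samples and taking the right singular vector with the smallest singular value yields a candidate generator. Here f is a hand-written rotation-invariant toy function with an analytic gradient; the paper applies the same idea to trained networks with autodiff gradients and adds machinery this sketch omits.

```python
import numpy as np

# Gradient of f(x) = x1^2 + x2^2, a rotation-invariant toy function.
grad_f = lambda x: 2 * x

rng = np.random.default_rng(0)
xs = rng.normal(size=(200, 2))

# Each row encodes the linear constraint <vec(G), vec(grad_f(x) x^T)> = 0.
A = np.stack([np.outer(grad_f(x), x).ravel() for x in xs])
_, _, Vt = np.linalg.svd(A)
G = Vt[-1].reshape(2, 2)               # right singular vector with the smallest singular value

# Up to sign and scale, this recovers the 2-D rotation generator [[0, -1], [1, 0]].
print(np.round(G / np.abs(G).max(), 3))
```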
[arXiv] Çağlar Aytekin, Neural Networks are Decision Trees. This preprint makes the (rather banal) observation that neural networks with piecewise linear activation functions like ReLU can be rewritten as decision trees, simply by turning each activation’s x>0 condition into an if … else statement. The observation is of little theoretical interest beyond further undermining the claim that decision trees are inherently interpretable: just because an ML model can be expressed as a decision tree does not automatically make it explainable.
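A concrete version of the observation, for a toy network with two ReLU hidden units and made-up weights: each ReLU's x > 0 test becomes an if/else branch, so the same function can be written as a shallow decision tree with linear models in the leaves. Both functions below compute identical outputs, which is exactly why the rewriting says nothing about interpretability.

```python
def relu_net(x):
    # y = 1.5 * relu(2x - 1) + 0.5 * relu(-x + 3) + 0.25, with illustrative weights.
    h1 = max(0.0, 2.0 * x - 1.0)
    h2 = max(0.0, -x + 3.0)
    return 1.5 * h1 + 0.5 * h2 + 0.25

def as_decision_tree(x):
    if 2.0 * x - 1.0 > 0:              # first ReLU active?
        if -x + 3.0 > 0:               # second ReLU active?
            return 1.5 * (2.0 * x - 1.0) + 0.5 * (-x + 3.0) + 0.25
        return 1.5 * (2.0 * x - 1.0) + 0.25
    if -x + 3.0 > 0:
        return 0.5 * (-x + 3.0) + 0.25
    return 0.25

# The two forms agree everywhere.
assert all(abs(relu_net(x) - as_decision_tree(x)) < 1e-12 for x in (-2.0, 0.0, 1.0, 2.5, 4.0))
```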