Recently, policymakers have started to look into how to regulate artificial intelligence (AI) models. These models are expected to affect every walk of life. In medicine, the stakes are higher, since these algorithms would influence medical care decisions and delivery; outside medicine, additional issues arise.
Artificial intelligence (AI) models like OpenAI’s GPT-4 are capable of a range of general tasks, such as text synthesis, image manipulation and audio generation. Policymakers, civil society organisations and industry practitioners have expressed concerns about the reliability of these foundation models, the risk that their powerful capabilities will be misused, and the systemic risks they could pose as more and more people use them in their daily lives.
Many of these risks to people and society – such as the potential for powerful and widely used AI systems to discriminate against particular demographics, or to spread misinformation more widely and easily – are not new, but foundation models have some novel features that could greatly amplify the potential harms. [quoted from Source]
AI REGULATORY FRAMEWORK
Current Policy Efforts
- The UK and US governments have secured voluntary commitments from developers of these models
- The EU’s AI Act includes stricter requirements that models must meet before they can be placed on the market
- The US Executive Order on AI imposes obligations on developers of foundation models to test their systems for certain risks (see also the Algorithmic Accountability Act of 2022)
Ada Lovelace Report
A new report by the Ada Lovelace Institute provides a framework for how policymakers and regulators could apply the principles of the FDA's framework for approving medical devices and drugs, i.e. proving a model's safety before releasing it into the wild ("Safe before sale").
Drug and medical device regulators have a long history of applying a rigorous oversight process to novel, groundbreaking and experimental technologies that – alongside their possible benefits – could present potentially severe consequences for people and society.
This paper draws on interviews with 20 experts and a literature review to examine the suitability and applicability of the US Food and Drug Administration (FDA) oversight model to foundation models. It explores the similarities and differences between medical devices and foundation models, the limitations of the FDA model as applied to medical devices, and how the FDA's governance framework could be applied to the governance of foundation models.
The report first describes in detail the FDA's approval and oversight role for drugs and devices, from preapproval through postmarketing. It then draws parallels with how the same model could be applied to AI foundation models.
[Figure 4 from the Ada Lovelace report: applying the FDA paradigm to the regulation of AI models]
[Figure 5 from the Ada Lovelace report: applying the FDA paradigm to the regulation of AI models]
SOURCE
- Safe before sale: Learnings from the FDA’s model of life sciences oversight for foundation models [Report]. Merlin Stein. Ada Lovelace Institute, 14 December 2023 [archive]
- Algorithmic Accountability Act of 2022. US Senate, 3 February 2022 [archive]