[Ada Lovelace Institute Report] Using FDA's Drug/Device Approval Framework for Regulating AI

Policymakers have recently begun looking into how to regulate artificial intelligence (AI) models. These models are expected to affect every walk of life; in medicine, the stakes are especially high because these algorithms would influence medical care decisions and delivery. Outside medicine, there are additional issues.

Artificial intelligence (AI) models like OpenAI’s GPT-4 are capable of a range of general tasks, such as text synthesis, image manipulation and audio generation. Policymakers, civil society organisations and industry practitioners have expressed concerns about the reliability of foundation models, the risk of misuse of their powerful capabilities and the systemic risks they could pose as more and more people begin to use them in their daily lives.

Many of these risks to people and society – such as the potential for powerful and widely used AI systems to discriminate against particular demographics, or to spread misinformation more widely and easily – are not new, but foundation models have some novel features that could greatly amplify the potential harms. [quoted from Source]

AI REGULATORY FRAMEWORK

Current Policy Efforts

  • The UK and US governments have released voluntary commitments for developers of these models
  • The EU’s AI Act includes some stricter requirements that models must meet before they can be sold on the market
  • The US Executive Order on AI includes obligations on developers of foundation models to test their systems for certain risks (see also the Algorithmic Accountability Act of 2022)

Ada Lovelace Report

A new report by the Ada Lovelace Institute provides a framework for how policymakers and regulators could apply the principles of the FDA's framework for approving medical devices and drugs, i.e. proving a model's safety before it is released into the wild ("Safe Before Sale").

Drug and medical device regulators have a long history of applying a rigorous oversight process to novel, groundbreaking and experimental technologies that – alongside their possible benefits – could present potentially severe consequences for people and society.

This paper draws on interviews with 20 experts and a literature review to examine the suitability and applicability of the US Food and Drug Administration (FDA) oversight model to foundation models. It explores the similarities and differences between medical devices and foundation models, the limitations of the FDA model as applied to medical devices, and how the FDA’s governance framework could be applied to the governance of foundation models.

The report first describes in detail the FDA's approval and oversight role for drugs and devices, from preapproval through postmarketing. It then draws parallels to how the same model could be applied to AI foundation models.

[Figure 4, Ada Lovelace report: applying the FDA paradigm to the regulation of AI models]

[Figure 5, Ada Lovelace report: applying the FDA paradigm to the regulation of AI models]

SOURCE

Posted: 11 months ago