There is 1 version of this bill.

Titles (2)

Short Titles

Short Titles - House of Representatives

Short Title(s) as Introduced

Algorithmic Accountability Act of 2019

Official Titles

Official Titles - House of Representatives

Official Title as Introduced

To direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments.


Actions Overview (1)

Date        Actions Overview
04/10/2019  Introduced in House

All Actions (3)

Date        All Actions
04/11/2019  Referred to the Subcommittee on Consumer Protection and Commerce.
Action By: Committee on Energy and Commerce
04/10/2019  Referred to the House Committee on Energy and Commerce.
Action By: House of Representatives
04/10/2019  Introduced in House
Action By: House of Representatives

Cosponsors (31)


Committees (1)

Committees, subcommittees, and links to reports associated with this bill are listed here, along with the nature and date of committee activity and any Congressional report number.

Committee / Subcommittee                                                      Date        Activity     Related Documents
House Energy and Commerce                                                     04/10/2019  Referred to
House Energy and Commerce Subcommittee on Consumer Protection and Commerce    04/11/2019  Referred to

Latest Summary (1)

There is one summary for H.R.2231.

Shown Here:
Introduced in House (04/10/2019)

Algorithmic Accountability Act of 2019

This bill requires specified commercial entities to conduct assessments of high-risk systems that involve personal information or make automated decisions, such as systems that use artificial intelligence or machine learning.

Specifically, high-risk automated decision systems include those that (1) may contribute to inaccuracy, bias, or discrimination; or (2) facilitate decision-making about sensitive aspects of consumers' lives by evaluating consumers' behavior. Further, an automated decision system, or an information system involving personal data, is considered high-risk if it (1) raises security or privacy concerns, (2) involves the personal information of a significant number of people, or (3) systematically monitors a large, publicly accessible physical location.

Assessments of high-risk automated decision systems must (1) describe the system in detail, (2) assess the relative costs and benefits of the system, (3) determine the risks to the privacy and security of personal information, and (4) explain the steps taken to minimize those risks, if discovered. Assessments of high-risk information systems involving personal information must evaluate the extent to which the system protects the privacy and security of such information.