[Congressional Bills 117th Congress]
[From the U.S. Government Publishing Office]
[S. 5351 Introduced in Senate (IS)]

117th CONGRESS
  2d Session
                                S. 5351

    To clarify the applicability of civil rights law to algorithmic 
                   decisions, and for other purposes.


_______________________________________________________________________


                   IN THE SENATE OF THE UNITED STATES

                           December 21, 2022

  Mr. Portman introduced the following bill; which was read twice and 
   referred to the Committee on Commerce, Science, and Transportation

_______________________________________________________________________

                                 A BILL


 
    To clarify the applicability of civil rights law to algorithmic 
                   decisions, and for other purposes.

    Be it enacted by the Senate and House of Representatives of the 
United States of America in Congress assembled,

SECTION 1. SHORT TITLE.

    This Act may be cited as the ``Stopping Unlawful Negative Machine 
Impacts through National Evaluation Act''.

SEC. 2. DEFINITIONS.

    In this Act:
            (1) Artificial intelligence.--The term ``artificial 
        intelligence'' has the meaning given the term in section 238(g) 
        of the John S. McCain National Defense Authorization Act for 
        Fiscal Year 2019 (10 U.S.C. 2358 note).
            (2) Artificial intelligence system.--The term ``artificial 
        intelligence system'' means any data system, software, 
        application, tool, or utility that operates in whole or in part 
        using dynamic or static machine learning algorithms or other 
        forms of artificial intelligence, including a data system, 
        software, application, tool, or utility--
                    (A) that is established primarily for the purpose 
                of researching, developing, or implementing artificial 
                intelligence technology; and
                    (B) for which the artificial intelligence 
                capability is integrated into another system or 
                business process, operational activity, or technology 
                system.
            (3) Covered civil rights law.--The term ``covered civil 
        rights law'' means--
                    (A) the Civil Rights Act of 1964 (42 U.S.C. 2000a 
                et seq.), the Age Discrimination in Employment Act of 
                1967 (29 U.S.C. 621 et seq.), the Americans with 
                Disabilities Act of 1990 (42 U.S.C. 12101 et seq.), 
                title V of the Rehabilitation Act of 1973 (29 U.S.C. 
                791 et seq.), section 6(d) of the Fair Labor Standards 
                Act of 1938 (29 U.S.C. 206(d)), title II of the Genetic 
                Information Nondiscrimination Act of 2008 (42 U.S.C. 
                2000ff et seq.), subchapter II of chapter 43 of title 
                38, United States Code, title IX of the Education 
                Amendments of 1972 (20 U.S.C. 1681 et seq.), the Age 
                Discrimination Act of 1975 (42 U.S.C. 6101 et seq.), 
                and any provision of Federal, State, or local law, 
                including the Constitution of the United States, that 
                prohibits discrimination in public or private 
                employment (including contracting), or in the provision 
                of a program or activity or accommodation, on the basis 
                of a protected class; and
                    (B) the Immigration and Nationality Act (8 U.S.C. 
                1101 et seq.), the Voting Rights Act of 1965 (52 U.S.C. 
                10301 et seq.), and any provision of Federal, State, or 
                local law, including the Constitution of the United 
                States, that prohibits discrimination concerning legal 
                status or a legal right on the basis of a protected 
                class.
            (4) Covered entity.--The term ``covered entity'' means any 
        person (including a partnership, corporation, Federal, State, 
        or local agency, or entity) that is subject to a covered civil 
        rights law.
            (5) Director.--The term ``Director'' means the Director of 
        the National Institute of Standards and Technology.

SEC. 3. APPLICABILITY OF CIVIL RIGHTS LAWS TO DECISIONS MADE BY OR 
              AUGMENTED BY ALGORITHMS.

    (a) Purpose.--The purpose of this section is to remove any doubt 
about the liability described in subsection (b) of a covered entity 
described in subsection (b).
    (b) Liability.--A covered entity that uses artificial intelligence 
to make or inform a decision that has an impact on a person that is 
addressed by a covered civil rights law, including whether to provide a 
program or activity or accommodation to a person, shall be liable for a 
claim of discrimination under the corresponding covered civil rights 
law in the same manner and to the same extent (including being liable 
pursuant to that law's standard of culpability) as if the covered 
entity had made such decision without the use of artificial 
intelligence.

SEC. 4. REQUIREMENT FOR NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY 
              PROGRAM OF TECHNOLOGY EVALUATIONS OF BIAS AND 
              DISCRIMINATION IN ARTIFICIAL INTELLIGENCE SYSTEMS.

    (a) Establishment of Technology Evaluation Program.--Not later than 
1 year after the date of the enactment of this Act, the Director shall 
establish a program for conducting technology evaluations to assess and 
assist in mitigating bias and discrimination in artificial intelligence 
systems of covered entities with respect to race, sex, age, disability, 
and other classes or characteristics protected by covered civil rights 
laws. In establishing such program, the Director shall ensure that such 
evaluations effectively approximate real-world applications of 
artificial intelligence systems.
    (b) Priority Evaluation Areas.--In carrying out the program 
required under subsection (a), the Director shall prioritize the 
conduct of technology evaluations to mitigate bias in--
            (1) the applications identified as high risk by previous 
        technology evaluations and strategy documents;
            (2) speech recognition and synthesis;
            (3) recommendation systems, including for financial and 
        criminal justice applications;
            (4) sensitive image recognition technology, including 
        facial and gait recognition systems; and
            (5) any other artificial intelligence use case that poses a 
        high risk for discrimination based on classes or 
        characteristics protected by covered civil rights laws, such as 
        image and video synthesis, text generation, and conversation 
        and information systems.
    (c) Participation.--In designing technology evaluations under 
subsection (a), the Director shall ensure the participation of any 
industry and nongovernmental experts and entities in the fields of 
artificial intelligence, machine learning, computer science, social 
sciences, civil rights, and civil liberties seeking to participate in 
such evaluations.
    (d) Authorization of Appropriations.--There is authorized to be 
appropriated to the Director such sums as may be necessary to carry out 
this section for each of the fiscal years 2023 through 2028.
    (e) Sunset.--The program required under subsection (a) shall 
terminate on December 31, 2028.