[Congressional Bills 118th Congress]
[From the U.S. Government Publishing Office]
[S. 3312 Reported in Senate (RS)]

                                                       Calendar No. 723
118th CONGRESS
  2d Session
                                S. 3312

   To provide a framework for artificial intelligence innovation and 
                accountability, and for other purposes.


_______________________________________________________________________


                   IN THE SENATE OF THE UNITED STATES

                           November 15, 2023

 Mr. Thune (for himself, Ms. Klobuchar, Mr. Wicker, Mr. Hickenlooper, 
  Mr. Lujan, Mrs. Capito, Ms. Baldwin, and Ms. Lummis) introduced the 
 following bill; which was read twice and referred to the Committee on 
                 Commerce, Science, and Transportation

            December 18 (legislative day, December 16), 2024

              Reported by Ms. Cantwell, with an amendment
 [Strike out all after the enacting clause and insert the part printed 
                               in italic]

_______________________________________________________________________

                                 A BILL


 
   To provide a framework for artificial intelligence innovation and 
                accountability, and for other purposes.

    Be it enacted by the Senate and House of Representatives of the 
United States of America in Congress assembled,

<DELETED>SECTION 1. SHORT TITLE.</DELETED>

<DELETED>    This Act may be cited as the ``Artificial Intelligence 
Research, Innovation, and Accountability Act of 2023''.</DELETED>

<DELETED>SEC. 2. TABLE OF CONTENTS.</DELETED>

<DELETED>    The table of contents for this Act is as 
follows:</DELETED>

<DELETED>Sec. 1. Short title.
<DELETED>Sec. 2. Table of contents.
   <DELETED>TITLE I--ARTIFICIAL INTELLIGENCE RESEARCH AND INNOVATION

<DELETED>Sec. 101. Open data policy amendments.
<DELETED>Sec. 102. Online content authenticity and provenance standards 
                            research and development.
<DELETED>Sec. 103. Standards for detection of emergent and anomalous 
                            behavior and AI-generated media.
<DELETED>Sec. 104. Comptroller General study on barriers and best 
                            practices to usage of AI in government.
       <DELETED>TITLE II--ARTIFICIAL INTELLIGENCE ACCOUNTABILITY

<DELETED>Sec. 201. Definitions.
<DELETED>Sec. 202. Generative artificial intelligence transparency.
<DELETED>Sec. 203. Transparency reports for high-impact artificial 
                            intelligence systems.
<DELETED>Sec. 204. Recommendations to Federal agencies for risk 
                            management of high-impact artificial 
                            intelligence systems.
<DELETED>Sec. 205. Office of Management and Budget oversight of 
                            recommendations to agencies.
<DELETED>Sec. 206. Risk management assessment for critical-impact 
                            artificial intelligence systems.
<DELETED>Sec. 207. Certification of critical-impact artificial 
                            intelligence systems.
<DELETED>Sec. 208. Enforcement.
<DELETED>Sec. 209. Artificial intelligence consumer education.

        <DELETED>TITLE I--ARTIFICIAL INTELLIGENCE RESEARCH AND 
                          INNOVATION</DELETED>

<DELETED>SEC. 101. OPEN DATA POLICY AMENDMENTS.</DELETED>

<DELETED>    Section 3502 of title 44, United States Code, is amended--
</DELETED>
        <DELETED>    (1) in paragraph (22)--</DELETED>
                <DELETED>    (A) by inserting ``or data model'' after 
                ``a data asset''; and</DELETED>
                <DELETED>    (B) by striking ``and'' at the 
                end;</DELETED>
        <DELETED>    (2) in paragraph (23), by striking the period at 
        the end and inserting a semicolon; and</DELETED>
        <DELETED>    (3) by adding at the end the following:</DELETED>
        <DELETED>    ``(24) the term `data model' means a mathematical, 
        economic, or statistical representation of a system or process 
        used to assist in making calculations and predictions, 
        including through the use of algorithms, computer programs, or 
        artificial intelligence systems; and</DELETED>
        <DELETED>    ``(25) the term `artificial intelligence system' 
        means an engineered system that--</DELETED>
                <DELETED>    ``(A) generates outputs, such as content, 
                predictions, recommendations, or decisions for a given 
                set of objectives; and</DELETED>
                <DELETED>    ``(B) is designed to operate with varying 
                levels of adaptability and autonomy using machine and 
                human-based inputs.''.</DELETED>

<DELETED>SEC. 102. ONLINE CONTENT AUTHENTICITY AND PROVENANCE STANDARDS 
              RESEARCH AND DEVELOPMENT.</DELETED>

<DELETED>    (a) Research.--</DELETED>
        <DELETED>    (1) In general.--Not later than 180 days after the 
        date of the enactment of this Act, the Under Secretary of 
        Commerce for Standards and Technology shall carry out research 
        to facilitate the development and standardization of means to 
        provide authenticity and provenance information for content 
        generated by human authors and artificial intelligence 
        systems.</DELETED>
        <DELETED>    (2) Elements.--The research carried out pursuant 
        to paragraph (1) shall cover the following:</DELETED>
                <DELETED>    (A) Secure and binding methods for human 
                authors of content to append statements of provenance 
                through the use of unique credentials, watermarking, or 
                other data or metadata-based approaches.</DELETED>
                <DELETED>    (B) Methods for the verification of 
                statements of content provenance to ensure authenticity 
                such as watermarking or classifiers, which are trained 
                models that distinguish artificial intelligence-
                generated media.</DELETED>
                <DELETED>    (C) Methods for displaying clear and 
                conspicuous statements of content provenance to the end 
                user.</DELETED>
                <DELETED>    (D) Technologies or applications needed to 
                facilitate the creation and verification of content 
                provenance information.</DELETED>
                <DELETED>    (E) Mechanisms to ensure that any 
                technologies and methods developed under this section 
                are minimally burdensome on content 
                producers.</DELETED>
                <DELETED>    (F) Such other related processes, 
                technologies, or applications as the Under Secretary 
                considers appropriate.</DELETED>
                <DELETED>    (G) Use of provenance technology to enable 
                attribution for content creators.</DELETED>
        <DELETED>    (3) Implementation.--The Under Secretary shall 
        carry out the research required by paragraph (1) as part of the 
        research directives pursuant to section 22A(b)(1) of the 
        National Institute of Standards and Technology Act (15 U.S.C. 
        278h-1(b)(1)).</DELETED>
<DELETED>    (b) Development of Standards.--</DELETED>
        <DELETED>    (1) In general.--For methodologies and 
        applications related to content provenance and authenticity 
        deemed by the Under Secretary to be at a readiness level 
        sufficient for standardization, the Under Secretary shall 
        provide technical review and assistance to such other Federal 
        agencies and nongovernmental standards organizations as the 
        Under Secretary considers appropriate.</DELETED>
        <DELETED>    (2) Considerations.--In providing any technical 
        review and assistance related to the development of content 
        provenance and authenticity standards under this subsection, 
        the Under Secretary may--</DELETED>
                <DELETED>    (A) consider whether a proposed standard 
                is reasonable, practicable, and appropriate for the 
                particular type of media and media environment for 
                which the standard is proposed;</DELETED>
                <DELETED>    (B) consult with relevant stakeholders; 
                and</DELETED>
                <DELETED>    (C) review industry standards issued by 
                nongovernmental standards organizations.</DELETED>
<DELETED>    (c) Pilot Program.--</DELETED>
        <DELETED>    (1) In general.--The Under Secretary shall carry 
        out a pilot program to assess the feasibility and advisability 
        of using available technologies and creating open standards to 
facilitate the creation and verification of content provenance 
        information for digital content.</DELETED>
        <DELETED>    (2) Locations.--The pilot program required by 
        paragraph (1) shall be carried out at not more than 2 Federal 
        agencies that the Under Secretary shall select for purposes of 
        that pilot program.</DELETED>
        <DELETED>    (3) Requirements.--In carrying out the pilot 
        program required by paragraph (1), the Under Secretary shall--
        </DELETED>
                <DELETED>    (A) apply and evaluate methods for 
                authenticating the origin of and modifications to 
                government-produced digital content using technology 
                and open standards described in paragraph (1); 
                and</DELETED>
                <DELETED>    (B) make available to the public digital 
                content embedded with provenance or other 
                authentication provided by the heads of the Federal 
                agencies selected pursuant to paragraph (2) for the 
                purposes of the pilot program.</DELETED>
        <DELETED>    (4) Briefing required.--Not later than 1 year 
        after the date of the enactment of this Act, and annually 
        thereafter until the date described in paragraph (5), the Under 
        Secretary shall brief the Committee on Commerce, Science, and 
        Transportation of the Senate and the Committee on Science, 
        Space, and Technology of the House of Representatives on the 
        findings of the Under Secretary with respect to the pilot 
        program carried out under this subsection.</DELETED>
        <DELETED>    (5) Termination.--The pilot program shall 
        terminate on the date that is 10 years after the date of the 
        enactment of this Act.</DELETED>
<DELETED>    (d) Report to Congress.--Not later than 1 year after the 
date of the enactment of this Act, the Under Secretary shall submit to 
the Committee on Commerce, Science, and Transportation of the Senate 
and the Committee on Science, Space, and Technology of the House of 
Representatives a report outlining the progress of standardization 
initiatives relating to requirements under this section, as well as 
recommendations for legislative or administrative action to encourage 
or require the widespread adoption of such initiatives in the United 
States.</DELETED>

<DELETED>SEC. 103. STANDARDS FOR DETECTION OF EMERGENT AND ANOMALOUS 
              BEHAVIOR AND AI-GENERATED MEDIA.</DELETED>

<DELETED>    Section 22A(b)(1) of the National Institute of Standards 
and Technology Act (15 U.S.C. 278h-1(b)(1)) is amended--</DELETED>
        <DELETED>    (1) by redesignating subparagraph (I) as 
        subparagraph (K);</DELETED>
        <DELETED>    (2) in subparagraph (H), by striking ``; and'' and 
        inserting a semicolon; and</DELETED>
        <DELETED>    (3) by inserting after subparagraph (H) the 
        following:</DELETED>
                <DELETED>    ``(I) best practices for detecting outputs 
                generated by artificial intelligence systems, including 
                content such as text, audio, images, and 
                videos;</DELETED>
                <DELETED>    ``(J) methods to detect and understand 
                anomalous behavior of artificial intelligence systems 
                and safeguards to mitigate potentially adversarial or 
                compromising anomalous behavior; and''.</DELETED>

<DELETED>SEC. 104. COMPTROLLER GENERAL STUDY ON BARRIERS AND BEST 
              PRACTICES TO USAGE OF AI IN GOVERNMENT.</DELETED>

<DELETED>    (a) In General.--Not later than 1 year after the date of 
enactment of this Act, the Comptroller General of the United States 
shall--</DELETED>
        <DELETED>    (1) conduct a review of statutory, regulatory, and 
        other policy barriers to the use of artificial intelligence 
        systems to improve the functionality of the Federal Government; 
        and</DELETED>
        <DELETED>    (2) identify best practices for the adoption and 
        use of artificial intelligence systems by the Federal 
        Government, including--</DELETED>
                <DELETED>    (A) ensuring that an artificial 
                intelligence system is proportional to the need of the 
                Federal Government;</DELETED>
                <DELETED>    (B) restrictions on access to and use of 
                an artificial intelligence system based on the 
                capabilities and risks of the artificial intelligence 
                system; and</DELETED>
                <DELETED>    (C) safety measures that ensure that an 
                artificial intelligence system is appropriately limited 
                to necessary data and compartmentalized from other 
                assets of the Federal Government.</DELETED>
<DELETED>    (b) Report.--Not later than 2 years after the date of 
enactment of this Act, the Comptroller General of the United States 
shall submit to the Committee on Commerce, Science, and Transportation 
of the Senate and the Committee on Science, Space, and Technology of 
the House of Representatives a report that--</DELETED>
        <DELETED>    (1) summarizes the results of the review conducted 
        under subsection (a)(1) and the best practices identified under 
        subsection (a)(2), including recommendations, as the 
        Comptroller General of the United States considers 
        appropriate;</DELETED>
        <DELETED>    (2) describes any laws, regulations, guidance 
        documents, or other policies that may prevent the adoption of 
        artificial intelligence systems by the Federal Government to 
        improve certain functions of the Federal Government, 
        including--</DELETED>
                <DELETED>    (A) data analysis and 
                processing;</DELETED>
                <DELETED>    (B) paperwork reduction;</DELETED>
                <DELETED>    (C) contracting and procurement practices; 
                and</DELETED>
                <DELETED>    (D) other Federal Government services; 
                and</DELETED>
        <DELETED>    (3) includes, as the Comptroller General of the 
        United States considers appropriate, recommendations to modify 
        or eliminate barriers to the use of artificial intelligence 
        systems by the Federal Government.</DELETED>

  <DELETED>TITLE II--ARTIFICIAL INTELLIGENCE ACCOUNTABILITY</DELETED>

<DELETED>SEC. 201. DEFINITIONS.</DELETED>

<DELETED>    In this title:</DELETED>
        <DELETED>    (1) Appropriate congressional committees.--The 
        term ``appropriate congressional committees'' means--</DELETED>
                <DELETED>    (A) the Committee on Energy and Natural 
                Resources and the Committee on Commerce, Science, and 
                Transportation of the Senate;</DELETED>
                <DELETED>    (B) the Committee on Energy and Commerce 
                of the House of Representatives; and</DELETED>
                <DELETED>    (C) each congressional committee with 
                jurisdiction over an applicable covered 
                agency.</DELETED>
        <DELETED>    (2) Artificial intelligence system.--The term 
        ``artificial intelligence system'' means an engineered system 
        that--</DELETED>
                <DELETED>    (A) generates outputs, such as content, 
                predictions, recommendations, or decisions for a given 
                set of human-defined objectives; and</DELETED>
                <DELETED>    (B) is designed to operate with varying 
                levels of adaptability and autonomy using machine and 
                human-based inputs.</DELETED>
        <DELETED>    (3) Covered agency.--The term ``covered agency'' 
        means an agency for which the Under Secretary develops an NIST 
        recommendation.</DELETED>
        <DELETED>    (4) Covered internet platform.--</DELETED>
                <DELETED>    (A) In general.--The term ``covered 
                internet platform''--</DELETED>
                        <DELETED>    (i) means any public-facing 
                        website, consumer-facing internet application, 
                        or mobile application available to consumers in 
                        the United States; and</DELETED>
                        <DELETED>    (ii) includes a social network 
                        site, video sharing service, search engine, and 
                        content aggregation service.</DELETED>
                <DELETED>    (B) Exclusions.--The term ``covered 
                internet platform'' does not include a platform that--
                </DELETED>
                        <DELETED>    (i) is wholly owned, controlled, 
                        and operated by a person that--</DELETED>
                                <DELETED>    (I) during the most recent 
                                180-day period, did not employ more 
                                than 500 employees;</DELETED>
                                <DELETED>    (II) during the most 
                                recent 3-year period, averaged less 
                                than $50,000,000 in annual gross 
                                receipts; and</DELETED>
                                <DELETED>    (III) on an annual basis, 
                                collects or processes the personal data 
                                of less than 1,000,000 individuals; 
                                or</DELETED>
                        <DELETED>    (ii) is operated for the sole 
                        purpose of conducting research that is not 
                        directly or indirectly made for 
                        profit.</DELETED>
        <DELETED>    (5) Critical-impact ai organization.--The term 
        ``critical-impact AI organization'' means a non-government 
        organization that serves as the deployer of a critical-impact 
        artificial intelligence system.</DELETED>
        <DELETED>    (6) Critical-impact artificial intelligence 
        system.--The term ``critical-impact artificial intelligence 
        system'' means an artificial intelligence system that--
        </DELETED>
                <DELETED>    (A) is deployed for a purpose other than 
                solely for use by the Department of Defense or an 
                intelligence agency (as defined in section 3094(e) of 
                the National Security Act of 1947 (50 U.S.C. 3094(3))); 
                and</DELETED>
                <DELETED>    (B) is used or intended to be used--
                </DELETED>
                        <DELETED>    (i) to make decisions that have a 
                        legal or similarly significant effect on--
                        </DELETED>
                                <DELETED>    (I) the real-time or ex 
                                post facto collection of biometric data 
                                of natural persons by biometric 
                                identification systems without their 
                                consent;</DELETED>
                                <DELETED>    (II) the direct management 
                                and operation of critical 
                                infrastructure (as defined in section 
                                1016(e) of the USA PATRIOT Act (42 
                                U.S.C. 5195c(e))) and space-based 
                                infrastructure; or</DELETED>
                                <DELETED>    (III) criminal justice (as 
                                defined in section 901 of title I of 
                                the Omnibus Crime Control and Safe 
                                Streets Act of 1968 (34 U.S.C. 10251)); 
                                and</DELETED>
                        <DELETED>    (ii) in a manner that poses a 
                        significant risk to rights afforded under the 
                        Constitution of the United States or 
                        safety.</DELETED>
        <DELETED>    (7) Deployer.--The term ``deployer''--</DELETED>
                <DELETED>    (A) means an entity that uses or operates 
                an artificial intelligence system for internal use or 
                for use by third parties; and</DELETED>
                <DELETED>    (B) does not include an entity that is 
                solely an end user of a system.</DELETED>
        <DELETED>    (8) Developer.--The term ``developer'' means an 
        entity that--</DELETED>
                <DELETED>    (A) designs, codes, produces, or owns an 
                artificial intelligence system for internal use or for 
                use by a third party as a baseline model; and</DELETED>
                <DELETED>    (B) does not act as a deployer of the 
                artificial intelligence system described in 
                subparagraph (A).</DELETED>
        <DELETED>    (9) Generative artificial intelligence system.--
        The term ``generative artificial intelligence system'' means an 
        artificial intelligence system that generates novel data or 
        content in a written, audio, or visual format.</DELETED>
        <DELETED>    (10) High-impact artificial intelligence system.--
        The term ``high-impact artificial intelligence system'' means 
        an artificial intelligence system--</DELETED>
                <DELETED>    (A) deployed for a purpose other than 
                solely for use by the Department of Defense or an 
                intelligence agency (as defined in section 3094(e) of 
                the National Security Act of 1947 (50 U.S.C. 3094(3))); 
                and</DELETED>
                <DELETED>    (B) that is specifically developed with 
                the intended purpose of making decisions that have a 
                legal or similarly significant effect on the access of 
                an individual to housing, employment, credit, 
                education, healthcare, or insurance in a manner that 
                poses a significant risk to rights afforded under the 
                Constitution of the United States or safety.</DELETED>
        <DELETED>    (11) NIST recommendation.--The term ``NIST 
        recommendation'' means a sector-specific recommendation 
        developed under section 22B(b)(1) of the National Institute of 
        Standards and Technology Act, as added by section 204 of this 
        Act.</DELETED>
        <DELETED>    (12) Secretary.--The term ``Secretary'' means the 
        Secretary of Commerce.</DELETED>
        <DELETED>    (13) Significant risk.--The term ``significant 
        risk'' means a combination of severe, high-intensity, high-
        probability, and long-duration risk of harm to 
        individuals.</DELETED>
        <DELETED>    (14) TEVV.--The term ``TEVV'' means the testing, 
        evaluation, validation, and verification of any artificial 
        intelligence system that includes--</DELETED>
                <DELETED>    (A) open, transparent, testable, and 
                verifiable specifications that characterize realistic 
                operational performance, such as precision and accuracy 
                for relevant tasks;</DELETED>
                <DELETED>    (B) testing methodologies and metrics that 
                enable the evaluation of system trustworthiness, 
                including robustness and resilience;</DELETED>
                <DELETED>    (C) data quality standards for training 
                and testing datasets;</DELETED>
                <DELETED>    (D) requirements for system validation and 
                integration into production environments, automated 
                testing, and compliance with existing legal and 
                regulatory specifications;</DELETED>
                <DELETED>    (E) methods and tools for--</DELETED>
                        <DELETED>    (i) the monitoring of system 
                        behavior;</DELETED>
                        <DELETED>    (ii) the tracking of incidents or 
                        errors reported and their management; 
                        and</DELETED>
                        <DELETED>    (iii) the detection of emergent 
                        properties and related impacts; and</DELETED>
                <DELETED>    (F) processes for redress and 
                response.</DELETED>
        <DELETED>    (15) Under secretary.--The term ``Under 
        Secretary'' means the Director of the National Institute of 
        Standards and Technology.</DELETED>

<DELETED>SEC. 202. GENERATIVE ARTIFICIAL INTELLIGENCE 
              TRANSPARENCY.</DELETED>

<DELETED>    (a) Prohibition.--</DELETED>
        <DELETED>    (1) In general.--Subject to paragraph (2), it 
        shall be unlawful for a person to operate a covered internet 
        platform that uses a generative artificial intelligence 
        system.</DELETED>
        <DELETED>    (2) Disclosure of use of generative artificial 
        intelligence systems.--</DELETED>
                <DELETED>    (A) In general.--A person may operate a 
                covered internet platform that uses a generative 
                artificial intelligence system if the person provides 
                notice to each user of the covered internet platform 
                that the covered internet platform uses a generative 
                artificial intelligence system to generate content the 
                user sees.</DELETED>
                <DELETED>    (B) Requirements.--A person providing the 
                notice described in subparagraph (A) to a user--
                </DELETED>
                        <DELETED>    (i) subject to clause (ii), shall 
                        provide the notice in a clear and conspicuous 
                        manner on the covered internet platform before 
                        the user interacts with content produced by a 
                        generative artificial intelligence system; 
                        and</DELETED>
                        <DELETED>    (ii) may provide an option for the 
                        user to choose to see the notice described in 
                        clause (i) only upon the first interaction of 
                        the user with content produced by a generative 
                        artificial intelligence system.</DELETED>
<DELETED>    (b) Enforcement Action.--Upon learning that a covered 
internet platform does not comply with the requirements under this 
section, the Secretary--</DELETED>
        <DELETED>    (1) shall immediately--</DELETED>
                <DELETED>    (A) notify the covered internet platform 
                of the finding; and</DELETED>
                <DELETED>    (B) order the covered internet platform to 
                take remedial action to address the noncompliance of 
                the generative artificial intelligence system operated 
                by the covered internet platform; and</DELETED>
        <DELETED>    (2) may, as determined appropriate or necessary by 
        the Secretary, take enforcement action under section 208 if the 
        covered internet platform does not take sufficient action to 
        remedy the noncompliance within 15 days of the notification 
        under paragraph (1)(A).</DELETED>
<DELETED>    (c) Effective Date.--This section shall take effect on the 
date that is 180 days after the date of enactment of this 
Act.</DELETED>

<DELETED>SEC. 203. TRANSPARENCY REPORTS FOR HIGH-IMPACT ARTIFICIAL 
              INTELLIGENCE SYSTEMS.</DELETED>

<DELETED>    (a) Transparency Reporting.--</DELETED>
        <DELETED>    (1) In general.--Each deployer of a high-impact 
        artificial intelligence system shall--</DELETED>
                <DELETED>    (A) before deploying the high-impact 
                artificial intelligence system, and annually 
                thereafter, submit to the Secretary a report describing 
                the design and safety plans for the artificial 
                intelligence system; and</DELETED>
                <DELETED>    (B) submit to the Secretary an updated 
                report on the high-impact artificial intelligence 
                system if the deployer makes a material change to--
                </DELETED>
                        <DELETED>    (i) the purpose for which the 
                        high-impact artificial intelligence system is 
                        used; or</DELETED>
                        <DELETED>    (ii) the type of data the high-
                        impact artificial intelligence system processes 
                        or uses for training purposes.</DELETED>
        <DELETED>    (2) Contents.--Each transparency report submitted 
        under paragraph (1) shall include, with respect to the high-
        impact artificial intelligence system--</DELETED>
                <DELETED>    (A) the purpose;</DELETED>
                <DELETED>    (B) the intended use cases;</DELETED>
                <DELETED>    (C) deployment context;</DELETED>
                <DELETED>    (D) benefits;</DELETED>
                <DELETED>    (E) a description of data that the high-
                impact artificial intelligence system, once deployed, 
                processes as inputs;</DELETED>
                <DELETED>    (F) if available--</DELETED>
                        <DELETED>    (i) a list of data categories and 
                        formats the deployer used to retrain or 
                        continue training the high-impact artificial 
                        intelligence system;</DELETED>
                        <DELETED>    (ii) metrics for evaluating the 
                        high-impact artificial intelligence system 
                        performance and known limitations; 
                        and</DELETED>
                        <DELETED>    (iii) transparency measures, 
                        including information identifying to 
                        individuals when a high-impact artificial 
                        intelligence system is in use;</DELETED>
                <DELETED>    (G) processes and testing performed before 
                each deployment to ensure the high-impact artificial 
                intelligence system is safe, reliable, and 
                effective;</DELETED>
                <DELETED>    (H) if applicable, an identification of 
                any third-party artificial intelligence systems or 
                datasets the deployer relies on to train or operate the 
                high-impact artificial intelligence system; 
                and</DELETED>
                <DELETED>    (I) post-deployment monitoring and user 
                safeguards, including a description of the oversight 
                process in place to address issues as issues 
                arise.</DELETED>
<DELETED>    (b) Developer Obligations.--The developer of a high-impact 
artificial intelligence system shall be subject to the same obligations 
as a developer of a critical-impact artificial intelligence system 
under section 206(c).</DELETED>
<DELETED>    (c) Considerations.--In carrying out subsections (a) and 
(b), a deployer or developer of a high-impact artificial intelligence 
system shall consider the best practices outlined in the most recent 
version of the risk management framework developed pursuant to section 
22A(c) of the National Institute of Standards and Technology Act (15 
U.S.C. 278h-1(c)).</DELETED>
<DELETED>    (d) Noncompliance and Enforcement Action.--Upon learning 
that a deployer of a high-impact artificial intelligence system is not 
in compliance with the requirements under this section with respect to 
a high-impact artificial intelligence system, the Secretary--</DELETED>
        <DELETED>    (1) shall immediately--</DELETED>
                <DELETED>    (A) notify the deployer of the finding; 
                and</DELETED>
                <DELETED>    (B) order the deployer to immediately 
                submit to the Secretary the report required under 
                subsection (a)(1); and</DELETED>
        <DELETED>    (2) if the deployer fails to submit the report by 
        the date that is 15 days after the date of the notification 
        under paragraph (1)(A), may take enforcement action under 
        section 208.</DELETED>
<DELETED>    (e) Avoidance of Duplication.--</DELETED>
        <DELETED>    (1) In general.--Pursuant to the deconfliction of 
        duplicative requirements under paragraph (2), the Secretary 
        shall ensure that the requirements under this section are not 
        unnecessarily burdensome or duplicative of requirements made or 
        oversight conducted by a covered agency regarding the non-
        Federal use of high-impact artificial intelligence 
        systems.</DELETED>
        <DELETED>    (2) Deconfliction of duplicative requirements.--
        Not later than 90 days after the date of the enactment of this 
        Act, and annually thereafter, the Secretary, in coordination 
        with the head of any relevant covered agency, shall complete 
        the deconfliction of duplicative requirements relating to the 
        submission of a transparency report for a high-impact 
        artificial intelligence system under this section.</DELETED>
<DELETED>    (f) Rule of Construction.--Nothing in this section shall 
be construed to require a deployer of a high-impact artificial 
intelligence system to disclose any information, including data or 
algorithms--</DELETED>
        <DELETED>    (1) relating to a trade secret or other protected 
        intellectual property right;</DELETED>
        <DELETED>    (2) that is confidential business information; 
        or</DELETED>
        <DELETED>    (3) that is privileged.</DELETED>

<DELETED>SEC. 204. RECOMMENDATIONS TO FEDERAL AGENCIES FOR RISK 
              MANAGEMENT OF HIGH-IMPACT ARTIFICIAL INTELLIGENCE 
              SYSTEMS.</DELETED>

<DELETED>    The National Institute of Standards and Technology Act (15 
U.S.C. 278h-1) is amended by inserting after section 22A the 
following:</DELETED>

<DELETED>``SEC. 22B. RECOMMENDATIONS TO FEDERAL AGENCIES FOR SECTOR-
              SPECIFIC OVERSIGHT OF ARTIFICIAL INTELLIGENCE.</DELETED>

<DELETED>    ``(a) Definition of High-Impact Artificial Intelligence 
System.--In this section, the term `high-impact artificial intelligence 
system' means an artificial intelligence system--</DELETED>
        <DELETED>    ``(1) deployed for purposes other than those 
        solely for use by the Department of Defense or an element of 
        the intelligence community (as defined in section 3 of the 
        National Security Act of 1947 (50 U.S.C. 3003)); and</DELETED>
        <DELETED>    ``(2) that is specifically developed with the 
        intended purpose of making decisions that have a legal or 
        similarly significant effect on the access of an individual to 
        housing, employment, credit, education, health care, or 
        insurance in a manner that poses a significant risk to rights 
        afforded under the Constitution of the United States or to 
        safety.</DELETED>
<DELETED>    ``(b) Sector-Specific Recommendations.--Not later than 1 
year after the date of the enactment of the Artificial Intelligence 
Research, Innovation, and Accountability Act of 2023, the Director 
shall--</DELETED>
        <DELETED>    ``(1) develop sector-specific recommendations for 
        individual Federal agencies to conduct oversight of the non-
        Federal, and, as appropriate, Federal use of high-impact 
        artificial intelligence systems to improve the safe and 
        responsible use of such systems; and</DELETED>
        <DELETED>    ``(2) not less frequently than biennially, update 
        the sector-specific recommendations to account for changes in 
        technological capabilities or artificial intelligence use 
        cases.</DELETED>
<DELETED>    ``(c) Requirements.--In developing recommendations under 
subsection (b), the Director shall use the voluntary risk management 
framework required by section 22A(c) to identify and provide 
recommendations to a Federal agency--</DELETED>
        <DELETED>    ``(1) to establish regulations, standards, 
        guidelines, best practices, methodologies, procedures, or 
        processes to facilitate oversight of non-Federal use of high-
        impact artificial intelligence systems; and</DELETED>
        <DELETED>    ``(2) to mitigate risks from such high-impact 
        artificial intelligence systems.</DELETED>
<DELETED>    ``(d) Recommendations.--In developing recommendations 
under subsection (b), the Director may include the following:</DELETED>
        <DELETED>    ``(1) Key design choices made during high-impact 
        artificial intelligence model development, including rationale 
        and assumptions made.</DELETED>
        <DELETED>    ``(2) Intended use and users, other possible use 
        cases, including any anticipated undesirable or potentially 
        harmful use cases, and what good faith efforts model developers 
        can take to mitigate the use of the system in harmful 
        ways.</DELETED>
        <DELETED>    ``(3) Methods for evaluating the safety of high-
        impact artificial intelligence systems and approaches for 
        responsible use.</DELETED>
        <DELETED>    ``(4) Sector-specific differences in what 
        constitutes acceptable high-impact artificial intelligence 
        model functionality and trustworthiness, metrics used to 
        determine high-impact artificial intelligence model 
        performance, and any test results reflecting application of 
        these metrics to evaluate high-impact artificial intelligence 
        model performance across different sectors.</DELETED>
        <DELETED>    ``(5) Recommendations to support iterative 
        development of subsequent recommendations under subsection 
        (b).</DELETED>
<DELETED>    ``(e) Consultation.--In developing recommendations under 
subsection (b), the Director shall, as the Director considers 
applicable and practicable, consult with relevant covered agencies and 
stakeholders representing perspectives from civil society, academia, 
technologists, engineers, and creators.''.</DELETED>

<DELETED>SEC. 205. OFFICE OF MANAGEMENT AND BUDGET OVERSIGHT OF 
              RECOMMENDATIONS TO AGENCIES.</DELETED>

<DELETED>    (a) Recommendations.--</DELETED>
        <DELETED>    (1) In general.--Not later than 1 year after the 
        date of enactment of this Act, the Under Secretary shall submit 
        to the Director, the head of each covered agency, and the 
        appropriate congressional committees each NIST 
        recommendation.</DELETED>
        <DELETED>    (2) Agency responses to recommendations.--Not 
        later than 90 days after the date on which the Under Secretary 
        submits a NIST recommendation to the head of a covered agency 
        under paragraph (1), the head of the covered agency shall 
        transmit to the Director a formal written response to the NIST 
        recommendation that--</DELETED>
                <DELETED>    (A) indicates whether the head of the 
                covered agency intends to--</DELETED>
                        <DELETED>    (i) carry out procedures to adopt 
                        the complete NIST recommendation;</DELETED>
                        <DELETED>    (ii) carry out procedures to adopt 
                        a part of the NIST recommendation; or</DELETED>
                        <DELETED>    (iii) refuse to carry out 
                        procedures to adopt the NIST recommendation; 
                        and</DELETED>
                <DELETED>    (B) includes--</DELETED>
                        <DELETED>    (i) with respect to a formal 
                        written response described in clause (i) or 
                        (ii) of subparagraph (A), a copy of a proposed 
                        timetable for completing the procedures 
                        described in that clause;</DELETED>
                        <DELETED>    (ii) with respect to a formal 
                        written response described in subparagraph 
                        (A)(ii), the reasons for the refusal to carry 
                        out procedures with respect to the remainder of 
                        the NIST recommendation described in that 
                        subparagraph; and</DELETED>
                        <DELETED>    (iii) with respect to a formal 
                        written response described in subparagraph 
                        (A)(iii), the reasons for the refusal to carry 
                        out procedures.</DELETED>
<DELETED>    (b) Public Availability.--The Director shall make a copy 
of each NIST recommendation and each written formal response of a 
covered agency required under subsection (a)(2) available to the public 
at reasonable cost.</DELETED>
<DELETED>    (c) Reporting Requirements.--</DELETED>
        <DELETED>    (1) Annual secretarial regulatory status 
        reports.--</DELETED>
                <DELETED>    (A) In general.--On the first February 1 
                occurring after the date of enactment of this Act, and 
                annually thereafter until the date described in 
                subparagraph (B), the head of each covered agency shall 
                submit to the Director a report containing the 
                regulatory status of each NIST 
                recommendation.</DELETED>
                <DELETED>    (B) Continued reporting.--The date 
                described in this subparagraph is the date on which the 
                head of a covered agency--</DELETED>
                        <DELETED>    (i) takes final regulatory action 
                        with respect to a NIST recommendation; 
                        and</DELETED>
                        <DELETED>    (ii) determines and states in a 
                        report required under subparagraph (A) that no 
                        regulatory action should be taken with respect 
                        to a NIST recommendation.</DELETED>
        <DELETED>    (2) Compliance report to congress.--On April 1 of 
        each year, the Director shall--</DELETED>
                <DELETED>    (A) review the reports received under 
                paragraph (1)(A); and</DELETED>
                <DELETED>    (B) transmit comments on the reports to 
                the heads of covered agencies and the appropriate 
                congressional committees.</DELETED>
        <DELETED>    (3) Failure to report.--If, on March 1 of each 
        year, the Director has not received a report required under 
        paragraph (1)(A) from the head of a covered agency, the 
        Director shall notify the appropriate congressional committees 
        of the failure.</DELETED>
<DELETED>    (d) Technical Assistance in Carrying Out 
Recommendations.--The Under Secretary shall provide assistance to the 
heads of covered agencies relating to the implementation of the NIST 
recommendations the heads of covered agencies intend to carry 
out.</DELETED>
<DELETED>    (e) Regulation Review and Improvement.--The Administrator 
of the Office of Information and Regulatory Affairs of the Office of 
Management and Budget, in consultation with the Under Secretary, shall 
develop and periodically revise performance indicators and measures for 
sector-specific regulation of artificial intelligence.</DELETED>

<DELETED>SEC. 206. RISK MANAGEMENT ASSESSMENT FOR CRITICAL-IMPACT 
              ARTIFICIAL INTELLIGENCE SYSTEMS.</DELETED>

<DELETED>    (a) Requirement.--</DELETED>
        <DELETED>    (1) In general.--Each critical-impact AI 
        organization shall perform a risk management assessment in 
        accordance with this section.</DELETED>
        <DELETED>    (2) Assessment.--Each critical-impact AI 
        organization shall--</DELETED>
                <DELETED>    (A) not later than 30 days before the date 
                on which a critical-impact artificial intelligence 
                system is made publicly available by the critical-
                impact AI organization, perform a risk management 
                assessment; and</DELETED>
                <DELETED>    (B) not less frequently than biennially 
                during the period beginning on the date of enactment of 
                this Act and ending on the date on which the applicable 
                critical-impact artificial intelligence system is no 
                longer being made publicly available by the critical-
                impact AI organization, as applicable, conduct an 
                updated risk management assessment that--</DELETED>
                        <DELETED>    (i) may find that no significant 
                        changes were made to the critical-impact 
                        artificial intelligence system; and</DELETED>
                        <DELETED>    (ii) provides, to the extent 
                        practicable, aggregate results of any 
                        significant deviation from expected performance 
                        detailed in the assessment performed under 
                        subparagraph (A) or the most recent assessment 
                        performed under this subparagraph.</DELETED>
        <DELETED>    (3) Review.--</DELETED>
                <DELETED>    (A) In general.--Not later than 90 days 
                after the date of completion of a risk management 
                assessment by a critical-impact AI organization under 
                this section, the critical-impact AI organization shall 
                submit to the Secretary a report--</DELETED>
                        <DELETED>    (i) outlining the assessment 
                        performed under this section; and</DELETED>
                        <DELETED>    (ii) that is in a consistent 
                        format, as determined by the 
                        Secretary.</DELETED>
                <DELETED>    (B) Additional information.--Subject to 
                subsection (d), the Secretary may request that a 
                critical-impact AI organization submit to the Secretary 
                any related additional or clarifying information with 
                respect to a risk management assessment performed under 
                this section.</DELETED>
        <DELETED>    (4) Limitation.--The Secretary may not prohibit a 
        critical-impact AI organization from making a critical-impact 
        artificial intelligence system available to the public based on 
        the review by the Secretary of a report submitted under 
        paragraph (3)(A) or additional or clarifying information 
        submitted under paragraph (3)(B).</DELETED>
<DELETED>    (b) Assessment Subject Areas.--Each assessment performed 
by a critical-impact AI organization under subsection (a) shall 
describe the means by which the critical-impact AI organization is 
addressing, through a documented TEVV process, the following 
categories:</DELETED>
        <DELETED>    (1) Policies, processes, procedures, and practices 
        across the organization relating to transparent and effective 
        mapping, measuring, and managing of artificial intelligence 
        risks, including--</DELETED>
                <DELETED>    (A) how the organization understands, 
                manages, and documents legal and regulatory 
                requirements involving artificial 
                intelligence;</DELETED>
                <DELETED>    (B) how the organization integrates 
                characteristics of trustworthy artificial intelligence, 
                which include valid, reliable, safe, secure, resilient, 
                accountable, transparent, globally and locally 
                explainable, interpretable, privacy-enhanced, and fair 
                with harmful bias managed, into organizational 
                policies, processes, procedures, and 
                practices;</DELETED>
                <DELETED>    (C) a methodology to determine the needed 
                level of risk management activities based on the 
                organization's risk tolerance; and</DELETED>
                <DELETED>    (D) how the organization establishes risk 
                management processes and outcomes through transparent 
                policies, procedures, and other controls based on 
                organizational risk priorities.</DELETED>
        <DELETED>    (2) The structure, context, and capabilities of 
        the critical-impact artificial intelligence system or critical-
        impact foundation model, including--</DELETED>
                <DELETED>    (A) how the context was established and 
                understood;</DELETED>
                <DELETED>    (B) capabilities, targeted uses, goals, 
                and expected costs and benefits; and</DELETED>
                <DELETED>    (C) how risks and benefits are mapped for 
                each system component.</DELETED>
        <DELETED>    (3) A description of how the organization employs 
        quantitative, qualitative, or mixed-method tools, techniques, 
        and methodologies to analyze, assess, benchmark, and monitor 
        artificial intelligence risk, including--</DELETED>
                <DELETED>    (A) identification of appropriate methods 
                and metrics;</DELETED>
                <DELETED>    (B) how artificial intelligence systems 
                are evaluated for trustworthy 
                characteristics;</DELETED>
                <DELETED>    (C) mechanisms for tracking artificial 
                intelligence system risks over time; and</DELETED>
                <DELETED>    (D) processes for gathering and assessing 
                feedback relating to the efficacy of 
                measurement.</DELETED>
        <DELETED>    (4) A description of allocation of risk resources 
        to map and measure risks on a regular basis as described in 
        paragraph (1), including--</DELETED>
                <DELETED>    (A) how artificial intelligence risks 
                based on assessments and other analytical outputs 
                described in paragraphs (2) and (3) are prioritized, 
                responded to, and managed;</DELETED>
                <DELETED>    (B) how strategies to maximize artificial 
                intelligence benefits and minimize negative impacts 
                were planned, prepared, implemented, documented, and 
                informed by input from relevant artificial intelligence 
                deployers;</DELETED>
                <DELETED>    (C) management of artificial intelligence 
                system risks and benefits; and</DELETED>
                <DELETED>    (D) regular monitoring of risk treatments, 
                including response and recovery, and communication 
                plans for the identified and measured artificial 
                intelligence risks, as applicable.</DELETED>
<DELETED>    (c) Developer Obligations.--The developer of a critical-
impact artificial intelligence system that agrees through a contract or 
license to provide technology or services to a deployer of the 
critical-impact artificial intelligence system shall provide to the 
deployer of the critical-impact artificial intelligence system the 
information reasonably necessary for the deployer to comply with the 
requirements under subsection (a), including--</DELETED>
        <DELETED>    (1) an overview of the data used in training the 
        baseline artificial intelligence system provided by the 
        developer, including--</DELETED>
                <DELETED>    (A) data size;</DELETED>
                <DELETED>    (B) data sources;</DELETED>
                <DELETED>    (C) copyrighted data; and</DELETED>
                <DELETED>    (D) personal identifiable 
                information;</DELETED>
        <DELETED>    (2) documentation outlining the structure and 
        context of the baseline artificial intelligence system of the 
        developer, including--</DELETED>
                <DELETED>    (A) input modality;</DELETED>
                <DELETED>    (B) output modality;</DELETED>
                <DELETED>    (C) model size; and</DELETED>
                <DELETED>    (D) model architecture;</DELETED>
        <DELETED>    (3) known capabilities, limitations, and risks of 
        the baseline artificial intelligence system of the developer at 
        the time of the development of the artificial intelligence 
        system; and</DELETED>
        <DELETED>    (4) documentation for downstream use, including--
        </DELETED>
                <DELETED>    (A) a statement of intended 
                purpose;</DELETED>
                <DELETED>    (B) guidelines for the intended use of the 
                artificial intelligence system, including a list of 
                permitted, restricted, and prohibited uses and users; 
                and</DELETED>
                <DELETED>    (C) a statement of the potential for 
                deviation from the intended purpose of the baseline 
                artificial intelligence system.</DELETED>
<DELETED>    (d) Termination of Obligation To Disclose Information.--
</DELETED>
        <DELETED>    (1) In general.--The obligation of a critical-
        impact AI organization to provide information, upon request of 
        the Secretary, relating to a specific assessment category under 
        subsection (b) shall end on the date of issuance of a relevant 
        standard applicable to the same category of a critical-impact 
        artificial intelligence system by--</DELETED>
                <DELETED>    (A) the Secretary under section 207(c) 
                with respect to a critical-impact artificial 
                intelligence system;</DELETED>
                <DELETED>    (B) another department or agency of the 
                Federal Government, as determined applicable by the 
                Secretary; or</DELETED>
                <DELETED>    (C) a non-governmental standards 
                organization, as determined appropriate by the 
                Secretary.</DELETED>
        <DELETED>    (2) Effect of new standard.--In adopting any 
        standard applicable to critical-impact artificial intelligence 
        systems under section 207(c), the Secretary shall--</DELETED>
                <DELETED>    (A) identify the category under subsection 
                (b) to which the standard relates, if any; 
                and</DELETED>
                <DELETED>    (B) specify the information that is no 
                longer required to be included in a report required 
                under subsection (a) as a result of the new 
                standard.</DELETED>
<DELETED>    (e) Rule of Construction.--Nothing in this section shall 
be construed to require a critical-impact AI organization, or permit 
the Secretary, to disclose any information, including data or 
algorithms--</DELETED>
        <DELETED>    (1) relating to a trade secret or other protected 
        intellectual property right;</DELETED>
        <DELETED>    (2) that is confidential business information; 
        or</DELETED>
        <DELETED>    (3) that is privileged.</DELETED>

<DELETED>SEC. 207. CERTIFICATION OF CRITICAL-IMPACT ARTIFICIAL 
              INTELLIGENCE SYSTEMS.</DELETED>

<DELETED>    (a) Establishment of Artificial Intelligence Certification 
Advisory Committee.--</DELETED>
        <DELETED>    (1) In general.--Not later than 180 days after the 
        date of enactment of this Act, the Secretary shall establish an 
        advisory committee to provide advice and recommendations on 
        TEVV standards and the certification of critical-impact 
        artificial intelligence systems.</DELETED>
        <DELETED>    (2) Duties.--The advisory committee established 
        under this section shall advise the Secretary on matters 
        relating to the testing and certification of critical-impact 
        artificial intelligence systems, including by--</DELETED>
                <DELETED>    (A) providing recommendations to the 
                Secretary on proposed TEVV standards to ensure such 
                standards--</DELETED>
                        <DELETED>    (i) maximize alignment and 
                        interoperability with standards issued by 
                        nongovernmental standards organizations and 
                        international standards bodies;</DELETED>
                        <DELETED>    (ii) are performance-based and 
                        impact-based; and</DELETED>
                        <DELETED>    (iii) are applicable or necessary 
                        to facilitate the deployment of critical-impact 
                        artificial intelligence systems in a 
                        transparent, secure, and safe manner;</DELETED>
                <DELETED>    (B) reviewing prospective TEVV standards 
                submitted by the Secretary to ensure such standards 
                align with recommendations under subparagraph 
                (A);</DELETED>
                <DELETED>    (C) upon completion of the review under 
                subparagraph (B), providing consensus recommendations 
                to the Secretary on--</DELETED>
                        <DELETED>    (i) whether a TEVV standard should 
                        be issued, modified, revoked, or added; 
                        and</DELETED>
                        <DELETED>    (ii) if such a standard should be 
                        issued, how best to align the standard with the 
                        considerations described in subsection (c)(2) 
                        and recommendations described in subparagraph 
                        (A); and</DELETED>
                <DELETED>    (D) reviewing and providing advice and 
                recommendations on the plan and subsequent updates to 
                the plan submitted under subsection (b).</DELETED>
        <DELETED>    (3) Composition.--The advisory committee 
        established under this subsection shall be composed of not more 
        than 15 members with a balanced composition of representatives 
        of the private sector, institutions of higher education, and 
        non-profit organizations, including--</DELETED>
                <DELETED>    (A) representatives of--</DELETED>
                        <DELETED>    (i) institutions of higher 
                        education;</DELETED>
                        <DELETED>    (ii) companies developing or 
                        operating artificial intelligence 
                        systems;</DELETED>
                        <DELETED>    (iii) consumers or consumer 
                        advocacy groups; and</DELETED>
                        <DELETED>    (iv) enabling technology 
                        companies; and</DELETED>
                <DELETED>    (B) any other members the Secretary 
                considers to be appropriate.</DELETED>
<DELETED>    (b) Artificial Intelligence Certification Plan.--
</DELETED>
        <DELETED>    (1) In general.--Not later than 1 year after the 
        date of enactment of this Act, the Secretary shall establish a 
        3-year implementation plan for the certification of critical-
        impact artificial intelligence systems.</DELETED>
        <DELETED>    (2) Periodic update.--The Secretary shall 
        periodically update the plan established under paragraph 
        (1).</DELETED>
        <DELETED>    (3) Contents.--The plan established under 
        paragraph (1) shall include--</DELETED>
                <DELETED>    (A) a methodology for gathering and using 
                relevant, objective, and available information relating 
                to TEVV;</DELETED>
                <DELETED>    (B) a process for considering whether 
                prescribing certain TEVV standards under subsection (c) 
                for critical-impact artificial intelligence systems is 
                appropriate, necessary, or duplicative of existing 
                international standards;</DELETED>
                <DELETED>    (C) if TEVV standards are considered 
                appropriate, a process for prescribing such standards 
                for critical-impact artificial intelligence systems; 
                and</DELETED>
                <DELETED>    (D) an outline of standards proposed to be 
                issued, including an estimation of the timeline and 
                sequencing of such standards.</DELETED>
        <DELETED>    (4) Consultation.--In developing the plan required 
        under paragraph (1), the Secretary shall consult the 
        following:</DELETED>
                <DELETED>    (A) The National Artificial Intelligence 
                Initiative Office.</DELETED>
                <DELETED>    (B) The interagency committee established 
                under section 5103 of the National Artificial 
                Intelligence Initiative Act of 2020 (15 U.S.C. 
                9413).</DELETED>
                <DELETED>    (C) The National Artificial Intelligence 
                Advisory Committee.</DELETED>
                <DELETED>    (D) Industry consensus standards issued by 
                non-governmental standards organizations.</DELETED>
                <DELETED>    (E) Other departments, agencies, and 
                instrumentalities of the Federal Government, as 
                considered appropriate by the Secretary.</DELETED>
        <DELETED>    (5) Submission to certification advisory 
        committee.--Upon completing the initial plan required under 
        this subsection and upon completing periodic updates to the 
        plan under paragraph (2), the Secretary shall submit the plan 
        to the advisory committee established under subsection (a) for 
        review.</DELETED>
        <DELETED>    (6) Submission to committees of congress.--Upon 
        completing the plan required under this subsection, the 
        Secretary shall submit to the relevant committees of Congress a 
        report containing the plan.</DELETED>
        <DELETED>    (7) Limitation.--The Secretary may not issue TEVV 
        standards under subsection (c) until the date of the submission 
        of the plan under paragraphs (5) and (6).</DELETED>
<DELETED>    (c) Standards.--</DELETED>
        <DELETED>    (1) Standards.--</DELETED>
                <DELETED>    (A) In general.--The Secretary shall issue 
                TEVV standards for critical-impact artificial 
                intelligence systems.</DELETED>
                <DELETED>    (B) Requirements.--Each standard issued 
                under this subsection shall--</DELETED>
                        <DELETED>    (i) be practicable;</DELETED>
                        <DELETED>    (ii) meet the need for safe, 
                        secure, and transparent operations of critical-
                        impact artificial intelligence 
                        systems;</DELETED>
                        <DELETED>    (iii) with respect to a relevant 
                        standard issued by a non-governmental standards 
                        organization that is already in place, align 
                        with and be interoperable with that 
                        standard;</DELETED>
                        <DELETED>    (iv) provide for a mechanism to, 
                        not less frequently than once every 2 years, 
                        solicit public comment and update the standard 
                        to reflect advancements in technology and 
                        system architecture; and</DELETED>
                        <DELETED>    (v) be stated in objective 
                        terms.</DELETED>
        <DELETED>    (2) Considerations.--In issuing TEVV standards for 
        critical-impact artificial intelligence systems under this 
        subsection, the Secretary shall--</DELETED>
                <DELETED>    (A) consider relevant available 
                information concerning critical-impact artificial 
                intelligence systems, including--</DELETED>
                        <DELETED>    (i) transparency reports submitted 
                        under section 203(a);</DELETED>
                        <DELETED>    (ii) risk management assessments 
                        conducted under section 206(a); and</DELETED>
                        <DELETED>    (iii) any additional information 
                        provided to the Secretary pursuant to section 
                        203(a)(1)(B);</DELETED>
                <DELETED>    (B) consider whether a proposed standard 
                is reasonable, practicable, and appropriate for the 
                particular type of critical-impact artificial 
                intelligence system for which the standard is 
                proposed;</DELETED>
                <DELETED>    (C) consult with relevant artificial 
                intelligence stakeholders and review industry standards 
                issued by nongovernmental standards 
                organizations;</DELETED>
                <DELETED>    (D) pursuant to paragraph (1)(B)(iii), 
                consider whether adoption of a relevant standard issued 
                by a nongovernmental standards organization as a TEVV 
                standard is the most appropriate action; and</DELETED>
                <DELETED>    (E) consider whether the standard takes 
                into account--</DELETED>
                        <DELETED>    (i) transparent, replicable, and 
                        objective assessments of critical-impact 
                        artificial intelligence system risk, structure, 
                        capabilities, and design;</DELETED>
                        <DELETED>    (ii) the risk posed to the public 
                        by an applicable critical-impact artificial 
                        intelligence system; and</DELETED>
                        <DELETED>    (iii) the diversity of 
                        methodologies and innovative technologies and 
                        approaches available to meet the objectives of 
                        the standard.</DELETED>
        <DELETED>    (3) Consultation.--Before finalizing a TEVV 
        standard issued under this subsection, the Secretary shall 
        submit the TEVV standard to the advisory committee established 
        under subsection (a) for review.</DELETED>
        <DELETED>    (4) Public comment.--Before issuing any TEVV 
        standard under this subsection, the Secretary shall provide an 
        opportunity for public comment.</DELETED>
        <DELETED>    (5) Cooperation.--In developing a TEVV standard 
        under this subsection, the Secretary may, as determined 
        appropriate, advise, assist, and cooperate with departments, 
        agencies, and instrumentalities of the Federal Government, 
        States, and other public and private agencies.</DELETED>
        <DELETED>    (6) Effective date of standards.--</DELETED>
                <DELETED>    (A) In general.--The Secretary shall 
                specify the effective date of a TEVV standard issued 
                under this subsection in the order issuing the 
                standard.</DELETED>
                <DELETED>    (B) Limitation.--Subject to subparagraph 
                (C), a TEVV standard issued under this subsection may 
                not become effective--</DELETED>
                        <DELETED>    (i) during the 180-day period 
                        following the date on which the TEVV standard 
                        is issued; and</DELETED>
                        <DELETED>    (ii) more than 1 year after the 
                        date on which the TEVV standard is 
                        issued.</DELETED>
                <DELETED>    (C) Exception.--Subparagraph (B) shall not 
                apply to the effective date of a TEVV standard issued 
                under this section if the Secretary--</DELETED>
                        <DELETED>    (i) finds, for good cause shown, 
                        that a different effective date is in the 
                        public interest; and</DELETED>
                        <DELETED>    (ii) publishes the reasons for the 
                        finding under clause (i).</DELETED>
        <DELETED>    (7) Rule of construction.--Nothing in this 
        subsection shall be construed to authorize the Secretary to 
        impose any requirements on or take any enforcement actions 
        under this section or section 208 relating to a critical-impact 
        AI organization before a TEVV standard relating to those 
        requirements is prescribed.</DELETED>
<DELETED>    (d) Exemptions.--</DELETED>
        <DELETED>    (1) Authority to exempt and procedures.--
        </DELETED>
                <DELETED>    (A) In general.--The Secretary may exempt, 
                on a temporary basis, a critical-impact artificial 
                intelligence system from a TEVV standard issued under 
                subsection (c) on terms the Secretary considers 
                appropriate.</DELETED>
                <DELETED>    (B) Renewal.--An exemption under 
                subparagraph (A)--</DELETED>
                        <DELETED>    (i) may be renewed only on 
                        reapplication; and</DELETED>
                        <DELETED>    (ii) shall conform to the 
                        requirements of this paragraph.</DELETED>
                <DELETED>    (C) Proceedings.--</DELETED>
                        <DELETED>    (i) In general.--The Secretary may 
                        begin a proceeding to grant an exemption to a 
                        critical-impact artificial intelligence system 
                        under this paragraph if the critical-impact AI 
                        organization that deployed the critical-impact 
                        artificial intelligence system applies for an 
                        exemption or a renewal of an 
                        exemption.</DELETED>
                        <DELETED>    (ii) Notice and comment.--The 
                        Secretary shall publish notice of the 
                        application under clause (i) and provide an 
                        opportunity to comment.</DELETED>
                        <DELETED>    (iii) Filing.--An application for 
                        an exemption or for a renewal of an exemption 
                        under this paragraph shall be filed at such 
                        time and in such manner and contain such 
                        information as the Secretary may 
                        require.</DELETED>
                <DELETED>    (D) Actions.--The Secretary may grant an 
                exemption under this paragraph upon finding that--
                </DELETED>
                        <DELETED>    (i) the exemption is consistent 
                        with the public interest and this section; 
                        and</DELETED>
                        <DELETED>    (ii) the exemption would 
                        facilitate the development or evaluation of a 
                        feature or characteristic of a critical-impact 
                        artificial intelligence system providing a 
                        safety and security level that is not less than 
                        the TEVV standard level.</DELETED>
        <DELETED>    (2) Disclosure.--Not later than 30 days after the 
        date on which an application is filed under this subsection, 
        the Secretary may make public information contained in the 
        application or relevant to the application, unless the 
        information concerns or is related to a trade secret or other 
        confidential information not relevant to the 
        application.</DELETED>
        <DELETED>    (3) Notice of decision.--The Secretary shall 
        publish in the Federal Register a notice of each decision 
        granting or denying an exemption under this subsection and the 
        reasons for granting or denying that exemption, including a 
        justification with supporting information for the selected 
        approach.</DELETED>
<DELETED>    (e) Self-Certification of Compliance.--</DELETED>
        <DELETED>    (1) In general.--Subject to paragraph (2), with 
        respect to each critical-impact artificial intelligence system 
        of a critical-impact AI organization, the critical-impact AI 
        organization shall certify to the Secretary that the critical-
        impact artificial intelligence system complies with applicable 
        TEVV standards issued under this section.</DELETED>
        <DELETED>    (2) Exception.--A critical-impact AI organization 
        may not issue a certificate under paragraph (1) if, in 
        exercising reasonable care, the critical-impact AI organization 
        has constructive knowledge that the certificate is false or 
        misleading in a material respect.</DELETED>
<DELETED>    (f) Noncompliance Findings and Enforcement Action.--
</DELETED>
        <DELETED>    (1) Finding of noncompliance by secretary.--Upon 
        learning that a critical-impact artificial intelligence system 
        deployed by a critical-impact AI organization does not comply 
        with the requirements under this section, the Secretary shall--
        </DELETED>
                <DELETED>    (A) immediately--</DELETED>
                        <DELETED>    (i) notify the critical-impact AI 
                        organization of the finding; and</DELETED>
                        <DELETED>    (ii) order the critical-impact AI 
                        organization to take remedial action to address 
                        the noncompliance of the artificial 
                        intelligence system; and</DELETED>
                <DELETED>    (B) may, as determined appropriate or 
                necessary by the Secretary, and if the Secretary 
                determines that actions taken by a critical-impact AI 
                organization are insufficient to remedy the 
                noncompliance of the critical-impact AI organization 
                with this section, take enforcement action under 
                section 208.</DELETED>
        <DELETED>    (2) Actions by critical-impact ai organization.--
        If a critical-impact AI organization finds that a critical-
        impact artificial intelligence system deployed by the critical-
        impact AI organization is noncompliant with an applicable TEVV 
        standard issued under this section or the critical-impact AI 
        organization is notified of noncompliance by the Secretary 
        under paragraph (1)(A)(i), the critical-impact AI organization 
        shall--</DELETED>
                <DELETED>    (A) without undue delay, notify the 
                Secretary by certified mail or electronic mail of the 
                noncompliance or receipt of the notification of 
                noncompliance;</DELETED>
                <DELETED>    (B) take remedial action to address the 
                noncompliance; and</DELETED>
                <DELETED>    (C) not later than 10 days after the date 
                of the notification or receipt under subparagraph (A), 
                submit to the Secretary a report containing information 
                on--</DELETED>
                        <DELETED>    (i) the nature and discovery of 
                        the noncompliant aspect of the critical-impact 
                        artificial intelligence system;</DELETED>
                        <DELETED>    (ii) measures taken to remedy such 
                        noncompliance; and</DELETED>
                        <DELETED>    (iii) actions taken by the 
                        critical-impact AI organization to address 
                        stakeholders affected by such 
                        noncompliance.</DELETED>

<DELETED>SEC. 208. ENFORCEMENT.</DELETED>

<DELETED>    (a) In General.--Upon discovering noncompliance with a 
provision of this Act by a deployer of a high-impact artificial 
intelligence system or a critical-impact AI organization, if the 
Secretary determines that actions taken by the critical-impact AI 
organization are insufficient to remedy the noncompliance, the 
Secretary shall take an action described in this section.</DELETED>
<DELETED>    (b) Civil Penalties.--</DELETED>
        <DELETED>    (1) In general.--The Secretary may impose a 
        penalty described in paragraph (2) on a deployer of a high-impact 
        artificial intelligence system or a critical-impact AI 
        organization for each violation by that entity of this Act or 
        any regulation or order issued under this Act.</DELETED>
        <DELETED>    (2) Penalty described.--The penalty described in 
        this paragraph is the greater of--</DELETED>
                <DELETED>    (A) an amount not to exceed $300,000; 
                or</DELETED>
                <DELETED>    (B) an amount that is twice the value of 
                the transaction that is the basis of the violation with 
                respect to which the penalty is imposed.</DELETED>
<DELETED>    (c) Violation With Intent.--</DELETED>
        <DELETED>    (1) In general.--If the Secretary determines that 
        a deployer of a high-impact artificial intelligence system or a 
        critical-impact AI organization intentionally violates this Act 
        or any regulation or order issued under this Act, the Secretary 
        may prohibit the critical-impact AI organization from deploying 
        a critical-impact artificial intelligence system.</DELETED>
        <DELETED>    (2) In addition.--A prohibition imposed under 
        paragraph (1) shall be in addition to any other civil penalties 
        provided under this Act.</DELETED>
<DELETED>    (d) Factors.--The Secretary may by regulation provide 
standards for establishing levels of civil penalty under this section 
based upon factors such as the seriousness of the violation, the 
culpability of the violator, and such mitigating factors as the 
violator's record of cooperation with the Secretary in disclosing the 
violation.</DELETED>
<DELETED>    (e) Civil Action.--</DELETED>
        <DELETED>    (1) In general.--Upon referral by the Secretary, 
        the Attorney General may bring a civil action in a United 
        States district court to--</DELETED>
                <DELETED>    (A) enjoin a violation of section 207; 
                or</DELETED>
                <DELETED>    (B) collect a civil penalty upon a finding 
                of noncompliance with this Act.</DELETED>
        <DELETED>    (2) Venue.--A civil action may be brought under 
        paragraph (1) in the judicial district in which the violation 
        occurred or the defendant is found, resides, or does 
        business.</DELETED>
        <DELETED>    (3) Process.--Process in a civil action under 
        paragraph (1) may be served in any judicial district in which 
        the defendant resides or is found.</DELETED>
<DELETED>    (f) Rule of Construction.--Nothing in this section shall 
be construed to require a developer of a critical-impact artificial 
intelligence system to disclose any information, including data or 
algorithms--</DELETED>
        <DELETED>    (1) relating to a trade secret or other protected 
        intellectual property right;</DELETED>
        <DELETED>    (2) that is confidential business information; 
        or</DELETED>
        <DELETED>    (3) that is privileged.</DELETED>

<DELETED>SEC. 209. ARTIFICIAL INTELLIGENCE CONSUMER 
              EDUCATION.</DELETED>

<DELETED>    (a) Establishment.--Not later than 180 days after the date 
of enactment of this Act, the Secretary shall establish a working group 
relating to responsible education efforts for artificial intelligence 
systems.</DELETED>
<DELETED>    (b) Membership.--</DELETED>
        <DELETED>    (1) In general.--The Secretary shall appoint to 
        serve as members of the working group established under this 
        section not more than 15 individuals with expertise relating to 
        artificial intelligence systems, including--</DELETED>
                <DELETED>    (A) representatives of--</DELETED>
                        <DELETED>    (i) institutions of higher 
                        education;</DELETED>
                        <DELETED>    (ii) companies developing or 
                        operating artificial intelligence 
                        systems;</DELETED>
                        <DELETED>    (iii) consumers or consumer 
                        advocacy groups;</DELETED>
                        <DELETED>    (iv) public health 
                        organizations;</DELETED>
                        <DELETED>    (v) marketing 
                        professionals;</DELETED>
                        <DELETED>    (vi) entities with national 
                        experience relating to consumer education, 
                        including technology education;</DELETED>
                        <DELETED>    (vii) public safety 
                        organizations;</DELETED>
                        <DELETED>    (viii) rural workforce development 
                        advocates;</DELETED>
                        <DELETED>    (ix) enabling technology 
                        companies; and</DELETED>
                        <DELETED>    (x) nonprofit technology industry 
                        trade associations; and</DELETED>
                <DELETED>    (B) any other members the Secretary 
                considers to be appropriate.</DELETED>
        <DELETED>    (2) Compensation.--A member of the working group 
        established under this section shall serve without 
        compensation.</DELETED>
<DELETED>    (c) Duties.--</DELETED>
        <DELETED>    (1) In general.--The working group established 
        under this section shall--</DELETED>
                <DELETED>    (A) identify recommended education and 
                programs that may be voluntarily employed by industry 
                to inform--</DELETED>
                        <DELETED>    (i) consumers and other 
                        stakeholders with respect to artificial 
                        intelligence systems as those systems--
                        </DELETED>
                                <DELETED>    (I) become available; 
                                or</DELETED>
                                <DELETED>    (II) are soon to be made 
                                widely available for public use or 
                                consumption; and</DELETED>
                <DELETED>    (B) submit to Congress, and make available 
                to the public, a report containing the findings and 
                recommendations under subparagraph (A).</DELETED>
        <DELETED>    (2) Factors for consideration.--The working group 
        established under this section shall take into consideration 
        topics relating to--</DELETED>
                <DELETED>    (A) the intent, capabilities, and 
                limitations of artificial intelligence 
                systems;</DELETED>
                <DELETED>    (B) use cases of artificial intelligence 
                applications that improve the lives of the people of the 
                United States, such as improving government efficiency, 
                filling critical roles, and reducing mundane work 
                tasks;</DELETED>
                <DELETED>    (C) artificial intelligence research 
                breakthroughs;</DELETED>
                <DELETED>    (D) engagement and interaction methods, 
                including how to adequately inform consumers of 
                interaction with an artificial intelligence 
                system;</DELETED>
                <DELETED>    (E) human-machine interfaces;</DELETED>
                <DELETED>    (F) emergency fallback 
                scenarios;</DELETED>
                <DELETED>    (G) operational boundary 
                responsibilities;</DELETED>
                <DELETED>    (H) potential mechanisms that could change 
                function behavior in service; and</DELETED>
                <DELETED>    (I) consistent nomenclature and taxonomy 
                for safety features and systems.</DELETED>
        <DELETED>    (3) Consultation.--The Secretary shall consult 
        with the Chair of the Federal Trade Commission with respect to 
        the recommendations of the working group established under this 
        section, as appropriate.</DELETED>
<DELETED>    (d) Termination.--The working group established under this 
section shall terminate on the date that is 2 years after the date of 
enactment of this Act.</DELETED>

SECTION 1. SHORT TITLE.

    This Act may be cited as the ``Artificial Intelligence Research, 
Innovation, and Accountability Act of 2024''.

SEC. 2. TABLE OF CONTENTS.

    The table of contents for this Act is as follows:

Sec. 1. Short title.
Sec. 2. Table of contents.

        TITLE I--ARTIFICIAL INTELLIGENCE RESEARCH AND INNOVATION

Sec. 101. Open data policy amendments.
Sec. 102. Online content authenticity and provenance standards research 
                            and development.
Sec. 103. Standards for detection of anomalous behavior and artificial 
                            intelligence-generated media.
Sec. 104. Comptroller General study on barriers and best practices to 
                            usage of AI in government.

            TITLE II--ARTIFICIAL INTELLIGENCE ACCOUNTABILITY

Sec. 201. Definitions.
Sec. 202. Generative artificial intelligence transparency.
Sec. 203. Transparency reports for high-impact artificial intelligence 
                            systems.
Sec. 204. Guidelines for Federal agencies and plans for oversight of 
                            high-impact artificial intelligence 
                            systems.
Sec. 205. Office of Management and Budget oversight of guidelines and 
                            agency oversight plans.
Sec. 206. Risk management assessment for critical-impact artificial 
                            intelligence systems.
Sec. 207. Certification of critical-impact artificial intelligence 
                            systems.
Sec. 208. Enforcement.
Sec. 209. Developer and deployer overlap.
Sec. 210. Artificial intelligence consumer education.
Sec. 211. Severability.

        TITLE I--ARTIFICIAL INTELLIGENCE RESEARCH AND INNOVATION

SEC. 101. OPEN DATA POLICY AMENDMENTS.

    Section 3502 of title 44, United States Code, is amended--
            (1) in paragraph (22)--
                    (A) by inserting ``or data model'' after ``a data 
                asset''; and
                    (B) by striking ``and'' at the end;
            (2) in paragraph (23), by striking the period at the end 
        and inserting a semicolon; and
            (3) by adding at the end the following:
            ``(24) the term `data model' means a mathematical, 
        economic, or statistical representation of a system or process 
        used to assist in making calculations and predictions, 
        including through the use of algorithms, computer programs, or 
        artificial intelligence systems; and
            ``(25) the term `artificial intelligence system' means a 
        machine-based system that, for explicit or implicit objectives, 
        infers from the input the system receives how to generate 
        outputs, such as predictions, content, recommendations, or 
        decisions that can influence physical or virtual 
        environments.''.

SEC. 102. ONLINE CONTENT AUTHENTICITY AND PROVENANCE STANDARDS RESEARCH 
              AND DEVELOPMENT.

    (a) Research.--
            (1) In general.--Not later than 180 days after the date of 
        the enactment of this Act, the Under Secretary of Commerce for 
        Standards and Technology shall carry out research to facilitate 
        the development and promote the standardization of means to 
        provide authenticity and provenance information for digital 
        content generated by human authors and artificial intelligence 
        systems.
            (2) Elements.--The research carried out pursuant to 
        paragraph (1) shall cover the following:
                    (A) Secure and mandatory methods for appending 
                statements of provenance information to human-generated 
                content through the use of unique credentials, 
                watermarking, or other data- or metadata-based approaches.
                    (B) Methods for the verification of statements of 
                digital content provenance to ensure authenticity, such 
                as watermarking or classifiers, which are trained 
                models that distinguish artificial intelligence-
                generated content.
                    (C) Methods for displaying clear and conspicuous 
                labels of digital content provenance to users.
                    (D) Technologies, applications, or infrastructure 
                needed to facilitate the creation and verification of 
                digital content provenance information.
                    (E) Mechanisms to ensure that any technologies and 
                methods developed under this subsection are minimally 
                burdensome on content producers to implement.
                    (F) Use of digital content transparency 
                technologies to enable attribution for human-created 
                content.
                    (G) Such other related processes, technologies, or 
                applications as the Under Secretary considers 
                appropriate.
            (3) Implementation.--The Under Secretary shall carry out 
        the research required by paragraph (1) as part of the research 
        directives pursuant to section 22A(b)(1) of the National 
        Institute of Standards and Technology Act (15 U.S.C. 278h-
        1(b)(1)).
    (b) Technical Assistance on the Development of Standards.--
            (1) In general.--For methodologies and applications related 
        to content provenance and authenticity deemed by the Under 
        Secretary to be at a readiness level sufficient for 
        standardization, the Under Secretary shall provide technical 
        review and assistance to such other Federal agencies and 
        nongovernmental standards organizations as the Under Secretary 
        considers appropriate.
            (2) Considerations.--In providing any technical review and 
        assistance related to the development of digital content 
        provenance and authenticity standards under this subsection, 
        the Under Secretary may--
                    (A) consider whether a proposed standard is 
                reasonable, practicable, and appropriate for the 
                particular type of media and media environment for 
                which the standard is proposed;
                    (B) consult with relevant stakeholders; and
                    (C) review industry standards issued by 
                nongovernmental standards organizations.
    (c) Pilot Program.--
            (1) In general.--The Under Secretary shall carry out a 
        pilot program to assess the feasibility and advisability of 
        using available technologies and creating guidelines to 
        facilitate the creation and verification of digital content 
        provenance information.
            (2) Locations.--The pilot program required by paragraph (1) 
        shall be carried out at not more than 2 Federal agencies that the 
        Under Secretary shall select for purposes of the pilot program.
            (3) Requirements.--In carrying out the pilot program 
        required by paragraph (1), the Under Secretary shall--
                    (A) apply and evaluate methods for authenticating 
                the origin of and modifications to government-produced 
                digital content, either by Federal Government employees 
                or a private entity under the terms of a government 
                contract, using technology and guidelines described in 
                paragraph (1); and
                    (B) make available to the public digital content 
                embedded with provenance data or other authentication 
                provided by the heads of the Federal agencies selected 
                pursuant to paragraph (2) for the purposes of the pilot 
                program.
            (4) Briefing required.--Not later than 1 year after the 
        date of the enactment of this Act, and annually thereafter 
        until the date described in paragraph (5), the Under Secretary 
        shall brief the Committee on Commerce, Science, and 
        Transportation of the Senate and the Committee on Science, 
        Space, and Technology of the House of Representatives on the 
        findings of the Under Secretary with respect to the pilot 
        program carried out under this subsection.
            (5) Termination.--The pilot program shall terminate on the 
        date that is 10 years after the date of the enactment of this 
        Act.
    (d) Report to Congress.--Not later than 1 year after the date of 
the enactment of this Act, the Under Secretary shall submit to the 
Committee on Commerce, Science, and Transportation of the Senate and 
the Committee on Science, Space, and Technology of the House of 
Representatives a report outlining the progress of standardization 
initiatives relating to requirements under this section, as well as 
recommendations for legislative or administrative action to encourage 
or require the widespread adoption of such initiatives in the United 
States.

SEC. 103. STANDARDS FOR DETECTION OF ANOMALOUS BEHAVIOR AND ARTIFICIAL 
              INTELLIGENCE-GENERATED MEDIA.

    Section 22A(b)(1) of the National Institute of Standards and 
Technology Act (15 U.S.C. 278h-1(b)(1)) is amended--
            (1) by redesignating subparagraph (I) as subparagraph (K);
            (2) in subparagraph (H), by striking ``; and'' and 
        inserting a semicolon; and
            (3) by inserting after subparagraph (H) the following:
                    ``(I) best practices for detecting outputs 
                generated by artificial intelligence systems, including 
                content such as text, audio, images, and videos;
                    ``(J) methods to detect and mitigate anomalous 
                behavior of artificial intelligence systems and 
                safeguards to mitigate potentially adversarial or 
                compromising anomalous behavior; and''.

SEC. 104. COMPTROLLER GENERAL STUDY ON BARRIERS AND BEST PRACTICES TO 
              USAGE OF AI IN GOVERNMENT.

    (a) In General.--Not later than 1 year after the date of enactment 
of this Act, the Comptroller General of the United States shall--
            (1) conduct a review of statutory, regulatory, and other 
        policy barriers to the use of artificial intelligence systems 
        to improve the functionality of the Federal Government; and
            (2) identify best practices for the adoption and 
        responsible use of artificial intelligence systems by the 
        Federal Government, including--
                    (A) ensuring that an artificial intelligence system 
                is proportional to the need of the Federal Government;
                    (B) restrictions on access to and use of an 
                artificial intelligence system based on the 
                capabilities and risks of the artificial intelligence 
                system; and
                    (C) safety measures that ensure that an artificial 
                intelligence system is appropriately limited to 
                necessary data and compartmentalized from other assets 
                of the Federal Government.
    (b) Report.--Not later than 2 years after the date of enactment of 
this Act, the Comptroller General of the United States shall submit to 
the Committee on Commerce, Science, and Transportation of the Senate 
and the Committee on Science, Space, and Technology of the House of 
Representatives a report that--
            (1) summarizes the results of the review conducted under 
        subsection (a)(1) and the best practices identified under 
        subsection (a)(2), including recommendations, as the 
        Comptroller General of the United States considers appropriate;
            (2) describes any laws, regulations, guidance documents, or 
        other policies that may prevent the adoption of artificial 
        intelligence systems by the Federal Government to improve 
        certain functions of the Federal Government, including--
                    (A) data analysis and processing;
                    (B) paperwork reduction;
                    (C) contracting and procurement practices; and
                    (D) other Federal Government services; and
            (3) includes, as the Comptroller General of the United 
        States considers appropriate, recommendations to modify or 
        eliminate barriers to the use of artificial intelligence 
        systems by the Federal Government.

            TITLE II--ARTIFICIAL INTELLIGENCE ACCOUNTABILITY

SEC. 201. DEFINITIONS.

    In this title:
            (1) Appropriate congressional committees.--The term 
        ``appropriate congressional committees'' means--
                    (A) the Committee on Energy and Natural Resources 
                and the Committee on Commerce, Science, and 
                Transportation of the Senate;
                    (B) the Committee on Energy and Commerce of the 
                House of Representatives; and
                    (C) each congressional committee with jurisdiction 
                over an applicable covered agency.
            (2) Artificial intelligence system.--The term ``artificial 
        intelligence system'' means a machine-based system that, for 
        explicit or implicit objectives, infers from the input the 
        system receives how to generate outputs such as predictions, 
        content, recommendations, or decisions that can influence 
        physical or virtual environments.
            (3) Covered agency.--The term ``covered agency'' means an 
        agency for which a guideline is developed under section 
        22B(b)(1) of the National Institute of Standards and Technology 
        Act, as added by section 204 of this Act, including--
                    (A) the Department of Commerce;
                    (B) the Department of State;
                    (C) the Department of Homeland Security;
                    (D) the Department of Health and Human Services;
                    (E) the Department of Agriculture;
                    (F) the Department of Housing and Urban 
                Development;
                    (G) the Department of the Interior;
                    (H) the Department of Education;
                    (I) the Department of Energy;
                    (J) the Department of Labor;
                    (K) the Department of Transportation;
                    (L) the Department of Justice;
                    (M) the Department of the Treasury;
                    (N) the Department of Veterans Affairs; and
                    (O) any other agency the Secretary determines 
                appropriate.
            (4) Critical-impact ai organization.--The term ``critical-
        impact AI organization'' means a nongovernmental organization 
        that serves as the deployer of a critical-impact artificial 
        intelligence system.
            (5) Critical-impact artificial intelligence system.--The 
        term ``critical-impact artificial intelligence system'' means 
        an artificial intelligence system that--
                    (A) is deployed for a purpose other than solely for 
                use by the Department of Defense or an intelligence 
                agency (as defined in section 504(e) of the National 
                Security Act of 1947 (50 U.S.C. 3094(3))); and
                    (B) is used or intended to be used--
                            (i) to make a decision or substantially 
                        replace or facilitate the discretionary human 
                        decisionmaking process regarding--
                                    (I) the real-time or ex post facto 
                                collection or analysis of biometric 
                                data of a natural person by biometric 
                                identification systems without the 
                                consent of the natural person;
                                    (II) an operational component 
                                involved in the direct management of 
                                infrastructure determined by the 
                                Secretary of Homeland Security to be 
                                critical infrastructure (as defined in 
                                section 1016(e) of the USA PATRIOT Act 
                                (42 U.S.C. 5195c(e))) that is--
                                            (aa) transportation 
                                        infrastructure;
                                            (bb) energy infrastructure;
                                            (cc) electrical 
                                        infrastructure;
                                            (dd) communications 
                                        infrastructure;
                                            (ee) manufacturing 
                                        infrastructure; or
                                            (ff) infrastructure used in 
                                        the supply and production of 
                                        water and hazardous materials; 
                                        or
                                    (III) a government or government 
                                contractor's actions pertaining to 
                                criminal justice (as defined in section 
                                901 of title I of the Omnibus Crime 
                                Control and Safe Streets Act of 1968 
                                (34 U.S.C. 10251)); and
                            (ii) in a manner that poses a significant 
                        risk to safety or violates rights afforded 
                        under the Constitution of the United States.
            (6) Deployer.--The term ``deployer''--
                    (A) means an entity that--
                            (i) uses or operates an artificial 
                        intelligence system for internal use or for use 
                        by a third party;
                            (ii) substantially modifies an artificial 
                        intelligence system, or trains an artificial 
                        intelligence system using new data, for 
                        internal use or for use by a third party; or
                            (iii) performs the functions described in 
                        clauses (i) and (ii); and
                    (B) does not include an entity that is solely an 
                end user of a system.
            (7) Developer.--The term ``developer'' means an entity 
        that--
                    (A) initially designs, codes, produces, or owns an 
                artificial intelligence system for internal use or for 
                use by a third party as a baseline model; and
                    (B) is not a deployer of the artificial 
                intelligence system described in subparagraph (A).
            (8) End user.--The term ``end user'' means an entity that, 
        with respect to an artificial intelligence system procured from 
        a deployer for which the deployer submits a transparency report 
        under section 203 or a risk management assessment under section 
        206--
                    (A) uses or operates the artificial intelligence 
                system; and
                    (B) does not substantially edit or modify the 
                artificial intelligence system.
            (9) Generative artificial intelligence system.--The term 
        ``generative artificial intelligence system'' means an 
        artificial intelligence system that generates output, such as 
        data or content in a written, audio, or visual format.
            (10) High-impact artificial intelligence system.--The term 
        ``high-impact artificial intelligence system'' means an 
        artificial intelligence system--
                    (A) deployed for a purpose other than solely for 
                use by the Department of Defense or an intelligence 
                agency (as defined in section 504(e) of the National 
                Security Act of 1947 (50 U.S.C. 3094(3))); and
                    (B) that is specifically deployed to make a 
                decision or substantially replace the discretionary 
                human decisionmaking process regarding the access of an 
                individual to housing, employment, credit, education, 
                healthcare, government services, or insurance in a 
                manner that poses a significant risk to safety or 
                violates rights afforded under the Constitution of the 
                United States or Federal law.
            (11) Online platform.--The term ``online platform'' means 
        any public-facing website, online service, online application, 
        or mobile application that predominantly provides a community 
        forum for user-generated content, such as sharing videos, 
        images, games, audio files, or other content, including a 
        social media service, social network, or virtual reality 
        environment.
            (12) Secretary.--The term ``Secretary'' means the Secretary 
        of Commerce.
            (13) Significant risk.--The term ``significant risk'' means 
        the risk of--
                    (A) high-impact, severe, high-intensity, or long-
                duration harm to individuals; or
                    (B) a high probability of substantial harm to 
                individuals.
            (14) TEVV.--The term ``TEVV'' means the testing, 
        evaluation, validation, and verification of any artificial 
        intelligence system that includes--
                    (A) open, transparent, testable, and verifiable 
                specifications that characterize realistic operational 
                performance, such as validity and reliability for 
                relevant tasks;
                    (B) testing methodologies and metrics that enable 
                the evaluation of system trustworthiness, including 
                robustness and resilience;
                    (C) data quality standards for training and testing 
                datasets;
                    (D) requirements for system validation and 
                integration into production environments, automated 
                testing, and compliance with existing legal and 
                regulatory specifications;
                    (E) methods and tools for--
                            (i) the monitoring of system behavior;
                            (ii) the tracking of incidents or errors 
                        reported and their management; and
                            (iii) the detection of emergent properties 
                        and related impacts; and
                    (F) processes for redress and response.
            (15) Under secretary.--The term ``Under Secretary'' means 
        the Director of the National Institute of Standards and 
        Technology.

SEC. 202. GENERATIVE ARTIFICIAL INTELLIGENCE TRANSPARENCY.

    (a) Prohibition.--
            (1) Disclosure of use of generative artificial intelligence 
        systems.--
                    (A) In general.--A person operating an online 
                platform that uses a generative artificial intelligence 
                system shall provide notice to each user of the online 
                platform that the online platform uses a generative 
                artificial intelligence system to generate content the 
                user sees.
                    (B) Requirements.--A person providing the notice 
                described in subparagraph (A) to a user--
                            (i) subject to clause (ii), shall provide 
                        the notice in a clear and conspicuous manner on 
                        the online platform before the user interacts 
                        with content produced by a generative 
                        artificial intelligence system used by the 
                        online platform; and
                            (ii) may provide an option for the user to 
                        choose to see the notice described in clause 
                        (i) only upon the first interaction of the user 
                        with content produced by a generative 
                        artificial intelligence system.
    (b) Enforcement Action.--Upon learning that a person operating an 
online platform violates this section after receiving a report of 
noncompliance or pursuant to an investigation conducted under section 
208(f), the Secretary--
            (1) shall immediately--
                    (A) notify the person operating the online platform 
                of the finding; and
                    (B) order the person operating the online platform 
                to take remedial action to address the noncompliance of 
                the generative artificial intelligence system operated 
                by the online platform; and
            (2) may, as determined appropriate or necessary by the 
        Secretary, take enforcement action under section 208 if the 
        person operating the online platform does not take sufficient 
        action to remedy the noncompliance by the date that is 15 days 
        after the notification issued under paragraph (1)(A).
    (c) Effective Date.--This section shall take effect on the date 
that is 180 days after the date of enactment of this Act.

SEC. 203. TRANSPARENCY REPORTS FOR HIGH-IMPACT ARTIFICIAL INTELLIGENCE 
              SYSTEMS.

    (a) Transparency Reporting.--
            (1) In general.--Each deployer of a high-impact artificial 
        intelligence system shall--
                    (A) before deploying the high-impact artificial 
                intelligence system, and annually thereafter, submit to 
                the Secretary a transparency report for the high-impact 
                artificial intelligence system; and
                    (B) submit to the Secretary an updated transparency 
                report on the high-impact artificial intelligence 
                system if the deployer makes a material change to--
                            (i) the purpose for which the high-impact 
                        artificial intelligence system is used; or
                            (ii) the type of data or content the high-
                        impact artificial intelligence system processes 
                        or uses for training purposes.
            (2) Contents.--Each transparency report submitted under 
        paragraph (1) by a deployer of a high-impact artificial 
        intelligence system shall include--
                    (A) with respect to the organization of the 
                deployer--
                            (i) policies, processes, procedures, and 
                        practices across the organization relating to 
                        transparent and effective mapping, measuring, 
                        and managing of artificial intelligence risks, 
                        including--
                                    (I) how the organization 
                                understands, manages, and documents 
                                legal and regulatory requirements 
                                involving artificial intelligence;
                                    (II) how the organization 
                                integrates characteristics of 
                                trustworthy artificial intelligence, 
                                which include valid, reliable, safe, 
                                secure, resilient, accountable, 
                                transparent, globally and locally 
                                explainable, interpretable, privacy-
                                 enhanced, protecting of rights 
                                under the Constitution of the United 
                                States, and compliant with all relevant 
                                Federal laws, into organizational 
                                policies, processes, procedures, and 
                                practices;
                                    (III) a methodology to determine 
                                the needed level of risk management 
                                activities based on the risk tolerance 
                                of the organization; and
                                    (IV) how the organization 
                                establishes risk management processes 
                                and outcomes through transparent 
                                policies, procedures, and other 
                                controls based on organizational risk 
                                priorities;
                    (B) the structure, context, and capabilities of the 
                high-impact artificial intelligence system, including--
                            (i) how the context was established and 
                        understood;
                            (ii) capabilities, targeted uses, goals, 
                        and expected costs and benefits; and
                            (iii) how risks and benefits are mapped for 
                        each system component;
                    (C) a description of how the organization of the 
                deployer employs quantitative, qualitative, or mixed-
                method tools, techniques, and methodologies to analyze, 
                assess, benchmark, and monitor artificial intelligence 
                risk, including--
                            (i) identification of appropriate methods 
                        and metrics;
                            (ii) how artificial intelligence systems 
                        are evaluated for characteristics of 
                        trustworthy artificial intelligence;
                            (iii) mechanisms for tracking artificial 
                        intelligence system risks over time; and
                            (iv) processes for gathering and assessing 
                        feedback relating to the efficacy of 
                        measurement; and
                    (D) a description of allocation of risk resources 
                to map and measure risks on a regular basis, 
                including--
                            (i) how artificial intelligence risks based 
                        on assessments and other analytical outputs are 
                        prioritized, responded to, and managed;
                            (ii) how strategies to maximize artificial 
                        intelligence benefits and minimize negative 
                        impacts were planned, prepared, implemented, 
                        documented, and informed by input from relevant 
                        artificial intelligence deployers;
                            (iii) management of artificial intelligence 
                        system risks and benefits; and
                            (iv) regular monitoring of risk treatments, 
                        including response and recovery, and 
                        communication plans for the identified and 
                        measured artificial intelligence risks, as 
                        applicable.
            (3) Developer obligations.--The developer of a high-impact 
        artificial intelligence system that agrees to provide 
        technologies or services to a deployer of the high-impact 
        artificial intelligence system shall provide to the deployer of 
        the high-impact artificial intelligence system the information 
        reasonably necessary for compliance with paragraph (1), 
        including--
                    (A) an overview of the data used in training the 
                baseline artificial intelligence system provided by the 
                developer, including--
                            (i) size of datasets used;
                            (ii) content and data sources and types of 
                        data used;
                            (iii) content and data that may be subject 
                        to copyright protection and any steps taken to 
                        remove such content and data prior to training 
                        or other uses; and
                            (iv) whether and to what extent personal 
                        identifiable information makes up a portion of 
                        the training dataset, and what risk mitigation 
                        measures have been taken to prevent the 
                        disclosure of that personal identifiable 
                        information;
                    (B) documentation outlining the structure and 
                context of the baseline artificial intelligence system 
                of the developer, including--
                            (i) input modality;
                            (ii) system output and modality;
                            (iii) model size; and
                            (iv) model architecture;
                    (C) known or reasonably foreseeable capabilities, 
                limitations, and risks of the baseline artificial 
                intelligence system at the time of the development of 
                the artificial intelligence system; and
                    (D) documentation for downstream use, including--
                            (i) a statement of intended purpose;
                            (ii) guidelines for the intended use of the 
                        artificial intelligence system, including a 
                        list of permitted, restricted, and prohibited 
                        uses and users; and
                            (iii) a description of the potential for 
                        and risk of deviation from the intended purpose 
                        of the baseline artificial intelligence system, 
                        including recommended safeguards to mitigate 
                        and prevent risks to safety or to rights 
                        afforded under the Constitution of the United 
                        States or Federal law.
            (4) Considerations.--In carrying out this subsection, a 
        deployer or developer of a high-impact artificial intelligence 
        system shall consider the best practices outlined in the most 
        recent version of the risk management framework developed 
        pursuant to section 22A(c) of the National Institute of 
        Standards and Technology Act (15 U.S.C. 278h-1(c)).
    (b) Noncompliance and Enforcement Action.--Upon learning that a 
deployer of a high-impact artificial intelligence system violates this 
section with respect to a high-impact artificial intelligence system 
after receiving a report of noncompliance or pursuant to an 
investigation conducted under section 208(f), the Secretary--
            (1) shall immediately--
                    (A) notify the deployer of the finding; and
                    (B) order the deployer to immediately submit to the 
                Secretary the report required under subsection (a)(1); 
                and
            (2) if the deployer fails to submit the report by the date 
        that is 15 days after the date of the notification under 
        paragraph (1)(A), may take enforcement action under section 
        208.
    (c) Avoidance of Duplication.--With respect to a developer or 
deployer of a high-impact artificial intelligence system that maintains 
policies and procedures for risk management in accordance with any 
applicable rules, regulations, or supervisory guidance promulgated by a 
relevant Federal agency, the Secretary shall deem the developer or 
deployer to be in compliance with this section.
    (d) Rule of Construction.--Nothing in this section shall be 
construed to require a deployer of a high-impact artificial 
intelligence system to disclose any information, including data, 
content, or algorithms--
            (1) constituting a trade secret or other intellectual 
        property right; or
            (2) that is confidential business information.
    (e) Consolidation.--With respect to an instance in which multiple 
deployers participate in the deployment of a high-impact artificial 
intelligence system, the Secretary may establish through regulation a 
process under which the deployers may submit a single transparency 
report under subsection (a).

SEC. 204. GUIDELINES FOR FEDERAL AGENCIES AND PLANS FOR OVERSIGHT OF 
              HIGH-IMPACT ARTIFICIAL INTELLIGENCE SYSTEMS.

    (a) Guidelines for Federal Agencies for Oversight of Artificial 
Intelligence.--The National Institute of Standards and Technology Act 
(15 U.S.C. 271 et seq.) is amended by inserting after section 22A (15 
U.S.C. 278h-1) the following:

``SEC. 22B. GUIDELINES FOR FEDERAL AGENCIES FOR OVERSIGHT OF ARTIFICIAL 
              INTELLIGENCE.

    ``(a) Definition of High-impact Artificial Intelligence System.--In 
this section, the term `high-impact artificial intelligence system' 
means an artificial intelligence system--
            ``(1) deployed for purposes other than those solely for use 
        by the Department of Defense or an element of the intelligence 
        community (as defined in section 3 of the National Security Act 
        of 1947 (50 U.S.C. 3003)); and
            ``(2) that is specifically deployed to make a decision or 
        substantially replace the discretionary human decisionmaking 
        process regarding the access of an individual to housing, 
        employment, credit, education, health care, government 
        services, or insurance in a manner that poses a significant 
        risk to safety or violates rights afforded under the 
        Constitution of the United States.
    ``(b) Guidelines for Oversight of High-impact Artificial 
Intelligence Systems.--Not later than 1 year after the date of the 
enactment of the Artificial Intelligence Research, Innovation, and 
Accountability Act of 2024, the Director shall--
            ``(1) develop guidelines for Federal agencies to conduct 
        oversight of the non-Federal and, as may be appropriate, 
        Federal use of high-impact artificial intelligence systems to 
        improve the safe and responsible use of such systems; and
            ``(2) not less frequently than biennially, update the 
        guidelines to account for changes in technological capabilities 
        or artificial intelligence use cases.
    ``(c) Use of Voluntary Risk Management Framework.--In developing 
guidelines under subsection (b), the Director shall use the voluntary 
risk management framework required by section 22A(c) to identify and 
provide guidelines for Federal agencies on establishing regulations, 
standards, guidelines, best practices, methodologies, procedures, or 
processes--
            ``(1) to facilitate oversight of non-Federal use of high-
        impact artificial intelligence systems; and
            ``(2) to mitigate risks from such high-impact artificial 
        intelligence systems.
    ``(d) Authorized Elements.--In developing guidelines under 
subsection (b), the Director may include the following:
            ``(1) Key design choices made during high-impact artificial 
        intelligence model development, including rationale and 
        assumptions made.
            ``(2) Intended use and users, other possible use cases, 
        including any anticipated undesirable or potentially harmful 
        use cases, and what good faith efforts model developers can 
        take to mitigate the harms caused by the use of the system.
            ``(3) Methods for evaluating the safety of high-impact 
        artificial intelligence systems and approaches for responsible 
        use.
    ``(e) Consultation.--In developing guidelines under subsection (b), 
the Director may consult with such stakeholders representing 
perspectives from civil society, academia, technologists, engineers, 
and creators as the Director considers applicable, practicable, and 
relevant.''.
    (b) Agency-specific Plans for Oversight of High-impact Artificial 
Intelligence Systems.--
            (1) Plans required.--Not later than 2 years after the date 
        of the enactment of this Act, the head of each covered agency 
        shall--
                    (A) develop sector-specific plans for the covered 
                agency to conduct oversight of the non-Federal and, as 
                may be appropriate, Federal use of high-impact 
                artificial intelligence systems to improve the safe and 
                responsible use of such systems; and
                    (B) not less frequently than biennially, update the 
                sector-specific plans to account for changes 
                in technological capabilities or artificial 
                intelligence use cases.
            (2) Requirements.--In developing plans under paragraph (1), 
        the head of each covered agency shall follow the guidelines 
        established under section 22B(b) of the National Institute of 
        Standards and Technology Act, as added by subsection (a), to 
        develop plans to mitigate risks from such high-impact 
        artificial intelligence systems.
            (3) Authorized elements.--In developing plans under 
        paragraph (1), the head of a covered agency may include the 
        following:
                    (A) Intended use and users, other possible use 
                cases, including any anticipated undesirable or 
                potentially harmful use cases, and what good faith 
                efforts model developers can take to mitigate the use 
                of the system in harmful ways.
                    (B) Methods for evaluating the safety of high-
                impact artificial intelligence systems and approaches 
                for responsible use.
                    (C) Sector-specific differences in what constitutes 
                acceptable high-impact artificial intelligence model 
                functionality and trustworthiness, metrics used to 
                determine high-impact artificial intelligence model 
                performance, and any test results reflecting 
                application of these metrics to evaluate high-impact 
                artificial intelligence model performance across 
                different sectors.
                    (D) Recommendations to support iterative 
                development of subsequent plans under 
                paragraph (1).
            (4) Consultation.--In developing plans under paragraph (1), 
        the head of each covered agency shall consult with--
                    (A) the Under Secretary; and
                    (B) such stakeholders representing perspectives 
                from civil society, academia, technologists, engineers, 
                and creators as the head of the agency considers 
                applicable, practicable, and relevant.

SEC. 205. OFFICE OF MANAGEMENT AND BUDGET OVERSIGHT GUIDELINES AND 
              AGENCY OVERSIGHT PLANS.

    (a) Agency Oversight Plan.--In this section, the term ``agency 
oversight plan'' means a guideline developed under section 22B(b)(1) of 
the National Institute of Standards and Technology Act, as added by 
section 204 of this Act.
    (b) Recommendations.--Not later than 2 years after the date of 
enactment of this Act, the Under Secretary and the head of each covered 
agency shall submit to the Director of the Office of Management and 
Budget and the appropriate congressional committees each agency 
oversight plan.
    (c) Reporting Requirements.--
            (1) Annual agency oversight status reports.--
                    (A) In general.--On the first February 1 occurring 
                after the date that is 2 years after the date of 
                enactment of this Act, and annually thereafter until 
                the date described in subparagraph (B), the head of 
                each covered agency shall submit to the Director of the 
                Office of Management and Budget a report containing the 
                implementation status of each agency oversight plan.
                    (B) Continued reporting.--The date described in 
                this subparagraph is the date on which the head of a 
                covered agency--
                            (i) takes final implementation action with 
                        respect to an agency oversight plan; and
                            (ii) determines and states in a report 
                        required under subparagraph (A) that no further 
                        implementation action should be taken with 
                        respect to an agency oversight plan.
            (2) Compliance report to congress.--On April 1 of each year 
        occurring after the date that is 2 years after the date of 
        enactment of this Act, the Director of the Office of Management 
        and Budget shall transmit comments on the reports required 
        under paragraph (1) to the heads of covered agencies and the 
        appropriate congressional committees.
            (3) Failure to report.--If, on March 1 of each year 
        occurring after the date that is 2 years after the date of 
        enactment of this Act, the Director of the Office of Management 
        and Budget has not received a report required from the head of 
        a covered agency under paragraph (1), the Director shall notify 
        the appropriate congressional committees of the failure.
    (d) Technical Assistance in Carrying Out Agency Oversight Plans.--
The Under Secretary shall provide assistance to the heads of covered 
agencies relating to the implementation of the agency oversight plan 
the heads of covered agencies intend to carry out.
    (e) Regulation Review and Improvement.--The Administrator of the 
Office of Information and Regulatory Affairs of the Office of 
Management and Budget, in consultation with the Under Secretary, shall 
develop and periodically revise performance indicators and measures for 
sector-specific regulation of artificial intelligence.

SEC. 206. RISK MANAGEMENT ASSESSMENT FOR CRITICAL-IMPACT ARTIFICIAL 
              INTELLIGENCE SYSTEMS.

    (a) Requirement.--
            (1) In general.--Each critical-impact AI organization shall 
        perform a risk management assessment in accordance with this 
        section.
            (2) Assessment.--Each critical-impact AI organization 
        shall--
                    (A) not later than 30 days before the date on which 
                a critical-impact artificial intelligence system is 
                deployed or made publicly available by the critical-
                impact AI organization, perform a risk management 
                assessment; and
                    (B) not less frequently than biennially during the 
                period beginning on the date of enactment of this Act 
                and ending on the date on which the applicable 
                critical-impact artificial intelligence system is no 
                longer being deployed or made publicly available by the 
                critical-impact AI organization, as applicable, conduct 
                an updated risk management assessment that--
                            (i) if no significant changes were made to 
                        the critical-impact artificial intelligence 
                        system, may find that no significant changes 
                        were made to the critical-impact artificial 
                        intelligence system; and
                            (ii) provides, to the extent practicable, 
                        aggregate results of any significant deviation 
                        from expected performance detailed in the 
                        assessment performed under subparagraph (A) or 
                        the most recent assessment performed under this 
                        subparagraph.
            (3) Review.--
                    (A) In general.--Not later than 90 days after the 
                date of completion of a risk management assessment by a 
                critical-impact AI organization under this section, the 
                critical-impact AI organization shall submit to the 
                Secretary a report--
                            (i) outlining the assessment performed 
                        under this section; and
                            (ii) that is in a consistent format, as 
                        determined by the Secretary.
                    (B) Additional information.--Subject to subsection 
                (d), the Secretary may request that a critical-impact 
                AI organization submit to the Secretary any related 
                additional or clarifying information with respect to a 
                risk management assessment performed under this 
                section.
            (4) Limitation.--The Secretary may not prohibit a critical-
        impact AI organization from making a critical-impact artificial 
        intelligence system available to the public based on the review 
        by the Secretary of a report submitted under paragraph (3)(A) 
        or additional or clarifying information submitted under 
        paragraph (3)(B).
    (b) Assessment Subject Areas.--Each assessment performed by a 
critical-impact AI organization under subsection (a) shall describe the 
means by which the critical-impact AI organization is addressing, 
through a documented TEVV process, the following categories:
            (1) Policies, processes, procedures, and practices across 
        the organization relating to transparent and effective mapping, 
        measuring, and managing of artificial intelligence risks, 
        including--
                    (A) how the organization understands, manages, and 
                documents legal and regulatory requirements involving 
                critical-impact artificial intelligence systems;
                    (B) how the organization integrates the 
                characteristics of trustworthy artificial intelligence, 
                which include valid, reliable, safe, secure, resilient, 
                accountable, transparent, globally and locally 
                explainable, interpretable, privacy-enhanced, 
                protecting of rights under the Constitution of the 
                United States, and compliant with all relevant Federal 
                laws, into organizational policies, processes, 
                procedures, and practices for deploying critical-impact 
                artificial intelligence systems;
                    (C) a methodology to determine the needed level of 
                risk management activities for critical-impact 
                artificial intelligence systems based on the 
                organization's risk tolerance; and
                    (D) how the organization establishes risk 
                management processes and outcomes through transparent 
                policies, procedures, and other controls based on 
                organizational risk priorities.
            (2) The structure, context, and capabilities of the 
        critical-impact artificial intelligence system, including--
                    (A) how the context was established and understood;
                    (B) capabilities, targeted uses, goals, and 
                expected costs and benefits; and
                    (C) how risks and benefits are mapped for each 
                system component.
            (3) A description of how the organization employs 
        quantitative, qualitative, or mixed-method tools, techniques, 
        and methodologies to analyze, assess, benchmark, and monitor 
        artificial intelligence risk, including--
                    (A) identification of appropriate methods and 
                metrics;
                    (B) how artificial intelligence systems are 
                evaluated for characteristics of trustworthy artificial 
                intelligence;
                    (C) mechanisms for tracking artificial intelligence 
                system risks over time; and
                    (D) processes for gathering and assessing feedback 
                relating to the efficacy of measurement.
            (4) A description of allocation of risk resources to map 
        and measure risks on a regular basis as described in paragraph 
        (1), including--
                    (A) how artificial intelligence risks based on 
                assessments and other analytical outputs described in 
                paragraphs (2) and (3) are prioritized, responded to, 
                and managed;
                    (B) how strategies to maximize artificial 
                intelligence benefits and minimize negative impacts 
                were planned, prepared, implemented, documented, and 
                informed by input from relevant artificial intelligence 
                deployers;
                    (C) management of artificial intelligence system 
                risks and benefits; and
                    (D) regular monitoring of risk treatments, 
                including response and recovery, and communication 
                plans for the identified and measured artificial 
                intelligence risks, as applicable.
    (c) Developer Obligations.--The developer of a critical-impact 
artificial intelligence system that agrees to provide technologies or 
services to a deployer of the critical-impact artificial intelligence 
system shall provide to the deployer of the critical-impact artificial 
intelligence system the information reasonably necessary for the 
deployer to comply with the requirements under subsection (a), 
including--
            (1) an overview of the data used in training the baseline 
        artificial intelligence system provided by the developer, 
        including--
                    (A) content and size of datasets used;
                    (B) content and types of data used;
                    (C) content and data that may be subject to 
                copyright protection, and any steps taken to remove 
                such content and data prior to training; and
                    (D) whether and to what extent personal 
                identifiable information makes up a portion of the 
                training dataset, and what risk mitigation measures 
                have been taken to prevent the disclosure of that 
                personal identifiable information;
            (2) documentation outlining the structure and context of 
        the baseline artificial intelligence system of the developer, 
        including--
                    (A) input modality;
                    (B) system output and modality;
                    (C) model size; and
                    (D) model architecture;
            (3) known or reasonably foreseeable capabilities, 
        limitations, and risks of the baseline artificial intelligence 
        system at the time of the development of the artificial 
        intelligence system; and
            (4) documentation for downstream use, including--
                    (A) a statement of intended purpose;
                    (B) guidelines for the intended use of the 
                artificial intelligence system, including a list of 
                permitted, restricted, and prohibited uses and users; 
                and
                    (C) a description of the potential for and risk of 
                deviation from the intended purpose of the baseline 
                artificial intelligence system, including recommended 
                safeguards to mitigate and prevent risks to safety or 
                to rights afforded under the Constitution of the United 
                States or Federal law.
    (d) Termination of Obligation to Disclose Information.--
            (1) In general.--The obligation of a critical-impact AI 
        organization to provide information, upon a request of the 
        Secretary, relating to a specific assessment category under 
        subsection (b) shall end on the date of issuance of a relevant 
        standard applicable to the same category of a critical-impact 
        artificial intelligence system by--
                    (A) the Secretary under section 207(c) with respect 
                to a critical-impact artificial intelligence system;
                    (B) another department or agency of the Federal 
                Government, as determined applicable by the Secretary; 
                or
                    (C) a nongovernmental standards organization, as 
                determined appropriate by the Secretary.
            (2) Effect of new standard.--In adopting any standard 
        applicable to critical-impact artificial intelligence systems 
        under section 207(c), the Secretary shall--
                    (A) identify the category under subsection (b) to 
                which the standard relates, if any; and
                    (B) specify the information that is no longer 
                required to be included in a report required under 
                subsection (a) as a result of the new standard.
    (e) Rule of Construction.--Nothing in this section shall be 
construed to require a critical-impact AI organization or permit the 
Secretary to disclose any information, including data or algorithms--
            (1) constituting a trade secret or other intellectual 
        property right; or
            (2) that is confidential business information.
    (f) Consolidation.--With respect to an instance in which multiple 
critical-impact AI organizations participate in the deployment of a 
critical-impact artificial intelligence system, the Secretary may establish 
through regulation a process under which the critical-impact AI 
organizations may submit a single risk management assessment under 
subsection (a).

SEC. 207. CERTIFICATION OF CRITICAL-IMPACT ARTIFICIAL INTELLIGENCE 
              SYSTEMS.

    (a) Establishment of Artificial Intelligence Certification Advisory 
Committee.--
            (1) In general.--Not later than 180 days after the date of 
        enactment of this Act, the Secretary shall establish an 
        advisory committee to provide advice and recommendations on 
        TEVV standards and the certification of critical-impact 
        artificial intelligence systems.
            (2) Duties.--The advisory committee established under this 
        section shall advise the Secretary on matters relating to the 
        testing and certification of critical-impact artificial 
        intelligence systems, including by--
                    (A) providing recommendations to the Secretary on 
                proposed TEVV standards to ensure such standards--
                            (i) maximize alignment and interoperability 
                        with standards issued by nongovernmental 
                        standards organizations and international 
                        standards bodies; and
                            (ii) are performance-based, impact-based, 
                        and risk-based;
                    (B) reviewing prospective TEVV standards submitted 
                by the Secretary to ensure such standards align with 
                recommendations under subparagraph (A);
                    (C) upon completion of the review under 
                subparagraph (B), providing consensus recommendations 
                to the Secretary on--
                            (i) whether a TEVV standard should be 
                        issued, modified, revoked, or added; and
                            (ii) if such a standard should be issued, 
                        how best to align the standard with the 
                        considerations described in subsection (c)(2) 
                        and recommendations described in subparagraph 
                        (A); and
                    (D) reviewing and providing advice and 
                recommendations on the plan and subsequent updates to 
                the plan submitted under subsection (b).
            (3) Composition.--The advisory committee established under 
        this subsection shall be appointed by the Secretary and 
        composed of not more than 15 members with a balanced 
        composition of representatives of the private sector, 
        institutions of higher education, and nonprofit organizations, 
        including--
                    (A) representatives of--
                            (i) institutions of higher education;
                            (ii) companies developing or operating 
                        artificial intelligence systems;
                            (iii) consumers or consumer advocacy 
                        groups;
                            (iv) enabling technology companies; and
                            (v) labor organizations representing the 
                        technology sector; and
                    (B) any other members the Secretary considers to be 
                appropriate.
    (b) Artificial Intelligence Certification Plan.--
            (1) In general.--Not later than 1 year after the date of 
        enactment of this Act, the Secretary shall establish a 3-year 
        implementation plan for the certification of critical-impact 
        artificial intelligence systems.
            (2) Periodic update.--As the Secretary determines 
        appropriate, the Secretary shall update the plan established 
        under paragraph (1).
            (3) Contents.--The plan established under paragraph (1) 
        shall include--
                    (A) a methodology for gathering and using relevant, 
                objective, and available information relating to TEVV;
                    (B) a process for considering whether prescribing 
                certain TEVV standards under subsection (c) for 
                critical-impact artificial intelligence systems is 
                appropriate, necessary, or duplicative of existing 
                international standards;
                    (C) if TEVV standards are considered appropriate, a 
                process for prescribing such standards for critical-
                impact artificial intelligence systems;
                    (D) a mechanism for determining compliance with 
                TEVV standards; and
                    (E) an outline of standards proposed to be issued, 
                including an estimation of the timeline and sequencing 
                of such standards.
            (4) Consultation.--In developing the plan required under 
        paragraph (1), the Secretary shall consult the following:
                    (A) The National Artificial Intelligence Initiative 
                Office.
                    (B) The interagency committee established under 
                section 5103 of the National Artificial Intelligence 
                Initiative Act of 2020 (15 U.S.C. 9413).
                    (C) The National Artificial Intelligence Advisory 
                Committee.
                    (D) Consensus standards issued by nongovernmental 
                standards organizations.
                    (E) The Cybersecurity and Infrastructure Security 
                Agency.
                    (F) Other departments, agencies, and 
                instrumentalities of the Federal Government, as 
                considered appropriate by the Secretary.
            (5) Submission to certification advisory committee.--Upon 
        completing the initial plan required under this subsection and 
        upon completing periodic updates to the plan under paragraph 
        (2), the Secretary shall submit the plan to the advisory 
        committee established under subsection (a) for review.
            (6) Submission to committees of congress.--Upon completing 
        the plan required under this subsection, the Secretary shall 
        submit to the appropriate congressional committees a report 
        containing the plan.
            (7) Limitation.--The Secretary may not issue TEVV standards 
        under subsection (c) until the date of the submission of the 
        plan under paragraphs (5) and (6).
    (c) Standards.--
            (1) Standards.--
                    (A) In general.--The Secretary shall issue TEVV 
                standards for critical-impact artificial intelligence 
                systems.
                    (B) Requirements.--Each standard issued under this 
                subsection shall--
                            (i) be practicable;
                            (ii) meet the need for safe, secure, and 
                        transparent operations of critical-impact 
                        artificial intelligence systems;
                            (iii) with respect to a relevant standard 
                        issued by a nongovernmental standards 
                        organization that is already in place, not 
                        unintentionally contradict that standard;
                            (iv) provide for a mechanism to, not less 
                        frequently than once every 2 years, solicit 
                        public comment and update the standard to 
                        reflect evidence about the utility of risk 
                        mitigation approaches and advancements in 
                        technology and system architecture; and
                            (v) be stated in objective terms.
            (2) Considerations.--In issuing TEVV standards for 
        critical-impact artificial intelligence systems under this 
        subsection, the Secretary shall--
                    (A) consider relevant available information 
                concerning critical-impact artificial intelligence 
                systems, including--
                            (i) transparency reports submitted under 
                        section 203(a);
                            (ii) risk management assessments conducted 
                        under section 206(a); and
                            (iii) any additional information provided 
                        to the Secretary pursuant to section 
                        203(a)(1)(B);
                    (B) consider whether a proposed standard is 
                reasonable, practicable, and appropriate for the 
                particular type of critical-impact artificial 
                intelligence system for which the standard is proposed;
                    (C) consult with stakeholders with expertise in 
                addressing risks and design of artificial intelligence 
                systems and review standards issued by nongovernmental 
                standards organizations;
                    (D) pursuant to paragraph (1)(B)(iii), consider 
                whether adoption of a relevant standard issued by a 
                nongovernmental standards organization as a TEVV 
                standard is the most appropriate action; and
                    (E) consider whether the standard takes into 
                account--
                            (i) transparent, replicable, and objective 
                        assessments of critical-impact artificial 
                        intelligence system risk, structure, 
                        capabilities, and design;
                            (ii) the risk posed to the public by an 
                        applicable critical-impact artificial 
                        intelligence system; and
                            (iii) the diversity of methodologies and 
                        innovative technologies and approaches 
                        available to meet the objectives of the 
                        standard.
            (3) Consultation.--Before finalizing a TEVV standard issued 
        under this subsection, the Secretary shall submit the TEVV 
        standard to the advisory committee established under subsection 
        (a) for review.
            (4) Public comment.--Before issuing any TEVV standard under 
        this subsection, the Secretary shall--
                    (A) publish a notice describing the TEVV standard; 
                and
                    (B) provide an opportunity for public comment 
                pursuant to section 553 of title 5, United States Code.
            (5) Cooperation.--In developing a TEVV standard under this 
        subsection, the Secretary may, as determined appropriate, 
        advise, assist, and cooperate with departments, agencies, and 
        instrumentalities of the Federal Government, States, and other 
        public and private agencies.
            (6) Effective date of standards.--
                    (A) In general.--The Secretary shall specify the 
                effective date of a TEVV standard issued under this 
                subsection in the order issuing the standard.
                    (B) Limitation.--Subject to subparagraph (C), a 
                TEVV standard issued under this subsection may not 
                become effective--
                            (i) during the 180-day period following the 
                        date on which the TEVV standard is issued; and
                            (ii) more than 1 year after the date on 
                        which the TEVV standard is issued.
                    (C) Exception.--Subparagraph (B) shall not apply to 
                the effective date of a TEVV standard issued under this 
                section if the Secretary--
                            (i) finds, for good cause shown, that a 
                        different effective date is in the public 
                        interest; and
                            (ii) publishes the reasons for the finding 
                        under clause (i).
            (7) Rule of construction.--Nothing in this subsection shall 
        be construed to authorize the Secretary to impose any 
        requirements on or take any enforcement actions under this 
        section or section 208 relating to a critical-impact AI 
        organization before a TEVV standard relating to those 
        requirements is prescribed.
    (d) Exemptions.--
            (1) Authority to exempt and procedures.--
                    (A) In general.--The Secretary may exempt, on a 
                temporary basis, a critical-impact artificial 
                intelligence system from a TEVV standard issued under 
                subsection (c) on terms the Secretary considers 
                appropriate.
                    (B) Renewal.--An exemption under subparagraph (A)--
                            (i) may be renewed only on reapplication; 
                        and
                            (ii) shall conform to the requirements of 
                        this paragraph.
                    (C) Proceedings.--
                            (i) In general.--The Secretary may begin a 
                        proceeding to grant an exemption to a critical-
                        impact artificial intelligence system under 
                        this paragraph if the critical-impact AI 
                        organization that deployed the critical-impact 
                        artificial intelligence system applies for an 
                        exemption or a renewal of an exemption.
                            (ii) Notice and comment.--The Secretary 
                        shall publish notice of the application under 
                        clause (i) and provide an opportunity for 
                        public comment under section 553 of title 5, 
                        United States Code.
                            (iii) Filing.--An application for an 
                        exemption or for a renewal of an exemption 
                        under this paragraph shall be filed at such 
                        time and in such manner and contain such 
                        information as the Secretary may require.
                    (D) Actions.--The Secretary may grant an exemption 
                under this paragraph upon finding that--
                            (i) the exemption is consistent with the 
                        public interest and this section; and
                            (ii) the exemption would facilitate the 
                        development or evaluation of a feature or 
                        characteristic of a critical-impact artificial 
                        intelligence system providing a safety and 
                        security level that is not less than the TEVV 
                        standard level.
            (2) Disclosure.--Not later than 30 days after the date on 
        which an application is filed under this subsection, the 
        Secretary may make public information contained in the 
        application or relevant to the application, unless the 
        information concerns or constitutes a trade secret or other 
        confidential information not relevant to the application.
            (3) Notice of decision.--The Secretary shall publish in the 
        Federal Register a notice of each decision granting or denying 
        an exemption under this subsection and the reasons for granting 
        or denying that exemption, including a justification with 
        supporting information for the selected approach.
    (e) Certification of Compliance.--
            (1) In general.--Subject to paragraph (2), with respect to 
        each critical-impact artificial intelligence system of a 
        critical-impact AI organization, the critical-impact AI 
        organization shall certify to the Secretary that the critical-
        impact artificial intelligence system complies with applicable 
        TEVV standards issued under this section.
            (2) Exception.--A critical-impact AI organization may not 
        issue a certification under paragraph (1) if, in exercising 
        reasonable care, the critical-impact AI organization has 
        constructive knowledge that the certification is false or 
        misleading in a material respect.
            (3) Developer obligations.--The developer of a critical-
        impact artificial intelligence system that enters into a 
        contractual or licensing agreement with a critical-impact AI 
        organization shall be subject to the same disclosure 
        obligations as a developer of a critical-impact artificial 
        intelligence system under section 206(c).
    (f) Noncompliance Findings and Enforcement Action.--
            (1) Finding of noncompliance by secretary.--Upon learning 
        that a critical-impact artificial intelligence system deployed 
        by a critical-impact AI organization violates this section, 
        after receiving a report of noncompliance, pursuant to an 
        investigation conducted under section 208(f), or through other 
        means established through TEVV standards pursuant to this 
        section, the Secretary--
                    (A) shall immediately--
                            (i) notify the critical-impact AI 
                        organization of the finding; and
                            (ii) order the critical-impact AI 
                        organization to take remedial action to address 
                        the noncompliance of the artificial 
                        intelligence system; and
                    (B) may, as determined appropriate or necessary by 
                the Secretary, and if the Secretary determines that 
                actions taken by a critical-impact AI organization are 
                insufficient to remedy the noncompliance of the 
                critical-impact AI organization with this section, take 
                enforcement action under section 208.
            (2) Actions by critical-impact ai organization.--If a 
        critical-impact AI organization finds that a critical-impact 
        artificial intelligence system deployed by the critical-impact 
        AI organization is noncompliant with an applicable TEVV 
        standard issued under this section or the critical-impact AI 
        organization is notified of noncompliance by the Secretary 
        under paragraph (1)(A)(i), the critical-impact AI organization 
        shall--
                    (A) without undue delay, notify the Secretary by 
                certified mail or electronic mail of the noncompliance 
                or receipt of the notification of noncompliance;
                    (B) take remedial action to address the 
                noncompliance; and
                    (C) not later than 10 days after the date of the 
                notification or receipt under subparagraph (A), submit 
                to the Secretary a report containing information on--
                            (i) the nature and discovery of the 
                        noncompliant aspect of the critical-impact 
                        artificial intelligence system;
                            (ii) measures taken to remedy such 
                        noncompliance; and
                            (iii) actions taken by the critical-impact 
                        AI organization to address stakeholders 
                        affected by such noncompliance.

SEC. 208. ENFORCEMENT.

    (a) In General.--The Secretary shall take an action described in 
this section--
            (1) upon discovering noncompliance with a provision of this 
        Act by a deployer of a high-impact artificial intelligence 
        system, a critical-impact AI organization, or a developer of a 
        critical-impact artificial intelligence system; and
            (2) if the Secretary determines that actions taken by the 
        deployer of a high-impact artificial intelligence system, a 
        critical-impact AI organization, or the developer of a 
        critical-impact artificial intelligence system are insufficient 
        to remedy the noncompliance.
    (b) Civil Penalties.--
            (1) In general.--The Secretary may impose a penalty 
        described in paragraph (2) on a deployer of a high-impact 
        artificial intelligence system or a critical-impact AI 
        organization for each violation by that entity of this Act or 
        any regulation or order issued under this Act.
            (2) Penalty described.--The penalty described in this 
        paragraph is the greater of--
                    (A) an amount not to exceed $300,000; or
                    (B) an amount that is twice the value of the 
                artificial intelligence system product deployed that is 
                the basis of the violation with respect to which the 
                penalty is imposed.
    (c) Violation With Intent.--
            (1) In general.--If the Secretary determines that a 
        deployer of a high-impact artificial intelligence system or a 
        critical-impact AI organization intentionally violates this Act 
        or any regulation or order issued under this Act, the Secretary 
        may prohibit the critical-impact AI organization or deployer, 
        as applicable, from deploying a critical-impact artificial 
        intelligence system or a high-impact artificial intelligence 
        system.
            (2) In addition.--A prohibition imposed under paragraph 
        (1) shall be in addition to any other civil penalties provided 
        under this Act.
    (d) Factors.--The Secretary may by regulation provide standards for 
establishing levels of civil penalty under this section based upon 
factors, such as the seriousness of the violation, the culpability of 
the violator, and such mitigating factors as the violator's record of 
cooperation with the Secretary in disclosing the violation.
    (e) Civil Action.--
            (1) In general.--Upon referral by the Secretary, the 
        Attorney General may bring a civil action in a United States 
        district court to--
                    (A) enjoin a violation of section 207; or
                    (B) collect a civil penalty upon a finding of 
                noncompliance with this Act.
            (2) Venue.--A civil action may be brought under paragraph 
        (1) in the judicial district in which the violation occurred or 
        the defendant is found, resides, or does business.
            (3) Process.--Process in a civil action under paragraph (1) 
        may be served in any judicial district in which the defendant 
        resides or is found.
    (f) Authority to Investigate.--The Secretary may conduct an 
investigation--
            (1) that may be necessary to enforce this Act or a TEVV 
        standard or regulation prescribed pursuant to this Act; or
            (2) related to a report of noncompliance with this Act from 
        a third party or a deployer or developer of an artificial 
        intelligence system subject to the requirements of this Act, or 
        to noncompliance discovered by the Secretary.
    (g) Rule of Construction.--Nothing in this section shall be 
construed to require a deployer of a critical-impact artificial 
intelligence system to disclose any information, including data or 
algorithms--
            (1) constituting a trade secret or other protected 
        intellectual property right; or
            (2) that is confidential business information.

SEC. 209. DEVELOPER AND DEPLOYER OVERLAP.

    With respect to an entity that is both a deployer and a developer, 
the entity shall be subject to the requirements applicable to deployers 
and developers under this Act.

SEC. 210. ARTIFICIAL INTELLIGENCE CONSUMER EDUCATION.

    (a) Establishment.--Not later than 180 days after the date of 
enactment of this Act, the Secretary shall establish a working group 
relating to responsible education efforts for artificial intelligence 
systems.
    (b) Membership.--
            (1) In general.--The Secretary shall appoint to serve as 
        members of the working group established under this section not 
        more than 15 individuals with expertise relating to artificial 
        intelligence systems, including--
                    (A) representatives of--
                            (i) institutions of higher education;
                            (ii) companies developing or operating 
                        artificial intelligence systems;
                            (iii) consumers or consumer advocacy 
                        groups;
                            (iv) public health organizations;
                            (v) marketing professionals;
                            (vi) entities with national experience 
                        relating to consumer education, including 
                        technology education;
                            (vii) public safety organizations;
                            (viii) rural workforce development 
                        advocates;
                            (ix) enabling technology companies; and
                            (x) nonprofit technology industry trade 
                        associations; and
                    (B) any other members the Secretary considers to be 
                appropriate.
            (2) Compensation.--A member of the working group 
        established under this section shall serve without 
        compensation.
    (c) Duties.--
            (1) In general.--The working group established under this 
        section shall--
                    (A) identify recommended education efforts and 
                programs that may be voluntarily employed by industry 
                to inform--
                            (i) consumers and other stakeholders with 
                        respect to artificial intelligence systems as 
                        those systems--
                                    (I) become available; or
                                    (II) are soon to be made widely 
                                available for public use or 
                                consumption; and
                    (B) submit to Congress, and make available to the 
                public, a report containing the findings and 
                recommendations under subparagraph (A).
            (2) Factors for consideration.--The working group 
        established under this section shall take into consideration 
        topics relating to--
                    (A) the intent, capabilities, and limitations of 
                artificial intelligence systems;
                    (B) use cases of artificial intelligence 
                applications that improve lives of the people of the 
                United States, such as improving government efficiency, 
                filling critical roles, and reducing mundane work 
                tasks;
                    (C) artificial intelligence research breakthroughs;
                    (D) engagement and interaction methods, including 
                how to adequately inform consumers of interaction with 
                an artificial intelligence system;
                    (E) human-machine interfaces;
                    (F) emergency fallback scenarios;
                    (G) operational boundary responsibilities;
                    (H) potential mechanisms that could change function 
                behavior in service;
                    (I) consistent nomenclature and taxonomy for safety 
                features and systems; and
                    (J) digital literacy.
            (3) Consultation.--The Secretary shall consult with the 
        Chair of the Federal Trade Commission with respect to the 
        recommendations of the working group established under this 
        section, as appropriate.
    (d) Termination.--The working group established under this section 
shall terminate on the date that is 2 years after the date of enactment 
of this Act.

SEC. 211. SEVERABILITY.

    If any provision of this title, or an amendment made by this title, 
or the application of such provision or amendment to any person or 
circumstance is held to be unconstitutional, the remainder of this 
title, the amendments made by this title, and the application of such 
provision or amendment to all other persons or circumstances shall not 
be affected thereby.