[Congressional Bills 118th Congress]
[From the U.S. Government Publishing Office]
[S. 4495 Reported in Senate (RS)]

                                                       Calendar No. 697
118th CONGRESS
  2d Session
                                S. 4495

                          [Report No. 118-291]

 To enable safe, responsible, and agile procurement, development, and 
use of artificial intelligence by the Federal Government, and for other 
                               purposes.


_______________________________________________________________________


                   IN THE SENATE OF THE UNITED STATES

                             June 11, 2024

Mr. Peters (for himself and Mr. Tillis) introduced the following bill; 
which was read twice and referred to the Committee on Homeland Security 
                        and Governmental Affairs

                           December 16, 2024

               Reported by Mr. Peters, with an amendment
 [Strike out all after the enacting clause and insert the part printed 
                               in italic]

_______________________________________________________________________

                                 A BILL


 
 To enable safe, responsible, and agile procurement, development, and 
use of artificial intelligence by the Federal Government, and for other 
                               purposes.

    Be it enacted by the Senate and House of Representatives of the 
United States of America in Congress assembled,

<DELETED>SECTION 1. SHORT TITLE.</DELETED>

<DELETED>    This Act may be cited as the ``Promoting Responsible 
Evaluation and Procurement to Advance Readiness for Enterprise-wide 
Deployment for Artificial Intelligence Act'' or the ``PREPARED for AI 
Act''.</DELETED>

<DELETED>SEC. 2. DEFINITIONS.</DELETED>

<DELETED>    In this Act:</DELETED>
        <DELETED>    (1) Adverse incident.--The term ``adverse 
        incident'' means any incident or malfunction of artificial 
        intelligence that directly or indirectly leads to--</DELETED>
                <DELETED>    (A) harm impacting rights or safety, as 
                described in section 7(a)(2)(D);</DELETED>
                <DELETED>    (B) the death of an individual or damage 
                to the health of an individual;</DELETED>
                <DELETED>    (C) material or irreversible disruption of 
                the management and operation of critical 
                infrastructure, as described in section 
                7(a)(2)(D)(i)(II)(cc);</DELETED>
                <DELETED>    (D) material damage to property or the 
                environment;</DELETED>
                <DELETED>    (E) loss of a mission-critical system or 
                equipment;</DELETED>
                <DELETED>    (F) failure of the mission of an 
                agency;</DELETED>
                <DELETED>    (G) the denial of a benefit, payment, or 
                other service to an individual or group of individuals 
                who would have otherwise been eligible;</DELETED>
                <DELETED>    (H) the denial of an employment, contract, 
                grant, or similar opportunity that would have otherwise 
                been offered; or</DELETED>
                <DELETED>    (I) another consequence, as determined by 
                the Director with public notice.</DELETED>
        <DELETED>    (2) Agency.--The term ``agency''--</DELETED>
                <DELETED>    (A) has the meaning given that term in 
                section 3502(1) of title 44, United States Code; 
                and</DELETED>
                <DELETED>    (B) includes each of the independent 
                regulatory agencies described in section 3502(5) of 
                title 44, United States Code.</DELETED>
        <DELETED>    (3) Artificial intelligence.--The term 
        ``artificial intelligence''--</DELETED>
                <DELETED>    (A) has the meaning given that term in 
                section 5002 of the National Artificial Intelligence 
                Initiative Act of 2020 (15 U.S.C. 9401); and</DELETED>
                <DELETED>    (B) includes the artificial systems and 
                techniques described in paragraphs (1) through (5) of 
                section 238(g) of the John S. McCain National Defense 
                Authorization Act for Fiscal Year 2019 (Public Law 115-
                232; 10 U.S.C. 4061 note prec.).</DELETED>
        <DELETED>    (4) Biometric data.--The term ``biometric data'' 
        means data resulting from specific technical processing 
        relating to the unique physical, physiological, or behavioral 
        characteristics of an individual, including facial images, 
        dactyloscopic data, physical movement and gait, breath, voice, 
        DNA, blood type, and expression of emotion, thought, or 
        feeling.</DELETED>
        <DELETED>    (5) Commercial technology.--The term ``commercial 
        technology''--</DELETED>
                <DELETED>    (A) means a technology, process, or 
                method, including research or development; 
                and</DELETED>
                <DELETED>    (B) includes commercial products, 
                commercial services, and other commercial items, as 
                defined in the Federal Acquisition Regulation, 
                including any addition or update thereto by the Federal 
                Acquisition Regulatory Council.</DELETED>
        <DELETED>    (6) Council.--The term ``Council'' means the Chief 
        Artificial Intelligence Officers Council established under 
        section 5(a).</DELETED>
        <DELETED>    (7) Deployer.--The term ``deployer'' means an 
        entity that operates or provides artificial intelligence, 
        whether developed internally or by a third-party 
        developer.</DELETED>
        <DELETED>    (8) Developer.--The term ``developer'' means an 
        entity that designs, codes, produces, or owns artificial 
        intelligence.</DELETED>
        <DELETED>    (9) Director.--The term ``Director'' means the 
        Director of the Office of Management and Budget.</DELETED>
        <DELETED>    (10) Impact assessment.--The term ``impact 
        assessment'' means a structured process for considering the 
        implications of a proposed artificial intelligence use 
        case.</DELETED>
        <DELETED>    (11) Operational design domain.--The term 
        ``operational design domain'' means a set of operating 
        conditions for an automated system.</DELETED>
        <DELETED>    (12) Procure or obtain.--The term ``procure or 
        obtain'' means--</DELETED>
                <DELETED>    (A) to acquire through contract actions 
                awarded pursuant to the Federal Acquisition Regulation, 
                including through interagency agreements, multi-agency 
                use, and purchase card transactions;</DELETED>
                <DELETED>    (B) to acquire through contracts and 
                agreements awarded through other special procurement 
                authorities, including through other transactions and 
                commercial solutions opening authorities; or</DELETED>
                <DELETED>    (C) to obtain through other means, 
                including through open source platforms or 
                freeware.</DELETED>
        <DELETED>    (13) Relevant congressional committees.--The term 
        ``relevant congressional committees'' means the Committee on 
        Homeland Security and Governmental Affairs of the Senate and 
        the Committee on Oversight and Accountability of the House of 
        Representatives.</DELETED>
        <DELETED>    (14) Risk.--The term ``risk'' means the 
        combination of the probability of an occurrence of harm and the 
        potential severity of that harm.</DELETED>
        <DELETED>    (15) Use case.--The term ``use case'' means the 
        ways and context in which artificial intelligence is operated 
        to perform a specific function.</DELETED>

<DELETED>SEC. 3. IMPLEMENTATION OF REQUIREMENTS.</DELETED>

<DELETED>    (a) Agency Implementation.--Not later than 1 year after 
the date of enactment of this Act, the Director shall ensure that 
agencies have implemented the requirements of this Act.</DELETED>
<DELETED>    (b) Annual Briefing.--Not later than 180 days after the 
date of enactment of this Act, and annually thereafter, the Director 
shall brief the relevant congressional committees on implementation of 
this Act and related considerations.</DELETED>

<DELETED>SEC. 4. PROCUREMENT OF ARTIFICIAL INTELLIGENCE.</DELETED>

<DELETED>    (a) Government-Wide Requirements.--</DELETED>
        <DELETED>    (1) In general.--Not later than 1 year after the 
        date of enactment of this Act, the Federal Acquisition 
        Regulatory Council shall review Federal Acquisition Regulation 
        acquisition planning, source selection, and other requirements 
        and update the Federal Acquisition Regulation as needed to 
        ensure that agency procurement of artificial intelligence 
        includes--</DELETED>
                <DELETED>    (A) a requirement to address the outcomes 
                of the risk evaluation and impact assessments required 
                under section 8(a);</DELETED>
                <DELETED>    (B) a requirement for consultation with an 
                interdisciplinary team of agency experts prior to, and 
                throughout, as necessary, procuring or obtaining 
                artificial intelligence; and</DELETED>
                <DELETED>    (C) any other considerations determined 
                relevant by the Federal Acquisition Regulatory 
                Council.</DELETED>
        <DELETED>    (2) Interdisciplinary team of experts.--The 
        interdisciplinary team of experts described in paragraph (1)(B) 
        may--</DELETED>
                <DELETED>    (A) vary depending on the use case and the 
                risks determined to be associated with the use case; 
                and</DELETED>
                <DELETED>    (B) include technologists, information 
                security personnel, domain experts, privacy officers, 
                data officers, civil rights and civil liberties 
                officers, contracting officials, legal counsel, 
                customer experience professionals, and 
                others.</DELETED>
        <DELETED>    (3) Acquisition planning.--The acquisition 
        planning updates described in paragraph (1) shall include 
        considerations for, at a minimum, as appropriate depending on 
        the use case--</DELETED>
                <DELETED>    (A) data ownership and privacy;</DELETED>
                <DELETED>    (B) data and information 
                security;</DELETED>
                <DELETED>    (C) interoperability 
                requirements;</DELETED>
                <DELETED>    (D) data and model assessment 
                processes;</DELETED>
                <DELETED>    (E) scope of use;</DELETED>
                <DELETED>    (F) ongoing monitoring 
                techniques;</DELETED>
                <DELETED>    (G) type and scope of artificial 
                intelligence audits;</DELETED>
                <DELETED>    (H) environmental impact; and</DELETED>
                <DELETED>    (I) safety and security risk mitigation 
                techniques, including a plan for how adverse incident 
                reporting can be incorporated, pursuant to section 
                5(g).</DELETED>
<DELETED>    (b) Requirements for High Risk Use Cases.--</DELETED>
        <DELETED>    (1) In general.--</DELETED>
                <DELETED>    (A) Establishment.--Beginning on the date 
                that is 1 year after the date of enactment of this Act, 
                the head of an agency may not procure or obtain 
                artificial intelligence for a high risk use case, as 
                defined in section 7(a)(2)(D), prior to establishing 
                and incorporating certain terms into relevant 
                contracts, agreements, and employee guidelines for 
                artificial intelligence, including--</DELETED>
                        <DELETED>    (i) a requirement that the use of 
                        the artificial intelligence be limited to its 
                        operational design domain;</DELETED>
                        <DELETED>    (ii) requirements for safety, 
                        security, and trustworthiness, including--
                        </DELETED>
                                <DELETED>    (I) a reporting mechanism 
                                through which agency personnel are 
                                notified by the deployer of any adverse 
                                incident;</DELETED>
                                <DELETED>    (II) a requirement, in 
                                accordance with section 5(g), that 
                                agency personnel receive from the 
                                deployer a notification of any adverse 
                                incident, an explanation of the cause 
                                of the adverse incident, and any data 
                                directly connected to the adverse 
                                incident in order to address and 
                                mitigate the harm; and</DELETED>
                                <DELETED>    (III) that the agency has 
                                the right to temporarily or permanently 
                                suspend use of the artificial 
                                intelligence if--</DELETED>
                                        <DELETED>    (aa) the risks of 
                                        the artificial intelligence to 
                                        rights or safety become 
                                        unacceptable, as determined 
                                        under the agency risk 
                                        classification system pursuant 
                                        to section 7; or</DELETED>
                                        <DELETED>    (bb) on or after 
                                        the date that is 180 days after 
                                        the publication of the most 
                                        recently updated version of the 
                                        framework developed and updated 
                                        pursuant to section 22A(c) of 
                                        the National Institute of 
                                        Standards and Technology Act 
                                        (15 U.S.C. 278h-1(c)), the 
                                        deployer is found not to comply 
                                        with such most recent 
                                        update;</DELETED>
                        <DELETED>    (iii) requirements for quality, 
                        relevance, sourcing, and ownership of data, as 
                        appropriate by use case, and applicable unless 
                        the head of the agency waives such requirements 
                        in writing, including--</DELETED>
                                <DELETED>    (I) retention of rights to 
                                Government data and any modification to 
                                the data, including to protect the data 
                                from unauthorized disclosure and use to 
                                subsequently train or improve the 
                                functionality of commercial products 
                                offered by the deployer, any relevant 
                                developers, or others; and</DELETED>
                                <DELETED>    (II) a requirement that 
                                the deployer and any relevant 
                                developers or other parties isolate 
                                Government data from all other data, 
                                through physical separation, electronic 
                                separation via secure copies with 
                                strict access controls, or other 
                                computational isolation 
                                mechanisms;</DELETED>
                        <DELETED>    (iv) requirements for evaluation 
                        and testing of artificial intelligence based on 
                        use case, to be performed on an ongoing basis; 
                        and</DELETED>
                        <DELETED>    (v) requirements that the deployer 
                        and any relevant developers provide 
                        documentation, as determined necessary and 
                        requested by the agency, in accordance with 
                        section 8(b).</DELETED>
                <DELETED>    (B) Review.--The Senior Procurement 
                Executive, in coordination with the Chief Artificial 
                Intelligence Officer, shall consult with technologists, 
                information security personnel, domain experts, privacy 
                officers, data officers, civil rights and civil 
                liberties officers, contracting officials, legal 
                counsel, customer experience professionals, and other 
                relevant agency officials to review the requirements 
                described in clauses (i) through (v) of subparagraph 
                (A) and determine whether it may be necessary to 
                incorporate additional requirements into relevant 
                contracts or agreements.</DELETED>
                <DELETED>    (C) Regulation.--The Federal Acquisition 
                Regulatory Council shall revise the Federal Acquisition 
                Regulation as necessary to implement the requirements 
                of this subsection.</DELETED>
        <DELETED>    (2) Rules of construction.--This Act shall 
        supersede any requirements that conflict with this Act under 
        the guidance required to be produced by the Director pursuant 
        to section 7224(d) of the Advancing American AI Act (40 U.S.C. 
        11301 note).</DELETED>

<DELETED>SEC. 5. INTERAGENCY GOVERNANCE OF ARTIFICIAL 
              INTELLIGENCE.</DELETED>

<DELETED>    (a) Chief Artificial Intelligence Officers Council.--Not 
later than 60 days after the date of enactment of this Act, the 
Director shall establish a Chief Artificial Intelligence Officers 
Council.</DELETED>
<DELETED>    (b) Duties.--The duties of the Council shall include--
</DELETED>
        <DELETED>    (1) coordinating agency development and use of 
        artificial intelligence in agency programs and operations, 
        including practices relating to the design, operation, risk 
        management, and performance of artificial 
        intelligence;</DELETED>
        <DELETED>    (2) sharing experiences, ideas, best practices, 
        and innovative approaches relating to artificial intelligence; 
        and</DELETED>
        <DELETED>    (3) assisting the Director, as necessary, with 
        respect to--</DELETED>
                <DELETED>    (A) the identification, development, and 
                coordination of multi-agency projects and other 
                initiatives, including initiatives to improve 
                Government performance;</DELETED>
                <DELETED>    (B) the management of risks relating to 
                developing, obtaining, or using artificial 
                intelligence, including by developing a common template 
                to guide agency Chief Artificial Intelligence Officers 
                in implementing a risk classification system that may 
                incorporate best practices, such as those from--
                </DELETED>
                        <DELETED>    (i) the most recently updated 
                        version of the framework developed and updated 
                        pursuant to section 22A(c) of the National 
                        Institute of Standards and Technology Act (15 
                        U.S.C. 278h-1(c)); and</DELETED>
                        <DELETED>    (ii) the report published by the 
                        Government Accountability Office entitled 
                        ``Artificial Intelligence: An Accountability 
                        Framework for Federal Agencies and Other 
                        Entities'' (GAO-21-519SP), published on June 
                        30, 2021;</DELETED>
                <DELETED>    (C) promoting the development and use of 
                efficient, effective, common, shared, or other 
                approaches to key processes that improve the delivery 
                of services for the public; and</DELETED>
                <DELETED>    (D) soliciting and providing perspectives 
                on matters of concern, including from and to--
                </DELETED>
                        <DELETED>    (i) interagency 
                        councils;</DELETED>
                        <DELETED>    (ii) Federal Government 
                        entities;</DELETED>
                        <DELETED>    (iii) private sector, public 
                        sector, nonprofit, and academic 
                        experts;</DELETED>
                        <DELETED>    (iv) State, local, Tribal, 
                        territorial, and international governments; 
                        and</DELETED>
                        <DELETED>    (v) other individuals and 
                        entities, as determined relevant by the 
                        Council.</DELETED>
<DELETED>    (c) Membership of the Council.--</DELETED>
        <DELETED>    (1) Co-chairs.--The Council shall have 2 co-
        chairs, who shall be--</DELETED>
                <DELETED>    (A) the Director; and</DELETED>
                <DELETED>    (B) an individual selected by a majority 
                of the members of the Council.</DELETED>
        <DELETED>    (2) Members.--Other members of the Council shall 
        include--</DELETED>
                <DELETED>    (A) the Chief Artificial Intelligence 
                Officer of each agency; and</DELETED>
                <DELETED>    (B) the senior official for artificial 
                intelligence of the Office of Management and 
                Budget.</DELETED>
<DELETED>    (d) Standing Committees; Working Groups.--The Council 
shall have the authority to establish standing committees, including an 
executive committee, and working groups.</DELETED>
<DELETED>    (e) Council Staff.--The Council may enter into an 
interagency agreement with the Administrator of General Services for 
shared services for the purpose of staffing the Council.</DELETED>
<DELETED>    (f) Development, Adaptation, and Documentation.--
</DELETED>
        <DELETED>    (1) Guidance.--Not later than 90 days after the 
        date of enactment of this Act, the Director, in consultation 
        with the Council, shall issue guidance relating to--</DELETED>
                <DELETED>    (A) developments in artificial 
                intelligence and implications for management of agency 
                programs;</DELETED>
                <DELETED>    (B) the agency impact assessments 
                described in section 8(a) and other relevant impact 
                assessments as determined appropriate by the Director, 
                including the appropriateness of substituting pre-
                existing assessments, including privacy impact 
                assessments, for purposes of an artificial intelligence 
                impact assessment;</DELETED>
                <DELETED>    (C) documentation for agencies to require 
                from deployers of artificial intelligence;</DELETED>
                <DELETED>    (D) a model template for the explanations 
                for use case risk classifications that each agency must 
                provide under section 8(a)(4); and</DELETED>
                <DELETED>    (E) other matters, as determined relevant 
                by the Director.</DELETED>
        <DELETED>    (2) Annual review.--The Director, in consultation 
        with the Council, shall periodically, but not less frequently 
        than annually, review and update, as needed, the guidance 
        issued under paragraph (1).</DELETED>
<DELETED>    (g) Incident Reporting.--</DELETED>
        <DELETED>    (1) In general.--Not later than 180 days after the 
        date of enactment of this Act, the Director, in consultation 
        with the Council, shall develop procedures for ensuring that--
        </DELETED>
                <DELETED>    (A) adverse incidents involving artificial 
                intelligence procured, obtained, or used by agencies 
                are reported promptly to the agency by the developer or 
                deployer, or to the developer or deployer by the 
                agency, whichever first becomes aware of the adverse 
                incident; and</DELETED>
                <DELETED>    (B) information relating to an adverse 
                incident described in subparagraph (A) is appropriately 
                shared among agencies.</DELETED>
        <DELETED>    (2) Single report.--Adverse incidents also 
        qualifying for incident reporting under section 3554 of title 
        44, United States Code, or other relevant laws or policies, may 
        be reported under such other reporting requirement and are not 
        required to be additionally reported under this 
        subsection.</DELETED>
        <DELETED>    (3) Notice to deployer.--</DELETED>
                <DELETED>    (A) In general.--If an adverse incident is 
                discovered by an agency, the agency shall report the 
                adverse incident to the deployer and the deployer, in 
                consultation with any relevant developers, shall take 
                immediate action to resolve the adverse incident and 
                mitigate the potential for future adverse 
                incidents.</DELETED>
                <DELETED>    (B) Waiver.--</DELETED>
                        <DELETED>    (i) In general.--Unless otherwise 
                        required by law, the head of an agency may 
                        issue a written waiver that waives the 
                        applicability of some or all of the 
                        requirements under subparagraph (A), with 
                        respect to a specific adverse 
                        incident.</DELETED>
                        <DELETED>    (ii) Written waiver contents.--A 
                        written waiver under clause (i) shall include 
                        justification for the waiver.</DELETED>
                        <DELETED>    (iii) Notice.--The head of an 
                        agency shall forward advance notice of any 
                        waiver under this subparagraph to the Director, 
                        or the designee of the Director.</DELETED>

<DELETED>SEC. 6. AGENCY GOVERNANCE OF ARTIFICIAL 
              INTELLIGENCE.</DELETED>

<DELETED>    (a) In General.--The head of an agency shall--</DELETED>
        <DELETED>    (1) ensure the responsible adoption of artificial 
        intelligence, including by--</DELETED>
                <DELETED>    (A) articulating a clear vision of what 
                the head of the agency wants to achieve by developing, 
                procuring or obtaining, or using artificial 
                intelligence;</DELETED>
                <DELETED>    (B) ensuring the agency develops, 
                procures, obtains, or uses artificial intelligence that 
                follows the principles of trustworthy artificial 
                intelligence in government set forth under Executive 
                Order 13960 (85 Fed. Reg. 78939; relating to promoting 
                the use of trustworthy artificial intelligence in 
                Federal Government) and the principles for safe, 
                secure, and trustworthy artificial intelligence in 
                government set forth under section 2 of Executive Order 
                14110 (88 Fed. Reg. 75191; relating to the safe, 
                secure, and trustworthy development and use of 
                artificial intelligence);</DELETED>
                <DELETED>    (C) testing, validating, and monitoring 
                artificial intelligence and the use case-specific 
                performance of artificial intelligence, among other 
                things, to--</DELETED>
                        <DELETED>    (i) ensure all use of artificial 
                        intelligence is appropriate to and improves the 
                        effectiveness of the mission of the 
                        agency;</DELETED>
                        <DELETED>    (ii) guard against bias in data 
                        collection, use, and dissemination;</DELETED>
                        <DELETED>    (iii) ensure reliability, 
                        fairness, and transparency; and</DELETED>
                        <DELETED>    (iv) protect against impermissible 
                        discrimination;</DELETED>
                <DELETED>    (D) developing, adopting, and applying a 
                suitable enterprise risk management framework approach 
                to artificial intelligence, incorporating the 
                requirements under this Act;</DELETED>
                <DELETED>    (E) continuing to develop a workforce 
                that--</DELETED>
                        <DELETED>    (i) understands the strengths and 
                        weaknesses of artificial intelligence, 
                        including artificial intelligence embedded in 
                        agency data systems and operations;</DELETED>
                        <DELETED>    (ii) is aware of the benefits and 
                        risks of artificial intelligence;</DELETED>
                        <DELETED>    (iii) is able to provide human 
                        oversight for the design, implementation, and 
                        end uses of artificial intelligence; 
                        and</DELETED>
                        <DELETED>    (iv) is able to review and provide 
                        redress for erroneous decisions made in the 
                        course of artificial intelligence-assisted 
                        processes; and</DELETED>
                <DELETED>    (F) ensuring implementation of the 
                requirements under section 8(a) for the identification 
                and evaluation of risks posed by the deployment of 
                artificial intelligence in agency use cases;</DELETED>
        <DELETED>    (2) designate a Chief Artificial Intelligence 
        Officer, whose duties shall include--</DELETED>
                <DELETED>    (A) ensuring appropriate use of artificial 
                intelligence;</DELETED>
                <DELETED>    (B) coordinating agency use of artificial 
                intelligence;</DELETED>
                <DELETED>    (C) promoting artificial intelligence 
                innovation;</DELETED>
                <DELETED>    (D) managing the risks of use of 
                artificial intelligence;</DELETED>
                <DELETED>    (E) supporting the head of the agency with 
                developing the risk classification system required 
                under section 7(a) and complying with other 
                requirements of this Act; and</DELETED>
                <DELETED>    (F) supporting agency personnel leading 
                the procurement and deployment of artificial 
                intelligence to comply with the requirements under this 
                Act; and</DELETED>
        <DELETED>    (3) form and convene an Artificial Intelligence 
        Governance Board, as described in subsection (b), which shall 
        coordinate and govern artificial intelligence issues across the 
        agency.</DELETED>
<DELETED>    (b) Artificial Intelligence Governance Board.--</DELETED>
        <DELETED>    (1) Leadership.--Each Artificial Intelligence 
        Governance Board (referred to in this subsection as ``Board'') 
        of an agency shall be chaired by the Deputy Secretary of the 
        agency or equivalent official and vice-chaired by the Chief 
        Artificial Intelligence Officer of the agency. Neither the 
        chair nor the vice-chair may assign or delegate these roles to 
        other officials.</DELETED>
        <DELETED>    (2) Representation.--The Board shall, at a 
        minimum, include representatives consisting of senior agency 
        officials from operational components, if relevant, program 
        officials responsible for implementing artificial intelligence, 
        and officials responsible for information technology, data, 
        privacy, civil rights and civil liberties, human capital, 
        procurement, finance, legal counsel, and customer 
        experience.</DELETED>
        <DELETED>    (3) Existing bodies.--An agency may rely on an 
        existing governance body to fulfill the requirements of this 
        subsection if the body satisfies or is adjusted to satisfy the 
        leadership and representation requirements of paragraphs (1) 
        and (2).</DELETED>
<DELETED>    (c) Designation of Chief Artificial Intelligence 
Officer.--The head of an agency may designate as Chief Artificial 
Intelligence Officer an existing official within the agency, including 
the Chief Technology Officer, Chief Data Officer, Chief Information 
Officer, or other official with relevant or complementary authorities 
and responsibilities, if such existing official has expertise in 
artificial intelligence and meets the requirements of this 
section.</DELETED>
<DELETED>    (d) Effective Date.--Beginning on the date that is 120 
days after the date of enactment of this Act, an agency shall not 
develop or procure or obtain artificial intelligence prior to 
completing the requirements under paragraphs (2) and (3) of subsection 
(a).</DELETED>

<DELETED>SEC. 7. AGENCY RISK CLASSIFICATION OF ARTIFICIAL INTELLIGENCE 
              USE CASES FOR PROCUREMENT AND USE.</DELETED>

<DELETED>    (a) Risk Classification System.--</DELETED>
        <DELETED>    (1) Development.--The head of each agency shall be 
        responsible for developing, not later than 1 year after the 
        date of enactment of this Act, a risk classification system for 
        agency use cases of artificial intelligence, without regard to 
        whether artificial intelligence is embedded in a commercial 
        product.</DELETED>
        <DELETED>    (2) Requirements.--</DELETED>
                <DELETED>    (A) Risk classifications.--The risk 
                classification system under paragraph (1) shall, at a 
                minimum, include unacceptable, high, medium, and low 
                risk classifications.</DELETED>
                <DELETED>    (B) Factors for risk classifications.--In 
                developing the risk classifications under subparagraph 
                (A), the head of the agency shall consider the 
                following:</DELETED>
                        <DELETED>    (i) Mission and operation.--The 
                        mission and operations of the agency.</DELETED>
                        <DELETED>    (ii) Scale.--The seriousness and 
                        probability of adverse impacts.</DELETED>
                        <DELETED>    (iii) Scope.--The breadth of 
                        application, such as the number of individuals 
                        affected.</DELETED>
                        <DELETED>    (iv) Optionality.--The degree of 
                        choice that an individual, group, or entity has 
                        as to whether to be subject to the effects of 
                        artificial intelligence.</DELETED>
                        <DELETED>    (v) Standards and frameworks.--
                        Standards and frameworks for risk 
                        classification of use cases that support 
                        democratic values, such as the standards and 
                        frameworks developed by the National Institute 
                        of Standards and Technology, the International 
                        Organization for Standardization, and the 
                        Institute of Electrical and Electronics 
                        Engineers.</DELETED>
                <DELETED>    (C) Classification variance.--</DELETED>
                        <DELETED>    (i) Certain lower risk use 
                        cases.--The risk classification system may 
                        allow for an operational use case to be 
                        categorized under a lower risk classification, 
                        even if the use case is a part of a larger area 
                        of the mission of the agency that is 
                        categorized under a higher risk 
                        classification.</DELETED>
                        <DELETED>    (ii) Changes based on testing or 
                        new information.--The risk classification 
                        system may allow for changes to the risk 
                        classification of an artificial intelligence 
                        use case based on the results from procurement 
                        process testing or other information that 
                        becomes available.</DELETED>
                <DELETED>    (D) High risk use cases.--</DELETED>
                        <DELETED>    (i) In general.--High risk 
                        classification shall, at a minimum, apply to 
                        use cases for which the outputs of the system--
                        </DELETED>
                                <DELETED>    (I) are presumed to serve 
                                as a principal basis for a decision or 
                                action that has a legal, material, 
                                binding, or similarly significant 
                                effect, with respect to an individual 
                                or community, on--</DELETED>
                                        <DELETED>    (aa) civil rights, 
                                        civil liberties, or 
                                        privacy;</DELETED>
                                        <DELETED>    (bb) equal 
                                        opportunities, including in 
                                        access to education, housing, 
                                        insurance, credit, employment, 
                                        and other programs where civil 
                                        rights and equal opportunity 
                                        protections apply; or</DELETED>
                                        <DELETED>    (cc) access to or 
                                        the ability to apply for 
                                        critical government resources 
                                        or services, including 
                                        healthcare, financial services, 
                                        public housing, social 
                                        services, transportation, and 
                                        essential goods and services; 
                                        or</DELETED>
                                <DELETED>    (II) are presumed to serve 
                                as a principal basis for a decision 
                                that substantially impacts the safety 
                                of, or has the potential to 
                                substantially impact the safety of--
                                </DELETED>
                                        <DELETED>    (aa) the well-
                                        being of an individual or 
                                        community, including loss of 
                                        life, serious injury, bodily 
                                        harm, biological or chemical 
                                        harms, occupational hazards, 
                                        harassment or abuse, or mental 
                                        health;</DELETED>
                                        <DELETED>    (bb) the 
                                        environment, including 
                                        irreversible or significant 
                                        environmental damage;</DELETED>
                                        <DELETED>    (cc) critical 
                                        infrastructure, including the 
                                        critical infrastructure sectors 
                                        defined in Presidential Policy 
                                        Directive 21, entitled 
                                        ``Critical Infrastructure 
                                        Security and Resilience'' 
                                        (dated February 12, 2013) (or 
                                        any successor directive) and 
                                        the infrastructure for voting 
                                        and protecting the integrity of 
                                        elections; or</DELETED>
                                        <DELETED>    (dd) strategic 
                                        assets or resources, including 
                                        high-value property and 
                                        information marked as sensitive 
                                        or classified by the Federal 
                                        Government and controlled 
                                        unclassified 
                                        information.</DELETED>
                        <DELETED>    (ii) Additions.--The head of each 
                        agency shall add other use cases to the high 
                        risk category, as appropriate.</DELETED>
                <DELETED>    (E) Medium and low risk use cases.--If a 
                use case is not high risk, as described in subparagraph 
                (D), the head of an agency shall have the discretion to 
                define the risk classification.</DELETED>
                <DELETED>    (F) Unacceptable risk.--If an agency 
                identifies, through testing, an adverse incident, or 
                other means or information available to the agency, 
                that a use or outcome of an artificial intelligence use 
                case is a clear threat to human safety or rights that 
                cannot be adequately or practicably mitigated, the 
                agency shall identify the risk classification of that 
                use case as unacceptable risk.</DELETED>
        <DELETED>    (3) Transparency.--The risk classification system 
        under paragraph (1) shall be published on a public-facing 
        website, with the methodology used to determine different risk 
        levels and examples of particular use cases for each category, 
        in language that is easy to understand for the people affected 
        by the decisions and outcomes of artificial 
        intelligence.</DELETED>
<DELETED>    (b) Effective Date.--This section shall take effect on the 
date that is 180 days after the date of enactment of this Act, on and 
after which an agency that has not complied with the requirements of 
this section may not develop, procure or obtain, or use artificial 
intelligence until the agency complies with such 
requirements.</DELETED>

<DELETED>SEC. 8. AGENCY REQUIREMENTS FOR USE OF ARTIFICIAL 
              INTELLIGENCE.</DELETED>

<DELETED>    (a) Risk Evaluation Process.--</DELETED>
        <DELETED>    (1) In general.--Not later than 180 days after the 
        effective date in section 7(b), the Chief Artificial 
        Intelligence Officer of each agency, in coordination with the 
        Artificial Intelligence Governance Board of the agency, shall 
        develop and implement a process for the identification and 
        evaluation of risks posed by the deployment of artificial 
        intelligence in agency use cases to ensure an interdisciplinary 
        and comprehensive evaluation of potential risks and 
        determination of risk classifications under such 
        section.</DELETED>
        <DELETED>    (2) Process requirements.--The risk evaluation 
        process described in paragraph (1) shall include, for each 
        artificial intelligence use case--</DELETED>
                <DELETED>    (A) identification of the risks and 
                benefits of the artificial intelligence use 
                case;</DELETED>
                <DELETED>    (B) a plan to periodically review the 
                artificial intelligence use case to examine whether 
                risks have changed or evolved and to update the 
                corresponding risk classification as 
                necessary;</DELETED>
                <DELETED>    (C) a determination of the need for 
                targeted impact assessments to further evaluate 
                specific risks of the artificial intelligence use case 
                within certain impact areas, which shall include 
                privacy, security, civil rights and civil liberties, 
                accessibility, environmental impact, health and safety, 
                and any other impact area relating to high risk 
                classification under section 7(a)(2)(D) as determined 
                appropriate by the Chief Artificial Intelligence 
                Officer; and</DELETED>
                <DELETED>    (D) if appropriate, consultation with and 
                feedback from affected communities and the public on 
                the design, development, and use of the artificial 
                intelligence use case.</DELETED>
        <DELETED>    (3) Review.--</DELETED>
                <DELETED>    (A) Existing use cases.--With respect to 
                each use case that an agency is planning, developing, 
                or using on the date of enactment of this Act, not 
                later than 1 year after such date, the Chief Artificial 
                Intelligence Officer of the agency shall identify and 
                review the use case to determine the risk 
                classification of the use case, pursuant to the risk 
                evaluation process under paragraphs (1) and 
                (2).</DELETED>
                <DELETED>    (B) New use cases.--</DELETED>
                        <DELETED>    (i) In general.--Beginning on the 
                        date of enactment of this Act, the Chief 
                        Artificial Intelligence Officer of an agency 
                        shall identify and review any artificial 
                        intelligence use case that the agency will 
                        plan, develop, or use and determine the risk 
                        classification of the use case, pursuant to the 
                        risk evaluation process under paragraphs (1) 
                        and (2), before procuring or obtaining, 
                        developing, or using the use case.</DELETED>
                        <DELETED>    (ii) Development.--For any use 
                        case described in clause (i) that is developed 
                        by the agency, the agency shall perform an 
                        additional risk evaluation prior to deployment 
                        in a production or operational 
                        environment.</DELETED>
        <DELETED>    (4) Rationale for risk classification.--Risk 
        classification of an artificial intelligence use case shall be 
        accompanied by an explanation from the agency of how the risk 
        classification was determined, which shall be included in the 
        artificial intelligence use case inventory of the agency and 
        written with reference to the model template developed by the 
        Director under section 5(f)(1)(D).</DELETED>
<DELETED>    (b) Model Card Documentation Requirements.--</DELETED>
        <DELETED>    (1) In general.--Beginning on the date that is 180 
        days after the date of enactment of this Act, at any time while 
        developing, procuring or obtaining, or using artificial 
        intelligence, an agency shall require, as determined necessary 
        by the Chief Artificial Intelligence Officer, that the deployer 
        and any relevant developer submit documentation about the 
        artificial intelligence, including--</DELETED>
                <DELETED>    (A) a description of the architecture of 
                the artificial intelligence, highlighting key 
                parameters, design choices, and the machine learning 
                techniques employed;</DELETED>
                <DELETED>    (B) information on the training of the 
                artificial intelligence, including computational 
                resources utilized;</DELETED>
                <DELETED>    (C) an account of the source of the data, 
                size of the data, any licenses under which the data is 
                used, collection methods and dates of the data, and any 
                preprocessing of the data undertaken, including human 
                or automated refinement, review, or feedback;</DELETED>
                <DELETED>    (D) information on the management and 
                collection of personal data, outlining data protection 
                and privacy measures adhered to in compliance with 
                applicable laws;</DELETED>
                <DELETED>    (E) a description of the methodologies 
                used to evaluate the performance of the artificial 
                intelligence, including key metrics and outcomes; 
                and</DELETED>
                <DELETED>    (F) an estimate of the energy consumed by 
                the artificial intelligence during training and 
                inference.</DELETED>
        <DELETED>    (2) Additional documentation for medium and high 
        risk use cases.--Beginning on the date that is 270 days after 
        the date of enactment of this Act, with respect to use cases 
        categorized as medium risk or higher, an agency shall require 
        that the deployer of artificial intelligence, in consultation 
        with any relevant developers, submit (including proactively, as 
        material updates of the artificial intelligence occur) the 
        following documentation:</DELETED>
                <DELETED>    (A) Model architecture.--Detailed 
                information on the model or models used in the 
                artificial intelligence, including model date, model 
                version, model type, key parameters (including number 
                of parameters), interpretability measures, and 
                maintenance and updating policies.</DELETED>
                <DELETED>    (B) Advanced training details.--A detailed 
                description of training algorithms, methodologies, 
                optimization techniques, computational resources, and 
                the environmental impact of the training 
                process.</DELETED>
                <DELETED>    (C) Data provenance and integrity.--A 
                detailed description of the training and testing data, 
                including the origins, collection methods, 
                preprocessing steps, and demographic distribution of 
                the data, and known discriminatory impacts and 
                mitigation measures with respect to the data.</DELETED>
                <DELETED>    (D) Privacy and data protection.--Detailed 
                information on data handling practices, including 
                compliance with legal standards, anonymization 
                techniques, data security measures, and whether and how 
                permission for use of data is obtained.</DELETED>
                <DELETED>    (E) Rigorous testing and oversight.--A 
                comprehensive disclosure of performance evaluation 
                metrics, including accuracy, precision, recall, and 
                fairness metrics, and test dataset results.</DELETED>
                <DELETED>    (F) NIST artificial intelligence risk 
                management framework.--Documentation demonstrating 
                compliance with the most recently updated version of 
                the framework developed and updated pursuant to section 
                22A(c) of the National Institute of Standards and 
                Technology Act (15 U.S.C. 278h-1(c)).</DELETED>
        <DELETED>    (3) Review of requirements.--Not later than 1 year 
        after the date of enactment of this Act, the Comptroller 
        General shall conduct a review of the documentation 
        requirements under paragraphs (1) and (2) to--</DELETED>
                <DELETED>    (A) examine whether agencies and deployers 
                are complying with the requirements under those 
                paragraphs; and</DELETED>
                <DELETED>    (B) make findings and recommendations to 
                further assist in ensuring safe, responsible, and 
                efficient artificial intelligence.</DELETED>
        <DELETED>    (4) Security of provided documentation.--The head 
        of each agency shall ensure that appropriate security measures 
        and access controls are in place to protect documentation 
        provided pursuant to this section.</DELETED>
<DELETED>    (c) Information and Use Protections.--Information provided 
to an agency under subsection (b) is exempt from disclosure under 
section 552 of title 5, United States Code (commonly known as the 
``Freedom of Information Act''), and may be used by the agency, 
consistent with otherwise applicable provisions of Federal law, solely 
for--</DELETED>
        <DELETED>    (1) assessing the ability of artificial 
        intelligence to achieve the requirements and objectives of the 
        agency and the requirements of this Act; and</DELETED>
        <DELETED>    (2) identifying--</DELETED>
                <DELETED>    (A) adverse effects of artificial 
                intelligence on the rights or safety factors identified 
                in section 7(a)(2)(D);</DELETED>
                <DELETED>    (B) cyber threats, including the sources 
                of the cyber threats; and</DELETED>
                <DELETED>    (C) security vulnerabilities.</DELETED>
<DELETED>    (d) Pre-Deployment Requirements for High Risk Use Cases.--
Beginning on the date that is 1 year after the date of enactment of 
this Act, the head of an agency shall not deploy or use artificial 
intelligence for a high risk use case prior to--</DELETED>
        <DELETED>    (1) collecting documentation of the artificial 
        intelligence, source, and use case in agency software and use 
        case inventories;</DELETED>
        <DELETED>    (2) testing of the artificial intelligence in an 
        operational, real-world setting with privacy, civil rights, and 
        civil liberties safeguards to ensure the artificial 
        intelligence is capable of meeting its objectives;</DELETED>
        <DELETED>    (3) establishing appropriate agency rules of 
        behavior for the use case, including required human involvement 
        in, and user-facing explainability of, decisions made in whole 
        or part by the artificial intelligence, as determined by the 
        Chief Artificial Intelligence Officer in coordination with the 
        program manager or equivalent agency personnel; and</DELETED>
        <DELETED>    (4) establishing appropriate agency training 
        programs, including documentation of completion of training 
        prior to use of artificial intelligence, that educate agency 
        personnel involved with the application of artificial 
        intelligence in high risk use cases on the capacities and 
        limitations of artificial intelligence, including training on--
        </DELETED>
                <DELETED>    (A) monitoring the operation of artificial 
                intelligence in high risk use cases to detect and 
                address anomalies, dysfunctions, and unexpected 
                performance in a timely manner to mitigate 
                harm;</DELETED>
                <DELETED>    (B) lessening reliance or over-reliance on 
                the output produced by artificial intelligence in a 
                high risk use case, particularly if artificial 
                intelligence is used to make decisions impacting 
                individuals;</DELETED>
                <DELETED>    (C) accurately interpreting the output of 
                artificial intelligence, particularly considering the 
                characteristics of the system and the interpretation 
                tools and methods available;</DELETED>
                <DELETED>    (D) when not to use, disregard, override, 
                or reverse the output of artificial 
                intelligence;</DELETED>
                <DELETED>    (E) how to intervene or interrupt the 
                operation of artificial intelligence;</DELETED>
                <DELETED>    (F) limiting the use of artificial 
                intelligence to its operational design domain; 
                and</DELETED>
                <DELETED>    (G) procedures for reporting incidents 
                involving misuse, faulty results, safety and security 
                issues, and other problems with use of artificial 
                intelligence that does not function as 
                intended.</DELETED>
<DELETED>    (e) Ongoing Monitoring of Artificial Intelligence in High 
Risk Use Cases.--The Chief Artificial Intelligence Officer of each 
agency shall--</DELETED>
        <DELETED>    (1) establish a reporting system, consistent with 
        section 5(g), and suspension and shut-down protocols for 
        defects or adverse impacts of artificial intelligence, and 
        conduct ongoing monitoring, as determined necessary by use 
        case;</DELETED>
        <DELETED>    (2) oversee the development and implementation of 
        ongoing testing and evaluation processes for artificial 
        intelligence in high risk use cases to ensure continued 
        mitigation of the potential risks identified in the risk 
        evaluation process;</DELETED>
        <DELETED>    (3) implement a process to ensure that risk 
        mitigation efforts for artificial intelligence are reviewed not 
        less than annually and updated as necessary to account for the 
        development of new versions of artificial intelligence and 
        changes to the risk profile; and</DELETED>
        <DELETED>    (4) adhere to pre-deployment requirements under 
        subsection (d) in each case in which a low or medium risk 
        artificial intelligence use case becomes a high risk artificial 
        intelligence use case.</DELETED>
<DELETED>    (f) Exemption From Requirements for Select Use Cases.--The 
Chief Artificial Intelligence Officer of each agency--</DELETED>
        <DELETED>    (1) may designate select, low risk use cases, 
        including current and future use cases, that do not have to 
        comply with all or some of the requirements in this Act; 
        and</DELETED>
        <DELETED>    (2) shall publicly disclose all use cases exempted 
        under paragraph (1) with a justification for each exempted use 
        case.</DELETED>
<DELETED>    (g) Exception.--The requirements under subsections (a) and 
(b) shall not apply to an algorithm software update, enhancement, 
derivative, correction, defect, or fix for artificial intelligence that 
does not materially change the compliance of the deployer with the 
requirements of those subsections, unless determined otherwise by the 
agency Chief Artificial Intelligence Officer.</DELETED>
<DELETED>    (h) Waivers.--</DELETED>
        <DELETED>    (1) In general.--The head of an agency, on a 
        case-by-case basis, may waive 1 or more requirements under 
        subsection (d) for a specific use case after making a written 
        determination, based upon a risk assessment conducted by a 
        human with respect to the specific use case, that fulfilling 
        the requirement or requirements prior to procuring or 
        obtaining, developing, or using artificial intelligence would 
        increase risks to safety or rights overall or would create an 
        unacceptable impediment to critical agency 
        operations.</DELETED>
        <DELETED>    (2) Requirements; limitations.--A waiver under 
        this subsection shall be--</DELETED>
                <DELETED>    (A) in the national security interests of 
                the United States, as determined by the head of the 
                agency;</DELETED>
                <DELETED>    (B) submitted to the relevant 
                congressional committees not later than 15 days after 
                the head of the agency grants the waiver; and</DELETED>
                <DELETED>    (C) limited to a duration of 1 year, at 
                which time the head of the agency may renew the waiver 
                and submit the renewed waiver to the relevant 
                congressional committees.</DELETED>
<DELETED>    (i) Infrastructure Security.--The head of an agency, in 
consultation with the agency Chief Artificial Intelligence Officer, 
Chief Information Officer, Chief Data Officer, and other relevant 
agency officials, shall reevaluate infrastructure security protocols 
based on the artificial intelligence use cases and associated risks to 
infrastructure security of the agency.</DELETED>
<DELETED>    (j) Compliance Deadline.--Not later than 270 days after 
the date of enactment of this Act, the requirements of subsections (a) 
through (i) of this section shall apply with respect to artificial 
intelligence that is already in use on the date of enactment of this 
Act.</DELETED>

<DELETED>SEC. 9. PROHIBITION ON SELECT ARTIFICIAL INTELLIGENCE USE 
              CASES.</DELETED>

<DELETED>    No agency may develop, procure or obtain, or use 
artificial intelligence for--</DELETED>
        <DELETED>    (1) mapping facial biometric features of an 
        individual to assign a corresponding emotion and potentially take 
        action against the individual;</DELETED>
        <DELETED>    (2) categorizing and taking action against an 
        individual based on biometric data of the individual to deduce 
        or infer race, political opinion, religious or philosophical 
        beliefs, trade union status, sexual orientation, or other 
        personal trait;</DELETED>
        <DELETED>    (3) evaluating, classifying, rating, or scoring 
        the trustworthiness or social standing of an individual based 
        on multiple data points and time occurrences related to the 
        social behavior of the individual in multiple contexts or known 
        or predicted personal or personality characteristics in a 
        manner that may lead to discriminatory outcomes; or</DELETED>
        <DELETED>    (4) any other use found by the agency to pose an 
        unacceptable risk under the risk classification system of the 
        agency, pursuant to section 7.</DELETED>

<DELETED>SEC. 10. AGENCY PROCUREMENT INNOVATION LABS.</DELETED>

<DELETED>    (a) In General.--An agency subject to the Chief Financial 
Officers Act of 1990 (31 U.S.C. 901 note; Public Law 101-576) that does 
not have a Procurement Innovation Lab on the date of enactment of this 
Act should consider establishing a lab or similar mechanism to test new 
approaches, share lessons learned, and promote best practices in 
procurement, including for commercial technology, such as artificial 
intelligence, that is trustworthy and best-suited for the needs of the 
agency.</DELETED>
<DELETED>    (b) Functions.--The functions of the Procurement 
Innovation Lab or similar mechanism should include--</DELETED>
        <DELETED>    (1) providing leadership support as well as 
        capability and capacity to test, document, and help agency 
        programs adopt new and better practices through all stages of 
        the acquisition lifecycle, beginning with project definition 
        and requirements development;</DELETED>
        <DELETED>    (2) providing the workforce of the agency with a 
        clear pathway to test and document new acquisition practices 
        and facilitate fresh perspectives on existing 
        practices;</DELETED>
        <DELETED>    (3) helping programs and integrated project teams 
        successfully execute emerging and well-established acquisition 
        practices to achieve better results; and</DELETED>
        <DELETED>    (4) promoting meaningful collaboration among 
        offices that are responsible for requirements development, 
        contracting officers, and others, including financial and legal 
        experts, that share in the responsibility for making a 
        successful procurement.</DELETED>
<DELETED>    (c) Structure.--An agency should consider placing the 
Procurement Innovation Lab or similar mechanism as a supporting arm of 
the Chief Acquisition Officer or Senior Procurement Executive of the 
agency and shall have wide latitude in structuring the Procurement 
Innovation Lab or similar mechanism and in addressing associated 
personnel staffing issues.</DELETED>

<DELETED>SEC. 11. MULTI-PHASE COMMERCIAL TECHNOLOGY TEST 
              PROGRAM.</DELETED>

<DELETED>    (a) Test Program.--The head of an agency may procure 
commercial technology through a multi-phase test program of contracts 
in accordance with this section.</DELETED>
<DELETED>    (b) Purpose.--A test program established under this 
section shall--</DELETED>
        <DELETED>    (1) provide a means by which an agency may post a 
        solicitation, including for a general need or area of interest, 
        for which the agency intends to explore commercial technology 
        solutions and for which an offeror may submit a bid based on 
        existing commercial capabilities of the offeror with minimal 
        modifications or a technology that the offeror is developing 
        for commercial purposes; and</DELETED>
        <DELETED>    (2) use phases, as described in subsection (c), to 
        minimize government risk and incentivize competition.</DELETED>
<DELETED>    (c) Contracting Procedures.--Under a test program 
established under this section, the head of an agency may acquire 
commercial technology through a competitive evaluation of proposals 
resulting from general solicitation in the following phases:</DELETED>
        <DELETED>    (1) Phase 1 (viability of potential solution).--
        Selectees may be awarded a portion of the total contract award 
        and have a period of performance of not longer than 1 year to 
        prove the merits, feasibility, and technological benefit the 
        proposal would achieve for the agency.</DELETED>
        <DELETED>    (2) Phase 2 (major details and scaled test).--
        Selectees may be awarded a portion of the total contract award 
        and have a period of performance of not longer than 1 year to 
        create a detailed timeline, establish an agreeable intellectual 
        property ownership agreement, and implement the proposal on a 
        small scale.</DELETED>
        <DELETED>    (3) Phase 3 (implementation or recycle).--
        </DELETED>
                <DELETED>    (A) In general.--Following successful 
                performance on phase 1 and 2, selectees may be awarded 
                up to the full remainder of the total contract award to 
                implement the proposal, depending on the agreed upon 
                costs and the number of contractors selected.</DELETED>
                <DELETED>    (B) Failure to find suitable selectees.--
                If no selectees are found suitable for phase 3, the 
                agency head may determine not to make any selections 
                agency head may determine not to make any selections 
                for phase 3, terminate the solicitation, and utilize any 
                remaining funds to issue a modified general 
                solicitation for the same area of interest.</DELETED>
<DELETED>    (d) Treatment as Competitive Procedures.--The use of 
general solicitation competitive procedures for a test program under 
this section shall be considered to be use of competitive procedures as 
defined in section 152 of title 41, United States Code.</DELETED>
<DELETED>    (e) Limitation.--The head of an agency shall not enter 
into a contract under the test program for an amount in excess of 
$25,000,000.</DELETED>
<DELETED>    (f) Guidance.--</DELETED>
        <DELETED>    (1) Federal acquisition regulatory council.--The 
        Federal Acquisition Regulatory Council shall revise the Federal 
        Acquisition Regulation as necessary to implement this section, 
        including requirements for each general solicitation under a 
        test program to be made publicly available through a means that 
        provides access to the notice of the general solicitation 
        through the System for Award Management or subsequent 
        government-wide point of entry, with classified solicitations 
        posted to the appropriate government portal.</DELETED>
        <DELETED>    (2) Agency procedures.--The head of an agency may 
        not award contracts under a test program until the agency 
        issues guidance with procedures for use of the authority. The 
        guidance shall be issued in consultation with the relevant 
        Acquisition Regulatory Council and shall be publicly 
        available.</DELETED>
<DELETED>    (g) Sunset.--The authority for a test program under this 
section shall terminate on the date that is 5 years after the date the 
Federal Acquisition Regulation is revised pursuant to subsection (f)(1) 
to implement the program.</DELETED>

<DELETED>SEC. 12. RESEARCH AND DEVELOPMENT PROJECT PILOT 
              PROGRAM.</DELETED>

<DELETED>    (a) Pilot Program.--The head of an agency may carry out 
research and prototype projects in accordance with this 
section.</DELETED>
<DELETED>    (b) Purpose.--A pilot program established under this 
section shall provide a means by which an agency may--</DELETED>
        <DELETED>    (1) carry out basic, applied, and advanced 
        research and development projects; and</DELETED>
        <DELETED>    (2) carry out prototype projects that address--
        </DELETED>
                <DELETED>    (A) a proof of concept, model, or process, 
                including a business process;</DELETED>
                <DELETED>    (B) reverse engineering to address 
                obsolescence;</DELETED>
                <DELETED>    (C) a pilot or novel application of 
                commercial technologies for agency mission 
                purposes;</DELETED>
                <DELETED>    (D) agile development activity;</DELETED>
                <DELETED>    (E) the creation, design, development, or 
                demonstration of operational utility; or</DELETED>
                <DELETED>    (F) any combination of items described in 
                subparagraphs (A) through (E).</DELETED>
<DELETED>    (c) Contracting Procedures.--Under a pilot program 
established under this section, the head of an agency may carry out 
research and prototype projects--</DELETED>
        <DELETED>    (1) using small businesses to the maximum extent 
        practicable;</DELETED>
        <DELETED>    (2) using cost sharing arrangements where 
        practicable;</DELETED>
        <DELETED>    (3) tailoring intellectual property terms and 
        conditions relevant to the project and commercialization 
        opportunities; and</DELETED>
        <DELETED>    (4) ensuring that such projects do not duplicate 
        research being conducted under existing agency 
        programs.</DELETED>
<DELETED>    (d) Treatment as Competitive Procedures.--The use of 
research and development contracting procedures under this section 
shall be considered to be use of competitive procedures, as defined in 
section 152 of title 41, United States Code.</DELETED>
<DELETED>    (e) Treatment as Commercial Technology.--The use of 
research and development contracting procedures under this section 
shall be considered to be use of commercial technology, as defined in 
section 2.</DELETED>
<DELETED>    (f) Follow-On Projects or Phases.--A follow-on contract 
provided for in a contract opportunity announced under this section 
may, at the discretion of the head of the agency, be awarded to a 
participant in the original project or phase if the original project or 
phase was successfully completed.</DELETED>
<DELETED>    (g) Limitation.--The head of an agency shall not enter 
into a contract under the pilot program for an amount in excess of 
$10,000,000.</DELETED>
<DELETED>    (h) Guidance.--</DELETED>
        <DELETED>    (1) Federal acquisition regulatory council.--The 
        Federal Acquisition Regulatory Council shall revise the Federal 
        Acquisition Regulation research and development contracting 
        procedures as necessary to implement this section, including 
        requirements for each research and development project under a 
        pilot program to be made publicly available through a means 
        that provides access to the notice of the opportunity through 
        the System for Award Management or subsequent government-wide 
        point of entry, with classified solicitations posted to the 
        appropriate government portal.</DELETED>
        <DELETED>    (2) Agency procedures.--The head of an agency may 
        not award contracts under a pilot program until the agency, in 
        consultation with the relevant Acquisition Regulatory Council, 
        issues and makes publicly available guidance on procedures for 
        use of the authority.</DELETED>
<DELETED>    (i) Reporting.--Contract actions entered into under this 
section shall be reported to the Federal Procurement Data System, or 
any successor system.</DELETED>
<DELETED>    (j) Sunset.--The authority for a pilot program under this 
section shall terminate on the date that is 5 years after the date the 
Federal Acquisition Regulation is revised pursuant to subsection (h)(1) 
to implement the program.</DELETED>

<DELETED>SEC. 13. DEVELOPMENT OF TOOLS AND GUIDANCE FOR TESTING AND 
              EVALUATING ARTIFICIAL INTELLIGENCE.</DELETED>

<DELETED>    (a) Agency Report Requirements.--In a manner specified by 
the Director, the Chief Artificial Intelligence Officer shall identify 
obstacles encountered in the testing and evaluation of artificial 
intelligence and annually submit to the Council a report specifying--
</DELETED>
        <DELETED>    (1) the nature of the obstacles;</DELETED>
        <DELETED>    (2) the impact of the obstacles on agency 
        operations, mission achievement, and artificial intelligence 
        adoption;</DELETED>
        <DELETED>    (3) recommendations for addressing the identified 
        obstacles, including the need for particular resources or 
        guidance to address certain obstacles; and</DELETED>
        <DELETED>    (4) a timeline that would be needed to implement 
        proposed solutions.</DELETED>
<DELETED>    (b) Council Review and Collaboration.--</DELETED>
        <DELETED>    (1) Annual review.--Not less frequently than 
        annually, the Council shall conduct a review of agency reports 
        under subsection (a) to identify common challenges and 
        opportunities for cross-agency collaboration.</DELETED>
        <DELETED>    (2) Development of tools and guidance.--</DELETED>
                <DELETED>    (A) In general.--Not later than 2 years 
                after the date of enactment of this Act, the Director, 
                in consultation with the Council, shall convene a 
                working group to--</DELETED>
                        <DELETED>    (i) develop tools and guidance to 
                        assist agencies in addressing the obstacles 
                        that agencies identify in the reports under 
                        subsection (a);</DELETED>
                        <DELETED>    (ii) support interagency 
                        coordination to facilitate the identification 
                        and use of relevant voluntary standards, 
                        guidelines, and other consensus-based 
                        approaches for testing and evaluation and other 
                        relevant areas; and</DELETED>
                        <DELETED>    (iii) address any additional 
                        matters determined appropriate by the 
                        Director.</DELETED>
                <DELETED>    (B) Working group membership.--The working 
                group described in subparagraph (A) shall include 
                Federal interdisciplinary personnel, such as 
                technologists, information security personnel, domain 
                experts, privacy officers, data officers, civil rights 
                and civil liberties officers, contracting officials, 
                legal counsel, customer experience professionals, and 
                others, as determined by the Director.</DELETED>
        <DELETED>    (3) Information sharing.--The Director, in 
        consultation with the Council, shall establish a mechanism for 
        sharing tools and guidance developed under paragraph (2) across 
        agencies.</DELETED>
<DELETED>    (c) Congressional Reporting.--</DELETED>
        <DELETED>    (1) In general.--Each agency shall submit the 
        annual report under subsection (a) to relevant congressional 
        committees.</DELETED>
        <DELETED>    (2) Consolidated report.--The Director, in 
        consultation with the Council, may suspend the requirement 
        under paragraph (1) and submit to the relevant congressional 
        committees a consolidated report that conveys government-wide 
        testing and evaluation challenges, recommended solutions, and 
        progress toward implementing recommendations from prior reports 
        developed in fulfillment of this subsection.</DELETED>
<DELETED>    (d) Sunset.--The requirements under this section shall 
terminate on the date that is 10 years after the date of enactment of 
this Act.</DELETED>

<DELETED>SEC. 14. UPDATES TO ARTIFICIAL INTELLIGENCE USE CASE 
              INVENTORIES.</DELETED>

<DELETED>    (a) Amendments.--</DELETED>
        <DELETED>    (1) Advancing american ai act.--The Advancing 
        American AI Act (Public Law 117-263; 40 U.S.C. 11301 note) is 
        amended--</DELETED>
                <DELETED>    (A) in section 7223(3), by striking the 
                period and inserting ``and in section 5002 of the 
                National Artificial Intelligence Initiative Act of 2020 
                (15 U.S.C. 9401).''; and</DELETED>
                <DELETED>    (B) in section 7225, by striking 
                subsection (d).</DELETED>
        <DELETED>    (2) Executive order 13960.--The provisions of 
        section 5 of Executive Order 13960 (85 Fed. Reg. 78939; 
        relating to promoting the use of trustworthy artificial 
        intelligence in Federal Government) that exempt classified and 
        sensitive use cases from agency inventories of artificial 
        intelligence use cases shall cease to have legal 
        effect.</DELETED>
<DELETED>    (b) Compliance.--</DELETED>
        <DELETED>    (1) In general.--The Director shall ensure that 
        agencies submit artificial intelligence use case inventories 
        and that the inventories comply with applicable artificial 
        intelligence inventory guidance.</DELETED>
        <DELETED>    (2) Annual report.--The Director shall submit to 
        the relevant congressional committees an annual report on 
        agency compliance with artificial intelligence inventory 
        guidance.</DELETED>
<DELETED>    (c) Disclosure.--</DELETED>
        <DELETED>    (1) In general.--The artificial intelligence 
        inventory of each agency shall publicly disclose--</DELETED>
                <DELETED>    (A) whether artificial intelligence was 
                developed internally by the agency or procured 
                externally, without excluding any use case on the basis 
                that the use case is ``sensitive'' solely because it 
                was externally procured;</DELETED>
                <DELETED>    (B) data provenance information, including 
                identifying the source of the training data of the 
                artificial intelligence, including internal government 
                data, public data, commercially held data, or similar 
                data;</DELETED>
                <DELETED>    (C) the level of risk at which the agency 
                has classified the artificial intelligence use case and 
                a brief explanation for how the determination was 
                made;</DELETED>
                <DELETED>    (D) a list of targeted impact assessments 
                conducted pursuant to section 7(a)(2)(C); and</DELETED>
                <DELETED>    (E) the number of artificial intelligence 
                use cases excluded from public reporting as being 
                ``sensitive.''</DELETED>
        <DELETED>    (2) Updates.--</DELETED>
                <DELETED>    (A) In general.--When an agency updates 
                the public artificial intelligence use case inventory 
                of the agency, the agency shall disclose the date of 
                the modification and make change logs publicly 
                available and accessible.</DELETED>
                <DELETED>    (B) Guidance.--The Director shall issue 
                guidance to agencies that describes how to 
                appropriately update artificial intelligence use case 
                inventories and clarifies how sub-agencies and 
                regulatory agencies should participate in the 
                artificial intelligence use case inventorying 
                process.</DELETED>
<DELETED>    (d) Congressional Reporting.--The head of each agency 
shall submit to the relevant congressional committees a copy of the 
annual artificial intelligence use case inventory of the agency, 
including--</DELETED>
        <DELETED>    (1) the use cases that have been identified as 
        ``sensitive'' and not for public disclosure; and</DELETED>
        <DELETED>    (2) a classified annex of classified use 
        cases.</DELETED>
<DELETED>    (e) Government Trends Report.--Beginning 1 year after the 
date of enactment of this Act, and annually thereafter, the Director, 
in coordination with the Council, shall issue a report, based on the 
artificial intelligence use cases reported in use case inventories, 
that describes trends in the use of artificial intelligence in the 
Federal Government.</DELETED>
<DELETED>    (f) Comptroller General.--</DELETED>
        <DELETED>    (1) Report required.--Not later than 1 year after 
        the date of enactment of this Act, and annually thereafter, the 
        Comptroller General of the United States shall submit to 
        relevant congressional committees a report on whether agencies 
        are appropriately classifying use cases.</DELETED>
        <DELETED>    (2) Appropriate classification.--The Comptroller 
        General of the United States shall examine whether the 
        appropriate level of disclosure of artificial intelligence use 
        cases by agencies should be included on the High Risk List of 
        the Government Accountability Office.</DELETED>

SECTION 1. SHORT TITLE.

    This Act may be cited as the ``Promoting Responsible Evaluation and 
Procurement to Advance Readiness for Enterprise-wide Deployment for 
Artificial Intelligence Act'' or the ``PREPARED for AI Act''.

SEC. 2. DEFINITIONS.

    In this Act:
            (1) Adverse outcome.--The term ``adverse outcome'' means 
        any behavior or malfunction, such as a hallucination, 
        algorithmic bias, or inconsistent output, of artificial 
        intelligence that leads to--
                    (A) harm impacting rights or safety, as described 
                in section 7(a)(3);
                    (B) the death of an individual or damage to the 
                health of an individual;
                    (C) material or irreversible disruption of the 
                management and operation of critical infrastructure, as 
                described in section 7(a)(3)(A)(ii)(III);
                    (D) material damage to property or the environment;
                    (E) loss of a mission-critical system or equipment;
                    (F) failure of the mission of an agency;
                    (G) the wrongful denial of a benefit, payment, or 
                other service to an individual or group of individuals 
                who would have otherwise been eligible;
                    (H) the denial of an employment, contract, grant, 
                or similar opportunity that would have otherwise been 
                offered; or
                    (I) another consequence, as determined by the 
                Director with public notice.
            (2) Agency.--The term ``agency''--
                    (A) means each agency described in section 3502(1) 
                of title 44, United States Code; and
                    (B) does not include any of the independent 
                regulatory agencies described in section 3502(5) of 
                title 44, United States Code.
            (3) Artificial intelligence.--The term ``artificial 
        intelligence''--
                    (A) has the meaning given that term in section 5002 
                of the National Artificial Intelligence Initiative Act 
                of 2020 (15 U.S.C. 9401); and
                    (B) includes the artificial systems and techniques 
                described in paragraphs (1) through (5) of section 
                238(g) of the John S. McCain National Defense 
                Authorization Act for Fiscal Year 2019 (Public Law 115-
                232; 10 U.S.C. 4061 note prec.).
            (4) Biometric data.--The term ``biometric data'' means data 
        resulting from specific technical processing relating to the 
        unique physical, physiological, or behavioral characteristics 
        of an individual, including facial images, dactyloscopic data, 
        physical movement and gait, breath, voice, DNA, blood type, and 
        expression of emotion, thought, or feeling.
            (5) Commercial technology.--The term ``commercial 
        technology''--
                    (A) means a technology, process, or method, 
                including research or development; and
                    (B) includes commercial products, commercial 
                services, and other commercial items, as defined in the 
                Federal Acquisition Regulation, including any addition 
                or update thereto by the Federal Acquisition Regulatory 
                Council.
            (6) Council.--The term ``Council'' means the Chief 
        Artificial Intelligence Officers Council established under 
        section 5(a).
            (7) Deployer.--The term ``deployer'' means an entity that 
        operates, whether for the entity itself or on behalf of a third 
        party, artificial intelligence, whether developed internally or 
        by a third-party developer.
            (8) Developer.--The term ``developer'' means an entity that 
        designs, codes, or produces artificial intelligence, including 
        materially modifying artificial intelligence designed, coded, 
        or produced by another entity.
            (9) Director.--The term ``Director'' means the Director of 
        the Office of Management and Budget.
            (10) Government data.--The term ``Government data'' means 
        data collected, processed, maintained, disseminated, or managed 
        by an agency, including data reported to an agency.
            (11) Impact assessment.--The term ``impact assessment'' 
        means a structured process for considering and evaluating the 
        implications of a proposed artificial intelligence use case.
            (12) Relevant congressional committees.--The term 
        ``relevant congressional committees'' means the Committee on 
        Homeland Security and Governmental Affairs of the Senate and 
        the Committee on Oversight and Accountability of the House of 
        Representatives.
            (13) Risk.--The term ``risk'' means the combination of the 
        probability of an occurrence of harm and the potential severity 
        of that harm.
            (14) Use case.--The term ``use case'' means the ways and 
        context in which artificial intelligence is deployed to achieve 
        a specific objective.

SEC. 3. IMPLEMENTATION OF REQUIREMENTS.

    (a) Agency Implementation.--The Director shall facilitate the 
implementation of the requirements of this Act, including through the 
issuance of binding or nonbinding guidance, as the Director determines 
appropriate.
    (b) Annual Briefing.--Not later than 180 days after the date of 
enactment of this Act, and annually thereafter, the Director shall 
brief the relevant congressional committees on implementation of 
this Act and related considerations.

SEC. 4. PROCUREMENT OF ARTIFICIAL INTELLIGENCE.

    (a) Government-wide Requirements.--
            (1) In general.--Not later than 15 months after the date of 
        enactment of this Act, the Federal Acquisition Regulatory 
        Council shall review Federal Acquisition Regulation acquisition 
        planning, source selection, and other requirements and update 
        the Federal Acquisition Regulation as needed for agency 
        procurement of artificial intelligence, including--
                    (A) a requirement to address the outcomes of the 
                risk evaluation and impact assessments required under 
                section 7(a);
                    (B) a requirement for an interdisciplinary approach 
                that includes consultation with agency experts prior 
                to, and throughout, as necessary, procuring or 
                obtaining artificial intelligence; and
                    (C) any other considerations determined relevant by 
                the Federal Acquisition Regulatory Council.
            (2) Harmonization.--The Federal Acquisition Regulation 
        review described in paragraph (1) shall determine the extent to 
        which existing requirements and procedures need to be revised 
        or supplemented to address risks and opportunities specific to 
        procurement of artificial intelligence.
            (3) Interdisciplinary approach.--The interdisciplinary 
        approach described in paragraph (1)(B) may--
                    (A) vary depending on the use case and the risks 
                determined to be associated with the use case; and
                    (B) include, as practicable, technologists, 
                information security personnel, domain experts, privacy 
                officers, data officers, civil rights and civil 
                liberties officers, contracting officials, legal 
                counsel, customer experience professionals, and others.
            (4) Acquisition planning.--The updates described in 
        paragraph (1) shall, at a minimum, include--
                    (A) data ownership and privacy;
                    (B) data information security;
                    (C) interoperability requirements;
                    (D) data and model assessment processes;
                    (E) scope of use;
                    (F) ongoing monitoring and evaluation techniques;
                    (G) environmental impact;
                    (H) cybersecurity minimum standards, including 
                regular vulnerability testing and patching, and 
                cybersecurity monitoring;
                    (I) risk mitigation techniques, including a plan 
                for minimizing the likelihood of adverse outcomes and 
                reporting adverse outcomes, pursuant to section 5(h); 
                and
                    (J) developer and deployer disclosure requirements 
                necessary to comply with the requirements of this Act.
    (b) Requirements for High Risk Artificial Intelligence Use Cases.--
            (1) Establishment.--Beginning on the date that is 1 year 
        after the date of enactment of this Act, the head of an agency 
        may not procure or obtain artificial intelligence for a high 
        risk use case, as described in section 7(a)(3), prior to 
        establishing and incorporating certain terms into relevant 
        contracts and agreements for an artificial intelligence use 
        case, including--
                    (A) a requirement to disclose to the agency the 
                purpose for which the artificial intelligence was 
                intended to be used and any potential risks from the 
                use of the artificial intelligence;
                    (B) requirements for safety, security, privacy, and 
                trustworthiness, including--
                            (i) a reporting mechanism through which 
                        agency personnel are notified of an adverse 
                        outcome involving artificial intelligence 
                        procured or obtained by the agency;
                            (ii) a requirement, in accordance with 
                        section 5(h), that agency personnel receive a 
                        notification of an adverse outcome involving 
                        artificial intelligence procured or obtained by 
                        the agency, and, at a minimum, an explanation 
                        of the cause of the adverse outcome and any 
                        data directly connected to the adverse outcome;
                            (iii) a provision that the agency may 
                        consider temporarily or permanently suspending 
                        use of the artificial intelligence, with 
                        minimal impact on unrelated services, if the 
                        risks of the artificial intelligence to rights 
                        or safety outweigh the benefits of the use 
                        case; and
                            (iv) a requirement that the deployer and 
                        any relevant developer utilize the most 
                        recently updated version of the framework 
                        developed and updated pursuant to section 
                        22A(c) of the National Institute of Standards 
                        and Technology Act (15 U.S.C. 278h-1(c));
                    (C) requirements to disclose to the agency 
                sufficient descriptive information relating to the 
                ownership of data, as appropriate by use case, 
                including--
                            (i) requirements for retention of rights to 
                        Government data and any modification to 
                        Government data, including to protect 
                        Government data from unauthorized disclosure 
                        and use to subsequently train or improve the 
                        functionality of commercial products offered by 
                        the deployer, any relevant developers, or 
                        others; and
                            (ii) a requirement that the deployer, if 
                        the deployer is not the agency, and any 
                        relevant developers or other parties isolate 
                        non-public Government data from all other data 
                        through methods, such as physical separation, 
                        electronic separation via secure copies with 
                        strict access controls, or other computational 
                        isolation mechanisms;
                    (D) requirements for evaluation and testing of 
                artificial intelligence based on use case, to be 
                performed on an ongoing basis; and
                    (E) requirements to provide documentation, as 
                determined necessary and requested by the agency, in 
                accordance with section 7(b).
            (2) Review.--The Senior Procurement Executive, in 
        coordination with the Chief Artificial Intelligence Officer, 
        shall, as practicable, consult with technologists, information 
        security and cybersecurity personnel, domain experts, privacy 
        officers, data officers, civil rights and civil liberties 
        officers, contracting officials, legal counsel, customer 
        experience professionals, program evaluation officers, and 
        other relevant agency officials to review the requirements 
        described in subparagraphs (A) through (E) of paragraph (1) and 
        determine whether it may be necessary to incorporate additional 
        requirements into relevant contracts or agreements.
            (3) Regulation.--The Federal Acquisition Regulatory Council 
        shall revise the Federal Acquisition Regulation as necessary to 
        implement the requirements of this subsection.

SEC. 5. INTERAGENCY GOVERNANCE OF ARTIFICIAL INTELLIGENCE.

    (a) Chief Artificial Intelligence Officers Council.--Not later than 
60 days after the date of enactment of this Act, the Director shall 
establish a Chief Artificial Intelligence Officers Council.
    (b) Duties.--The duties of the Council shall include--
            (1) coordinating agency development and use of artificial 
        intelligence in agency programs and operations, including 
        practices relating to the design, operation, risk management, 
        and performance of artificial intelligence;
            (2) sharing experiences, ideas, best practices, and 
        innovative approaches relating to artificial intelligence;
            (3) identifying, developing, and coordinating multi-agency 
        projects and other initiatives;
            (4) harmonizing agency management of risks relating to 
        developing, obtaining, or using artificial intelligence, 
        including by developing a common template to guide agency Chief 
        Artificial Intelligence Officers in implementing a risk 
        evaluation process that may incorporate best practices, such as 
        those from--
                    (A) the most recently updated version of the 
                framework developed and updated pursuant to section 
                22A(c) of the National Institute of Standards and 
                Technology Act (15 U.S.C. 278h-1(c)); and
                    (B) the report published by the Government 
                Accountability Office entitled ``Artificial 
                Intelligence: An Accountability Framework for Federal 
                Agencies and Other Entities'' (GAO-21-519SP), published 
                on June 30, 2021;
            (5) promoting the development and use of secure, common, 
        shared, or other approaches to key processes that improve the 
        delivery of services for the public;
            (6) soliciting and providing perspectives on matters of 
        concern, including from and to--
                    (A) interagency councils;
                    (B) Federal Government entities;
                    (C) private sector, public sector, nonprofit, and 
                academic experts;
                    (D) State, local, Tribal, territorial, and 
                international governments; and
                    (E) other individuals and entities, as determined 
                relevant by the Council;
            (7) working with the Chief Acquisition Officers Council--
                    (A) to ensure contractors, including small 
                businesses, have the benefit of integrity, fairness, 
                competition, openness, and efficiency in accordance 
                with the statutory functions of the Chief Acquisition 
                Officers Council, as described in section 1312 of title 
                41, United States Code; and
                    (B) which shall establish a working group for the 
                purpose described in subparagraph (A) and related 
                purposes; and
            (8) any other matters determined by the Council to be 
        relevant.
    (c) Membership of the Council.--
            (1) Leaders.--
                    (A) Chair.--The Director shall serve as Chair of 
                the Council.
                    (B) Vice chair.--The Council shall have a Vice 
                Chair, who shall be an individual selected by a 
                majority of the members of the Council.
                    (C) Additional roles.--The Council may establish 
                additional leadership roles, at the discretion of the 
                Council.
            (2) Members.--Other members of the Council shall include--
                    (A) the Chief Artificial Intelligence Officer of 
                each agency; and
                    (B) the senior official for artificial intelligence 
                of the Office of Management and Budget.
    (d) Standing Committees; Working Groups.--The Council shall have 
the authority to establish standing committees, including an executive 
committee, and working groups.
    (e) Council Staff.--The Council may enter into an interagency 
agreement with the Administrator of General Services for shared 
services for the purpose of staffing the Council.
    (f) Reports.--
            (1) In general.--Not later than 3 years after the date of 
        enactment of this Act, the Comptroller General of the United 
        States shall submit to the relevant congressional committees a 
        report that--
                    (A) identifies, to the extent practicable, ways to 
                improve coordination with other councils throughout the 
                Federal Government; and
                    (B) recommends ways to improve the utility of the 
                Council for the public and other agencies.
            (2) Consolidation.--In fulfilling the requirement under 
        paragraph (1), the Comptroller General of the United States 
        may, if desired, consolidate the report under that paragraph 
        with another report concerning interagency coordination.
    (g) Development, Adaptation, and Documentation.--
            (1) Guidance.--Not later than 1 year after the date of 
        enactment of this Act, the Director shall issue guidance on--
                    (A) how to conduct the agency impact assessments 
                described in section 7(a) and other relevant impact 
                assessments as determined appropriate by the Director, 
                including the appropriateness of adapting pre-existing 
                assessments, including privacy and security impact 
                assessments, for purposes of an artificial intelligence 
                impact assessment;
                    (B) development of a model template for the risk 
                classification explanations that each agency must 
                provide under section 7(a)(6);
                    (C) development of a model template for procurement 
                of artificial intelligence intended to help agencies 
                use consistent terms, definitions, and documentation 
                requirements; and
                    (D) additional matters relating to the 
                implementation of this Act, as determined relevant by 
                the Director.
            (2) Biennial review.--The Director shall periodically, but 
        not less frequently than biennially, review and update, as 
        needed, the guidance issued under paragraph (1).
    (h) Adverse Outcome Reporting.--
            (1) In general.--Not later than 1 year after the date of 
        enactment of this Act, the Director shall develop procedures 
        for ensuring that, at a minimum--
                    (A) adverse outcomes involving artificial 
                intelligence procured or obtained or used by agencies 
                are reported promptly to the relevant agency or 
                agencies by the developer or deployer, if the deployer 
                is not the agency, or to the developer or deployer by 
                the relevant agency, whichever first becomes aware of 
                the adverse outcome; and
                    (B) information relating to an adverse outcome 
                described in subparagraph (A) is appropriately shared 
                among agencies.
            (2) Single report.--Adverse outcomes also qualifying for 
        incident reporting under section 3554 of title 44, United 
        States Code, or other relevant laws or policies, may be 
        reported under such other reporting requirement and are not 
        required to be additionally reported under this subsection.
            (3) Notice to developers and deployers.--
                    (A) In general.--Upon discovery of an 
                adverse outcome by an agency, the agency shall--
                            (i) report the adverse outcome to the 
                        deployer, if the deployer is not the agency, 
                        and any relevant developers; and
                            (ii) in consultation with any relevant 
                        deployers and developers, take action to 
                        resolve the adverse outcome and mitigate the 
                        potential for future adverse outcomes.
                    (B) Waiver.--
                            (i) In general.--Unless otherwise required 
                        by law, the head of an agency may issue a 
                        written waiver that waives the applicability of 
                        some or all of the requirements under 
                        subparagraph (A), with respect to a specific 
                        adverse outcome.
                            (ii) Written waiver contents.--A written 
                        waiver under clause (i) shall include 
                        justification for the waiver.
                            (iii) Notice.--The head of an agency shall 
                        forward advance notice of any waiver under this 
                        subparagraph to the Director.

SEC. 6. AGENCY GOVERNANCE OF ARTIFICIAL INTELLIGENCE.

    (a) In General.--The head of an agency shall--
            (1) ensure the responsible adoption of artificial 
        intelligence, including by--
                    (A) requiring the development or revision of 
                relevant agency policies and directives;
                    (B) testing, verifying, validating, and monitoring 
                artificial intelligence and the use case-specific 
                performance of artificial intelligence, proportionate 
                to risk level, to minimize the likelihood of adverse 
                outcomes by--
                            (i) ensuring the use of artificial 
                        intelligence is appropriate to and improves the 
                        effectiveness of the mission of the agency;
                            (ii) guarding against bias in data 
                        collection, use, and dissemination;
                            (iii) ensuring reliability, fairness, and 
                        transparency; and
                            (iv) protecting against impermissible 
                        discrimination;
                    (C) continuing to hire, train, and develop a 
                workforce that--
                            (i) understands the risks and benefits of 
                        artificial intelligence, including artificial 
                        intelligence embedded in agency systems and 
                        operations;
                            (ii) is able to provide human oversight for 
                        the design, implementation, and end uses of 
                        artificial intelligence; and
                            (iii) is able to review and provide redress 
                        for erroneous decisions made in the course of 
                        artificial intelligence-assisted processes; and
                    (D) ensuring implementation of the agency 
                requirements under this Act;
            (2) designate a Chief Artificial Intelligence Officer, 
        whose duties shall include--
                    (A) ensuring appropriate use of artificial 
                intelligence;
                    (B) coordinating agency use of artificial 
                intelligence;
                    (C) promoting artificial intelligence innovation;
                    (D) managing the risks of use of artificial 
                intelligence;
                    (E) minimizing the likelihood of adverse outcomes;
                    (F) supporting the head of the agency with 
                developing the risk evaluation process required under 
                section 7(a) and complying with other requirements of 
                this Act;
                    (G) supporting agency personnel leading the 
                procurement and deployment of artificial intelligence 
                to comply with the requirements under this Act; and
                    (H) coordinating with other responsible officials 
                and appropriate stakeholders with respect to the duties 
                described in subparagraphs (A) through (G), as 
                appropriate; and
            (3) form and convene an Artificial Intelligence Governance 
        Board, if required by subsection (c), which shall coordinate 
        and govern artificial intelligence issues across the agency.
    (b) Designation of Chief Artificial Intelligence Officer.--The head 
of an agency may designate as Chief Artificial Intelligence Officer an 
existing official within the agency, including the Chief Technology 
Officer, Chief Data Officer, Chief Information Officer, or other 
official with relevant or complementary authorities and 
responsibilities, if such existing official has expertise in artificial 
intelligence and meets the requirements of this section.
    (c) Artificial Intelligence Governance Board.--
            (1) Leadership.--Each agency identified in section 901(b) 
        of title 31, United States Code, shall establish an Artificial 
        Intelligence Governance Board (referred to in this subsection 
        as ``Board'') that shall be chaired by the deputy head of the 
        agency or equivalent official and vice-chaired by the Chief 
        Artificial Intelligence Officer of the agency. Neither the 
        chair nor the vice-chair may assign or delegate these roles to 
        other officials.
            (2) Representation.--The Board shall, at a minimum, include 
        representatives consisting of--
                    (A) senior agency officials from operational 
                components, if relevant;
                    (B) program officials responsible for implementing 
                artificial intelligence; and
                    (C) officials responsible for information 
                technology, data, cybersecurity, privacy, statistics, 
                civil rights and civil liberties, human capital, 
                procurement, finance, legal counsel, agency management, 
                program evaluation, and customer experience.
            (3) Existing bodies.--An agency may rely on an existing 
        governance body to fulfill the requirements of this subsection 
        if the body satisfies or is adjusted to satisfy the leadership 
        and representation requirements of paragraphs (1) and (2).
    (d) Effective Date.--Beginning on the date that is 120 days after 
the date of enactment of this Act, an agency shall not develop, 
procure, or obtain artificial intelligence prior to completing the 
requirements under paragraphs (2) and (3) of subsection (a).

SEC. 7. AGENCY REQUIREMENTS FOR USE OF ARTIFICIAL INTELLIGENCE.

    (a) Risk Evaluation Process.--
            (1) In general.--Not later than 180 days after the date of 
        enactment of this Act, the Chief Artificial Intelligence 
        Officer of each agency, in coordination with the Artificial 
        Intelligence Governance Board of the agency, shall develop and 
        implement a process for identifying when the use of artificial 
        intelligence by the agency meets the classification of high 
        risk, as described in paragraph (3).
            (2) Process requirements.--The risk evaluation process 
        described in paragraph (1) shall include, for each artificial 
        intelligence use case--
                    (A) identification of the purpose, expected 
                benefits, and potential risks of the artificial 
                intelligence use case;
                    (B) a plan to periodically review the artificial 
                intelligence use case to examine whether the expected 
                benefits and potential risks identified under 
                subparagraph (A) have changed or evolved; and
                    (C) if a high risk determination has been made, the 
                need for targeted impact assessments, beyond those 
                required under any other provision of law, to further 
                evaluate specific risks of the artificial intelligence 
                use case in coordination with other responsible 
                officials within certain impact areas, which shall 
                include privacy, security, civil rights and civil 
                liberties, accessibility, environmental impact, health 
                and safety, and any other impact area relating to high 
                risk classification under paragraph (3) as determined 
                appropriate by the Chief Artificial Intelligence 
                Officer.
            (3) High risk use cases.--
                    (A) In general.--High risk classification shall, at 
                a minimum, apply to use cases for which the outputs 
                serve as a principal basis for--
                            (i) a decision or action that has a legal, 
                        material, binding, or similarly significant 
                        effect, with respect to an individual or 
                        community, on--
                                    (I) civil rights, civil liberties, 
                                or privacy;
                                    (II) access to education, housing, 
                                insurance, credit, employment, and 
                                other programs where civil rights and 
                                equal opportunity protections apply; or
                                    (III) access to or the ability to 
                                apply for critical government resources 
                                or services, including healthcare, 
                                financial services, public housing, 
                                social services, transportation, and 
                                essential goods and services; or
                            (ii) a decision that substantially impacts 
                        the safety of, or has the potential to 
                        substantially impact the safety of--
                                    (I) an individual or community, 
                                including loss of life, serious injury, 
                                bodily harm, biological or chemical 
                                harms, occupational hazards, harassment 
                                or abuse, or mental health;
                                    (II) the environment, including 
                                irreversible or significant 
                                environmental damage;
                                    (III) critical infrastructure, 
                                including the critical infrastructure 
                                sectors defined in National Security 
                                Memorandum 22 (NSM-22) (dated April 30, 
                                2024) (or any successor directive) and 
                                the infrastructure for voting and 
                                protecting the integrity of elections; 
                                or
                                    (IV) strategic assets or resources, 
                                including high-value property and 
                                information marked as sensitive or 
                                classified by the Federal Government.
                    (B) Classification variance.--
                            (i) Variance within a mission area.--The 
                        risk evaluation process under this paragraph 
                        may allow for a particular operational use case 
                        to not be classified as high risk, even if the 
                        use case is a part of a larger area of the 
                        mission of the agency that is thought to be 
                        high risk, if the operational use case is 
                        determined not to be high risk based on the 
                        required risk evaluation under paragraph (1).
                            (ii) Changes based on testing or new 
                        information.--The risk evaluation process under 
                        this paragraph may allow for changes to the 
                        risk classification of an artificial 
                        intelligence use case based on the results from 
                        testing during the procurement process or other 
                        information that becomes available.
            (4) Review.--Not later than 1 year after the date of 
        enactment of this Act, the Chief Artificial Intelligence 
        Officer of the agency shall--
                    (A) certify whether each existing use case presents 
                a high risk; and
                    (B) identify and review any use cases the agency is 
                planning, developing, procuring, or obtaining to 
                determine whether each such use case presents a high 
                risk.
            (5) Development.--For any artificial intelligence that is 
        developed by the agency, the agency shall ensure a risk 
        evaluation is conducted prior to deployment in a production or 
        operational environment that is fit for the intended use.
            (6) Rationale for risk classification.--
                    (A) In general.--A high risk classification of an 
                artificial intelligence use case shall be accompanied 
                by an explanation from the agency that a reasonable 
                person would consider sufficient to understand how the 
                classification was determined, which shall be 
                included in the artificial intelligence use case 
                inventory of the agency.
                    (B) Template.--A risk classification explanation 
                under subparagraph (A) shall utilize the model template 
                developed by the Director under section 5(g)(1)(B) if 
                the explanation is written after the date that such 
                model template has become available.
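
    Viewed as a workflow, subsection (a) pairs a per-use-case record 
(purpose, benefits, risks, review plan) with a classification step that 
triggers targeted impact assessments and a published rationale. The 
following Python sketch is purely illustrative; the Act defines only the 
high risk tier, and every name, field, and the impact-area list below are 
assumptions drawn from paragraphs (2), (3), and (6), not statutory terms.

from dataclasses import dataclass, field
from enum import Enum, auto

class RiskClass(Enum):
    """Illustrative tiers; the Act itself defines only 'high risk'."""
    HIGH = auto()
    NOT_HIGH = auto()

# Impact areas named in subsection (a)(2)(C) for targeted assessments.
IMPACT_AREAS = [
    "privacy", "security", "civil rights and civil liberties",
    "accessibility", "environmental impact", "health and safety",
]

@dataclass
class UseCaseEvaluation:
    """One evaluation under the subsection (a) process (hypothetical)."""
    purpose: str                # (a)(2)(A): purpose of the use case
    expected_benefits: list     # (a)(2)(A): expected benefits
    potential_risks: list       # (a)(2)(A): potential risks
    review_plan: str            # (a)(2)(B): periodic review plan
    classification: RiskClass = RiskClass.NOT_HIGH
    rationale: str = ""         # (a)(6): published explanation
    targeted_assessments: list = field(default_factory=list)

def classify(ev: UseCaseEvaluation, meets_high_risk_criteria: bool,
             rationale: str) -> UseCaseEvaluation:
    """Apply a high risk determination and attach the required rationale."""
    if meets_high_risk_criteria:
        ev.classification = RiskClass.HIGH
        ev.rationale = rationale            # goes in the use case inventory
        ev.targeted_assessments = list(IMPACT_AREAS)   # (a)(2)(C)
    return ev

# Example: outputs are a principal basis for benefit decisions, which
# subsection (a)(3)(A) treats as high risk at a minimum.
ev = classify(
    UseCaseEvaluation("screen benefit applications", ["faster processing"],
                      ["wrongful denial of benefits"], "quarterly review"),
    meets_high_risk_criteria=True,
    rationale="principal basis for decisions on access to benefits",
)
assert ev.classification is RiskClass.HIGH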
    (b) Documentation Requirements.--
            (1) Documentation for high risk use cases.--Beginning on 
        the date that is 1 year after the date of enactment of this 
        Act, prior to developing, procuring or obtaining, or using 
        artificial intelligence to be used in a high risk use case, an 
        agency shall require the deployer, if the deployer is not the 
        agency, in consultation with any relevant developers, to submit 
        the following documentation:
                    (A) A description of the types of data sources used 
                to train the artificial intelligence, whether the data 
                is from licensed material, and an identification of the 
                specific issues related to safety, bias, and fairness 
                that may be expected to arise from the use of the data, 
                and any mitigation techniques used, if applicable.
                    (B) A description of the methodologies used to 
                evaluate the performance of the artificial intelligence 
                for its intended use.
                    (C) Documentation demonstrating implementation of 
                risk evaluation and management measures, including the 
                evaluation and management of safety, bias, and fairness 
                risks, as appropriate.
                    (D) Information on the collection, management, and 
                protection of data, in compliance with applicable laws.
                    (E) Documentation of the known limitations of the 
                artificial intelligence, and if applicable, 
                supplementary guidelines on how the artificial 
                intelligence is intended to be used.
            (2) Sufficiency of documentation.--The Chief Artificial 
        Intelligence Officer of an agency shall determine the 
        sufficiency of the documentation provided in meeting the 
        requirements under paragraph (1).
            (3) Updates.--An agency shall require that a deployer, if 
        the deployer is not the agency, in consultation with any 
        relevant developers, submit updates to the documentation 
        required under paragraph (1), if and when there are any 
        material changes to the information in such documentation.
            (4) Review of requirements.--Not later than 2 years after 
        the date of enactment of this Act, the Comptroller General of 
        the United States shall conduct a review of the documentation 
        requirements under paragraphs (1) and (3) to--
                    (A) examine whether agencies, third-party 
                deployers, and developers are complying with the 
                requirements under those paragraphs, and make 
                associated findings and recommendations; and
                    (B) make general findings and recommendations to 
                further assist in ensuring safe, responsible, and 
                efficient agency procurement and use of artificial 
                intelligence.
            (5) Security of provided documentation.--The head of each 
        agency shall ensure that appropriate security measures and 
        access controls are in place to protect documentation provided 
        pursuant to this section.
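
    The five elements of paragraph (1) amount to a fixed documentation 
schema that a deployer submits and the Chief Artificial Intelligence 
Officer judges for sufficiency under paragraph (2). A minimal sketch 
follows, with hypothetical field names mapped to subparagraphs (A) 
through (E); the sufficiency test shown is a placeholder, since the Act 
leaves that judgment to the Officer.

from dataclasses import dataclass

@dataclass
class HighRiskDocumentation:
    # (A) training data sources, licensing status, expected safety,
    #     bias, and fairness issues, and any mitigation techniques
    training_data_description: str
    # (B) methodologies used to evaluate performance for the intended use
    evaluation_methodologies: str
    # (C) evidence of risk evaluation and management measures
    risk_management_evidence: str
    # (D) collection, management, and protection of data
    data_protection_practices: str
    # (E) known limitations and any supplementary usage guidelines
    known_limitations: str

def is_sufficient(doc: HighRiskDocumentation) -> bool:
    """Placeholder for the paragraph (2) determination: here, merely
    that every element is present and non-empty."""
    return all(value.strip() for value in vars(doc).values())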
    (c) Information and Use Protections.--Information provided to an 
agency under subsection (b) may be used by the agency, consistent with 
otherwise applicable provisions of Federal law, solely for--
            (1) assessing the ability of artificial intelligence to 
        achieve the requirements and objectives of the agency and the 
        requirements of this Act; and
            (2) identifying--
                    (A) adverse effects of artificial intelligence on 
                the rights or safety factors identified in subsection 
                (a)(3);
                    (B) cyber threats, including the sources of the 
                cyber threats; and
                    (C) security vulnerabilities.
    (d) Pre-deployment Requirements for High Risk Artificial 
Intelligence Use Cases.--Beginning on the date that is 18 months after 
the date of enactment of this Act, the head of an agency shall not 
deploy or use artificial intelligence for a high risk use case prior 
to--
            (1) complying with the requirements of subsection (a);
            (2) obtaining documentation of the artificial intelligence 
        described in subsection (b)(2), and recording its source and use 
        case in agency software and use case inventories;
            (3) testing the artificial intelligence in an operational, 
        real-world setting with privacy, security, civil rights, and 
        civil liberty safeguards to ensure the artificial intelligence 
        is capable of meeting its objectives, and to determine, to the 
        maximum extent practicable, the likelihood and impact of 
        adverse outcomes occurring during use;
            (4) establishing appropriate agency rules of behavior for 
        the use case, including required human involvement in, and 
        reasonable plain-language notice about, decisions made in whole 
        or part by the artificial intelligence, as determined by the 
        Chief Artificial Intelligence Officer in coordination with the 
        program manager or equivalent agency personnel;
            (5) if appropriate, consulting with and collecting feedback 
        from affected communities and the public on the 
        design, development, and use of the high risk use case;
            (6) establishing appropriate agency training programs, 
        including documentation of completion of training prior to use 
        of artificial intelligence, that educate agency personnel 
        involved with the application of artificial intelligence in 
        high risk use cases on the capacities and limitations of 
        artificial intelligence, including training on--
                    (A) monitoring, detecting, and reporting anomalies, 
                dysfunctions, and unexpected performance in a timely 
                manner;
                    (B) reducing over-reliance on the output produced 
                by artificial intelligence in a high risk use case, 
                particularly if artificial intelligence is used to make 
                decisions impacting individuals;
                    (C) accurately interpreting the output of 
                artificial intelligence, particularly considering the 
                characteristics of the system and the interpretation 
                tools and methods available;
                    (D) when not to use, disregard, override, or 
                reverse the output of artificial intelligence;
                    (E) how to intervene or interrupt the operation of 
                artificial intelligence;
                    (F) limiting the use of artificial intelligence to 
                its intended purpose; and
                    (G) procedures for reporting adverse outcomes, as 
                determined under section 5(h), and other problems that 
                may arise with artificial intelligence that does not 
                function as intended; and
            (7) determining whether the benefits of the use case 
        outweigh the risks by--
                    (A) evaluating the information learned from 
                completing the requirements under paragraphs (2) and 
                (3); and
                    (B) assessing whether the requirements under 
                paragraphs (2) through (6) have been accomplished and 
                known risks have been effectively mitigated.
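
    Subsection (d) functions as a deployment gate: a high risk use case 
may not go live until all seven prerequisites are met or individually 
waived under subsection (i). A sketch of that gate logic, with 
paraphrased step labels and invented names:

# The seven prerequisites of subsection (d), keyed by paragraph number.
PRE_DEPLOYMENT_STEPS = {
    1: "risk evaluation process of subsection (a) completed",
    2: "documentation obtained and recorded in inventories",
    3: "tested in an operational, real-world setting with safeguards",
    4: "rules of behavior and plain-language notice established",
    5: "affected communities and public consulted, if appropriate",
    6: "training program established, with completion documented",
    7: "benefits determined to outweigh effectively mitigated risks",
}

def may_deploy(completed: set, waived: frozenset = frozenset()) -> bool:
    """A high risk use case deploys only when every step is completed
    or covered by a subsection (i) waiver."""
    return all(step in completed or step in waived
               for step in PRE_DEPLOYMENT_STEPS)

# Example: all steps done except consultation, which was waived.
assert may_deploy({1, 2, 3, 4, 6, 7}, waived=frozenset({5}))
assert not may_deploy({1, 2, 3})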
    (e) Determinations.--
            (1) Requests for determination information.--The head of an 
        agency shall make available to the relevant congressional 
        committees or the Director, upon request, a determination under 
        subsection (d)(7) and the respective supporting documentation.
            (2) Reevaluation.--If it is determined under subsection 
        (d)(7) that the benefits of a use case do not outweigh the 
        risks and the risks cannot be effectively mitigated, the agency 
        may decide to reevaluate the use case indefinitely or until 
        appropriate measures under the requirements in paragraphs (2) 
        through (6) of that subsection are established.
    (f) Ongoing Monitoring of Artificial Intelligence in High Risk Use 
Cases.--Beginning on the date that is 1 year after the date of 
enactment of this Act, the Chief Artificial Intelligence Officer of 
each agency shall--
            (1) establish a reporting system, consistent with section 
        5(h), and suspension and shut-down protocols for defects or 
        adverse outcomes of artificial intelligence, and conduct 
        ongoing monitoring, as determined necessary by use case;
            (2) oversee the development and implementation of ongoing 
        testing and evaluation processes for artificial intelligence in 
        high risk use cases to ensure continued mitigation of the 
        potential risks identified in the risk evaluation process; and
            (3) implement a process to ensure that risk mitigation 
        efforts for artificial intelligence are reviewed not less than 
        annually and updated as necessary to account for the 
        development of new versions of artificial intelligence and 
        changes to the risk profile.
    (g) Changed Risks.--In the process of complying with subsections 
(d) and (f), an agency shall determine whether an intended use case 
should be paused, stopped permanently, or continued if new information 
changes the risks associated with the use case or requires new testing 
and monitoring procedures under those subsections.
    (h) Exception.--The requirements under subsections (a) and (b) 
shall not apply to an algorithm software update, enhancement, 
derivative, correction, defect, or fix for artificial intelligence that 
does not materially change the compliance of the deployer with the 
requirements of those subsections, unless determined otherwise by the 
agency Chief Artificial Intelligence Officer.
    (i) Waivers.--
            (1) In general.--The head of an agency, or 1 or more deputy 
        heads of an agency designated by the head of the agency, may 
        waive 1 or more requirements under subsection (d) for a 
        specific use case after making a written determination, based 
        upon a risk assessment conducted by a human, that fulfilling 
        the requirement or requirements would increase risks to safety 
        or rights overall, would create an unacceptable impediment to 
        critical agency operations, or would not be in the national 
        security interests of the United States.
            (2) Requirements.--A waiver under paragraph (1) shall--
                    (A) include, at a minimum, the reasons for the 
                waiver and a plan to bring the specific use case into 
                compliance with subsection (d) before the end of the 
                waiver, pursuant to paragraph (4); and
                    (B) be submitted to the relevant congressional 
                committees and the Director not later than 15 days 
                after the head of the agency grants the waiver.
            (3) Review.--The Director shall review the waiver and 
        relevant documentation to determine whether the waiver was 
        improperly granted.
            (4) Duration.--A waiver under paragraph (1) shall be 
        limited to a duration of 1 year, at which time, if the agency 
        is unable to bring the specific use case into compliance with 
        subsection (d), the agency shall cease use or deployment of the 
        use case until the use case can be brought into compliance with 
        that subsection.
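
    Two deadlines govern every waiver: submission to the relevant 
congressional committees and the Director within 15 days of the grant 
(paragraph (2)(B)), and a 1-year ceiling after which a still-non-compliant 
use case must stop (paragraph (4)). A small sketch of that timing; 
treating the 1-year limit as 365 days is an assumption the section does 
not spell out.

from datetime import date, timedelta

WAIVER_REPORT_WINDOW = timedelta(days=15)   # paragraph (2)(B)
WAIVER_MAX_DURATION = timedelta(days=365)   # paragraph (4), 1 year

def report_deadline(granted: date) -> date:
    """Latest date to submit the waiver to Congress and the Director."""
    return granted + WAIVER_REPORT_WINDOW

def must_cease_use(granted: date, today: date, in_compliance: bool) -> bool:
    """After 1 year, a non-compliant use case must stop until compliant."""
    return (today - granted) > WAIVER_MAX_DURATION and not in_compliance

# Example: a waiver granted January 2, still non-compliant 14 months later.
g = date(2025, 1, 2)
assert report_deadline(g) == date(2025, 1, 17)
assert must_cease_use(g, date(2026, 3, 1), in_compliance=False)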
    (j) Infrastructure Security.--The head of an agency, in 
consultation with the agency Chief Artificial Intelligence Officer, 
Chief Information Officer, Chief Data Officer, and other relevant 
agency officials, shall reevaluate infrastructure security protocols 
based on the artificial intelligence use cases and associated risks to 
infrastructure security of the agency.
    (k) Compliance Deadline.--Not later than 270 days after the date of 
enactment of this Act, the requirements of subsections (a) through (j) 
of this section shall apply with respect to artificial intelligence 
that is already in use on the date of enactment of this Act.

SEC. 8. PROHIBITION ON SELECT ARTIFICIAL INTELLIGENCE USE CASES.

    No agency may develop, procure, obtain, or use artificial 
intelligence for--
            (1) mapping facial biometric features of an individual to 
        assign a corresponding emotion and potentially take action 
        against the individual;
            (2) categorizing and taking action against an individual 
        based on biometric data of the individual to deduce or infer 
        race, political opinion, religious or philosophical beliefs, 
        trade union status, sexual orientation, or other personal 
        trait, with the exception of deducing or inferring age in the 
        context of investigating child sexual abuse; or
            (3) evaluating, classifying, rating, or scoring the 
        trustworthiness or social standing of an individual based on 
        multiple data points and time occurrences related to the social 
        behavior of the individual in multiple contexts or known or 
        predicted personal or personality characteristics in a manner 
        that may lead to discriminatory outcomes.
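
    Because section 8 is a closed list, a procurement intake check can 
screen a proposed use case against it directly. In the sketch below, the 
category labels are paraphrases of paragraphs (1) through (3), not 
statutory text, and the age-inference carve-out for child sexual abuse 
investigations in paragraph (2) is noted but not modeled.

# Paraphrased labels for the three prohibited categories in section 8.
PROHIBITED_CATEGORIES = {
    "emotion inference from facial biometrics",      # paragraph (1)
    "biometric categorization by protected trait",   # paragraph (2)*
    "social scoring",                                # paragraph (3)
}
# *Paragraph (2) excepts inferring age when investigating child
#  sexual abuse.

def screen_use_case(categories: set) -> list:
    """Return any prohibited categories a proposed use case falls into;
    an empty list means section 8 poses no bar."""
    return sorted(categories & PROHIBITED_CATEGORIES)

assert screen_use_case({"social scoring", "document translation"}) == [
    "social scoring"]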

SEC. 9. AGENCY PROCUREMENT INNOVATION LABS.

    (a) In General.--Each agency identified in section 901(b) of title 
31, United States Code, that does not have a Procurement Innovation Lab on 
the date of enactment of this Act should consider establishing a lab or 
similar mechanism to test new approaches, share lessons learned, and 
promote best practices in procurement, including for commercial 
technology, such as artificial intelligence, that is trustworthy and 
best-suited for the needs of the agency.
    (b) Functions.--The functions of the Procurement Innovation Lab or 
similar mechanism should include--
            (1) providing leadership support as well as capability and 
        capacity to test, document, and help agency programs adopt new 
        and better practices through all stages of the acquisition 
        lifecycle, beginning with project definition and requirements 
        development;
            (2) providing the workforce of the agency with a clear 
        pathway to test and document new acquisition practices and 
        facilitate fresh perspectives on existing practices;
            (3) helping programs and integrated project teams 
        successfully execute emerging and well-established acquisition 
        practices to achieve better results; and
            (4) promoting meaningful collaboration among offices that 
        are responsible for requirements development, contracting 
        officers, and others, including financial and legal experts, 
        that share in the responsibility for making a successful 
        procurement.
    (c) Structure.--An agency should consider placing the Procurement 
Innovation Lab or similar mechanism as a supporting arm of the Chief 
Acquisition Officer or Senior Procurement Executive of the agency, and 
shall have wide latitude in structuring the Procurement Innovation Lab 
or similar mechanism and in addressing associated personnel staffing 
issues.

SEC. 10. MULTI-PHASE COMMERCIAL TECHNOLOGY TEST PROGRAM.

    (a) Test Program.--The head of an agency may, if desired, procure 
commercial technology through a multi-phase test program of contracts 
in accordance with this section.
    (b) Purpose.--A test program established under this section shall--
            (1) provide a means by which an agency may post a 
        solicitation, including for a general need or area of interest, 
        for which the agency intends to explore commercial technology 
        solutions and for which an offeror may submit a bid based on 
        existing commercial capabilities of the offeror with minimal 
        modifications or a technology that the offeror is developing 
        for commercial purposes; and
            (2) use phases, as described in subsection (c), to minimize 
        government risk and incentivize competition.
    (c) Contracting Procedures.--Under a test program established under 
this section, the head of an agency may acquire commercial technology 
through a competitive evaluation of proposals resulting from general 
solicitation in the following phases:
            (1) Phase 1 (viability of potential solution).--Selectees 
        may be awarded a portion of the total contract award and have a 
        period of performance of not longer than 1 year to prove the 
        merits, feasibility, and technological benefit the proposal 
        would achieve for the agency.
            (2) Phase 2 (major details and scaled test).--Selectees may 
        be awarded a portion of the total contract award and have a 
        period of performance of not longer than 1 year to create a 
        detailed timeline, establish an agreeable intellectual property 
        ownership agreement, and implement the proposal on a small 
        scale.
            (3) Phase 3 (implementation or recycle).--
                    (A) In general.--Following successful performance 
                on phases 1 and 2, selectees may be awarded up to the 
                full remainder of the total contract award to implement 
                the proposal, depending on the agreed upon costs and 
                the number of contractors selected.
                    (B) Failure to find suitable selectees.--If no 
                selectees are found suitable for phase 3, the agency 
                head may determine not to make any selections for phase 
                3, terminate the solicitation, and utilize any remaining 
                funds to issue a modified general solicitation for the 
                same area of interest.
    (d) Treatment as Competitive Procedures.--The use of general 
solicitation competitive procedures for a test program under this 
section shall be considered to be use of competitive procedures as 
defined in section 152 of title 41, United States Code.
    (e) Limitation.--The head of an agency shall not enter into a 
contract under the test program for an amount in excess of $25,000,000.
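
    Read together, subsections (c) and (e) describe a three-phase funding 
pipeline: partial awards with performance periods of not longer than one 
year in phases 1 and 2, up to the full remainder in phase 3, all under a 
$25,000,000 cap. The split across phases below is invented for 
illustration; the section leaves those portions to the agency.

CONTRACT_CAP = 25_000_000  # subsection (e): no contract may exceed this

# Hypothetical split of the total award across phases; the Act says only
# that phases 1 and 2 receive "a portion" and phase 3 "up to the full
# remainder".
PHASE_PORTIONS = {1: 0.10, 2: 0.25, 3: 0.65}

def phase_award(total_award: int, phase: int) -> int:
    """Portion of the total contract award released at a given phase."""
    if total_award > CONTRACT_CAP:
        raise ValueError("exceeds the $25,000,000 limit in subsection (e)")
    return int(total_award * PHASE_PORTIONS[phase])

# Example: a $10M award releases $1M at phase 1 and $2.5M at phase 2.
assert phase_award(10_000_000, 1) == 1_000_000
assert phase_award(10_000_000, 2) == 2_500_000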
    (f) Guidance.--
            (1) Federal acquisition regulatory council.--The Federal 
        Acquisition Regulatory Council shall revise the Federal 
        Acquisition Regulation as necessary to implement this section, 
        including requirements for each general solicitation under a 
        test program to be made publicly available through a means that 
        provides access to the notice of the general solicitation 
        through the System for Award Management or subsequent 
        government-wide point of entry, with classified solicitations 
        posted to the appropriate government portal.
            (2) Agency procedures.--The head of an agency may not award 
        contracts under a test program until the agency issues guidance 
        with procedures for use of the authority. The guidance shall be 
        issued in consultation with the relevant Acquisition Regulatory 
        Council and shall be publicly available.
    (g) Sunset.--The authority for a test program under this section 
shall terminate on the date that is 5 years after the date the Federal 
Acquisition Regulation is revised pursuant to subsection (f)(1) to 
implement the program.

SEC. 11. RESEARCH AND DEVELOPMENT PROJECT PILOT PROGRAM.

    (a) Pilot Program.--The head of an agency may, if desired, carry 
out research and prototype projects in accordance with this section.
    (b) Purpose.--A pilot program established under this section shall 
provide a means by which an agency may--
            (1) carry out basic, applied, and advanced research and 
        development projects; and
            (2) carry out prototype projects that address--
                    (A) a proof of concept, model, or process, 
                including a business process;
                    (B) reverse engineering to address obsolescence;
                    (C) a pilot or novel application of commercial 
                technologies for agency mission purposes;
                    (D) agile development activity;
                    (E) the creation, design, development, or 
                demonstration of operational utility; or
                    (F) any combination of items described in 
                subparagraphs (A) through (E).
    (c) Contracting Procedures.--Under a pilot program established 
under this section, the head of an agency may carry out research and 
prototype projects--
            (1) using small businesses to the maximum extent 
        practicable;
            (2) using cost sharing arrangements where practicable;
            (3) tailoring intellectual property terms and conditions 
        relevant to the project and commercialization opportunities; 
        and
            (4) ensuring that such projects do not duplicate research 
        being conducted under existing agency programs.
    (d) Treatment as Competitive Procedures.--The use of research and 
development contracting procedures under this section shall be 
considered to be use of competitive procedures, as defined in section 
152 of title 41, United States Code.
    (e) Treatment as Commercial Technology.--The use of research and 
development contracting procedures under this section shall be 
considered to be use of commercial technology.
    (f) Follow-on Projects or Phases.--A follow-on contract provided 
for in a contract opportunity announced under this section may, at the 
discretion of the head of the agency, be awarded to a participant in 
the original project or phase if the original project or phase was 
successfully completed.
    (g) Limitation.--The head of an agency shall not enter into a 
contract under the pilot program under this section for an amount in 
excess of $10,000,000.
    (h) Guidance.--
            (1) Federal acquisition regulatory council.--The Federal 
        Acquisition Regulatory Council shall revise the Federal 
        Acquisition Regulation research and development contracting 
        procedures as necessary to implement this section, including 
        requirements for each research and development project under a 
        pilot program to be made publicly available through a means 
        that provides access to the notice of the opportunity through 
        the System for Award Management or subsequent government-wide 
        point of entry, with classified solicitations posted to the 
        appropriate government portal.
            (2) Agency procedures.--The head of an agency may not award 
        contracts under a pilot program until the agency, in 
        consultation with the relevant Acquisition Regulatory Council, 
        issues and makes publicly available guidance on procedures for 
        use of the authority.
    (i) Reporting.--Contract actions entered into under this section 
shall be reported to the Federal Procurement Data System, or any 
successor system.
    (j) Sunset.--The authority for a pilot program under this section 
shall terminate on the date that is 5 years after the date the Federal 
Acquisition Regulation is revised pursuant to subsection (h)(1) to 
implement the program.

SEC. 12. DEVELOPMENT OF TOOLS AND GUIDANCE FOR TESTING AND EVALUATING 
              ARTIFICIAL INTELLIGENCE.

    (a) Agency Report Requirements.--In a manner specified by the 
Director, the Chief Artificial Intelligence Officer of each agency 
shall identify, and annually submit to the Council a report on, obstacles 
encountered in the testing and evaluation of artificial intelligence, 
specifying--
            (1) the nature of the obstacles;
            (2) the impact of the obstacles on agency operations, 
        mission achievement, and artificial intelligence adoption;
            (3) recommendations for addressing the identified 
        obstacles, including the need for particular resources or 
        guidance to address certain obstacles; and
            (4) a timeline that would be needed to implement proposed 
        solutions.
    (b) Council Review and Collaboration.--
            (1) Annual review.--Not less frequently than annually, the 
        Council shall conduct a review of agency reports under 
        subsection (a) to identify common challenges and opportunities 
        for cross-agency collaboration.
            (2) Development of tools and guidance.--
                    (A) In general.--Not later than 2 years after the 
                date of enactment of this Act, the Director, in 
                consultation with the Council, shall convene a working 
                group to--
                            (i) develop tools and guidance to assist 
                        agencies in addressing the obstacles that 
                        agencies identify in the reports under 
                        subsection (a);
                            (ii) support interagency coordination to 
                        facilitate the identification and use of 
                        relevant voluntary standards, guidelines, and 
                        other consensus-based approaches for testing 
                        and evaluation and other relevant areas; and
                            (iii) address any additional matters 
                        determined appropriate by the Director.
                    (B) Working group membership.--The working group 
                described in subparagraph (A) shall include Federal 
                interdisciplinary personnel, such as technologists, 
                information security and cybersecurity personnel, 
                domain experts, privacy officers, data officers, civil 
                rights and civil liberties officers, contracting 
                officials, legal counsel, customer experience 
                professionals, program evaluation officers, and others, 
                as determined by the Director.
            (3) Information sharing.--The Director, in consultation 
        with the Council, shall establish a mechanism for sharing tools 
        and guidance developed under paragraph (2) across agencies.
    (c) Congressional Reporting.--
            (1) In general.--Each agency shall submit the annual report 
        under subsection (a) to the relevant congressional committees.
            (2) Consolidated report.--The Director, in consultation 
        with the Council, may suspend the requirement under paragraph 
        (1) and submit to the relevant congressional committees a 
        consolidated report that conveys government-wide testing and 
        evaluation challenges, recommended solutions, and progress 
        toward implementing recommendations from prior reports 
        developed in fulfillment of this subsection.
    (d) Extremely Low Risk Artificial Intelligence Use Cases.--Not 
later than 2 years after the date of enactment of this Act, the Chief 
Artificial Intelligence Officers Council shall submit to the Director 
and the relevant congressional committees a report outlining--
            (1) a proposed framework for identifying extremely low risk 
        artificial intelligence use cases; and
            (2) opportunities to facilitate the deployment and use of 
        extremely low risk artificial intelligence.
    (e) Sunset.--The requirements under this section shall terminate on 
the date that is 10 years after the date of enactment of this Act.

SEC. 13. UPDATES TO ARTIFICIAL INTELLIGENCE USE CASE INVENTORIES.

    (a) Amendments.--
            (1) Advancing american ai act.--The Advancing American AI 
        Act (Public Law 117-263; 40 U.S.C. 11301 note) is amended--
                    (A) in section 7223(3), by striking the period and 
                inserting ``and in section 5002 of the National 
                Artificial Intelligence Initiative Act of 2020 (15 
                U.S.C. 9401).''; and
                    (B) in section 7225, by striking subsection (d).
            (2) Executive order 13960.--The provisions of section 5 of 
        Executive Order 13960 (85 Fed. Reg. 78939; relating to 
        promoting the use of trustworthy artificial intelligence in 
        Federal Government) that exempt classified and sensitive use 
        cases from agency inventories of artificial intelligence use 
        cases shall cease to have legal effect.
    (b) Disclosure.--
            (1) In general.--The artificial intelligence inventory of 
        each agency shall publicly disclose, subject to applicable laws 
        and policies relating to the protection of privacy and 
        classified and sensitive information--
                    (A) whether artificial intelligence was developed 
                internally by the agency or procured externally, 
                without excluding any use case on the basis that the use 
                case is ``sensitive'' solely because it was externally 
                procured;
                    (B) data provenance information for high risk 
                artificial intelligence use cases to identify the types 
                of sources of the training data of the artificial 
                intelligence, including internal government data, 
                public data, commercially held data, or similar data;
                    (C) the level of risk at which the agency has 
                classified the artificial intelligence use case and a 
                brief explanation for how the determination was made; 
                and
                    (D) the number of artificial intelligence use cases 
                excluded from public reporting as being classified or 
                ``sensitive'', and an unclassified summary of each of 
                these use cases.
            (2) Updates.--
                    (A) In general.--When an agency updates the public 
                artificial intelligence use case inventory of the 
                agency, the agency shall disclose the date of the 
                modification and make change logs publicly available 
                and accessible.
                    (B) Guidance.--The Director shall issue guidance to 
                agencies that describes how to appropriately update 
                artificial intelligence use case inventories and 
                clarifies how sub-agencies and regulatory agencies 
                should participate in the artificial intelligence use 
                case inventorying process.
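
    Subsection (b) effectively specifies a public inventory record: the 
provenance of the system (paragraph (1)(A)), training-data provenance for 
high risk use cases ((1)(B)), the risk classification with a brief 
rationale ((1)(C)), plus dated, public change logs on every update 
((2)(A)). One hypothetical way to model such an entry:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class InventoryEntry:
    """Public inventory record reflecting subsection (b)(1) (illustrative)."""
    use_case: str
    developed_internally: bool   # (1)(A): internal vs. externally procured
    data_provenance: list        # (1)(B): e.g. government, public, commercial
    risk_level: str              # (1)(C): classification, plus...
    risk_rationale: str          #         ...a brief explanation
    change_log: list = field(default_factory=list)

    def update(self, when: date, note: str) -> None:
        """Record a dated modification, per paragraph (2)(A)."""
        self.change_log.append((when, note))

# Example: logging a reclassification after operational testing.
entry = InventoryEntry("chatbot", False, ["public data"],
                       "not high risk", "no principal-basis decisions")
entry.update(date(2026, 1, 15), "reclassified after operational testing")
assert entry.change_log[0][0] == date(2026, 1, 15)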
    (c) Congressional Reporting.--The head of each agency shall, upon 
request, submit to the relevant congressional committees a copy of the 
annual artificial intelligence use case inventory of the agency, 
including--
            (1) the use cases that have been identified as 
        ``sensitive'' and not for public disclosure; and
            (2) a classified annex of classified use cases.
    (d) Comptroller General.--
            (1) Reports required.--
                    (A) Appropriate classification.--Not later than 1 
                year after the date of enactment of this Act, and 
                annually thereafter for a period of 5 years, the 
                Comptroller General of the United States shall submit 
                to relevant congressional committees a report on 
                whether agencies are appropriately classifying use 
                cases.
                    (B) Government trends.--Beginning 2 years after the 
                date of enactment of this Act, and annually thereafter, 
                the Comptroller General of the United States shall 
                issue a report, based on the artificial intelligence 
                use cases reported in use case inventories and other 
                relevant information, that describes trends in the use 
                of artificial intelligence by agencies and the impact 
                of--
                            (i) such use on the Federal workforce and 
                        any cost savings; and
                            (ii) this Act on Federal contractors that 
                        are small business concerns, including--
                                    (I) small business concerns owned 
                                and controlled by service-disabled 
                                veterans (as defined in section 3 of 
                                the Small Business Act (15 U.S.C. 
                                632));
                                    (II) qualified HUBZone small 
                                business concerns (as defined in 
                                section 31(b) of the Small Business Act 
                                (15 U.S.C. 657(b)(1)));
                                    (III) socially and economically 
                                disadvantaged small business concerns 
                                (as defined in section 8(a)(4) of the 
                                Small Business Act (15 U.S.C. 
                                637(a)(4))); and
                                    (IV) small business concerns owned 
                                and controlled by women (as defined in 
                                section 3 of the Small Business Act (15 
                                U.S.C. 632)).
            (2) Appropriate classification.--The Comptroller General of 
        the United States shall determine whether the appropriate level 
        of disclosure of artificial intelligence use cases by agencies 
        should be included on the High Risk List of the Government 
        Accountability Office.