December 9, 2019 - Issue: Vol. 165, No. 196 — Daily Edition, 116th Congress (2019-2020) - 1st Session
IDENTIFYING OUTPUTS OF GENERATIVE ADVERSARIAL NETWORKS ACT; Congressional Record Vol. 165, No. 196
(House of Representatives - December 09, 2019)
[Pages H9363-H9364]
From the Congressional Record Online through the Government Publishing Office [www.gpo.gov]

       IDENTIFYING OUTPUTS OF GENERATIVE ADVERSARIAL NETWORKS ACT

  Ms. JOHNSON of Texas. Mr. Speaker, I move to suspend the rules and pass the bill (H.R. 4355) to direct the Director of the National Science Foundation to support research on the outputs that may be generated by generative adversarial networks, otherwise known as deepfakes, and other comparable techniques that may be developed in the future, and for other purposes, as amended.
  The Clerk read the title of the bill.
  The text of the bill is as follows:

                               H.R. 4355

  Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,

  SECTION 1. SHORT TITLE.

  This Act may be cited as the ``Identifying Outputs of Generative Adversarial Networks Act'' or the ``IOGAN Act''.

  SEC. 2. FINDINGS.

  Congress finds the following:
  (1) Research gaps currently exist on the underlying technology needed to develop tools to identify authentic videos, voice reproduction, or photos from manipulated or synthesized content, including those generated by generative adversarial networks.
  (2) The National Science Foundation's focus to support research in artificial intelligence through computer and information science and engineering, cognitive science and psychology, economics and game theory, control theory, linguistics, mathematics, and philosophy, is building a better understanding of how new technologies are shaping the society and economy of the United States.
  (3) The National Science Foundation has identified the ``10 Big Ideas for NSF Future Investment'', including ``Harnessing the Data Revolution'' and the ``Future of Work at the Human-Technology Frontier'', in which artificial intelligence is a critical component.
  (4) The outputs generated by generative adversarial networks should be included under the umbrella of research described in paragraph (3) given the grave national security and societal impact potential of such networks.
  (5) Generative adversarial networks are not likely to be utilized as the sole technique of artificial intelligence or machine learning capable of creating credible deepfakes, and other comparable techniques may be developed in the future to produce similar outputs.

  SEC. 3. NSF SUPPORT OF RESEARCH ON MANIPULATED OR SYNTHESIZED CONTENT AND INFORMATION SECURITY.

  The Director of the National Science Foundation, in consultation with other relevant Federal agencies, shall support merit-reviewed and competitively awarded research on manipulated or synthesized content and information authenticity, which may include--
  (1) fundamental research on digital forensic tools or other technologies for verifying the authenticity of information and detection of manipulated or synthesized content, including content generated by generative adversarial networks;
  (2) fundamental research on technical tools for identifying manipulated or synthesized content, such as watermarking systems for generated media;
  (3) social and behavioral research related to manipulated or synthesized content, including the ethics of the technology and human engagement with the content;
  (4) research on public understanding and awareness of manipulated and synthesized content, including research on best practices for educating the public to discern authenticity of digital content; and
  (5) research awards coordinated with other Federal agencies and programs, including the Networking and Information Technology Research and Development Program, the Defense Advanced Research Projects Agency, and the Intelligence Advanced Research Projects Agency.

  SEC. 4. NIST SUPPORT FOR RESEARCH AND STANDARDS ON GENERATIVE ADVERSARIAL NETWORKS.
  (a) In General.--The Director of the National Institute of Standards and Technology shall support research for the development of measurements and standards necessary to accelerate the development of the technological tools to examine the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content.
  (b) Outreach.--The Director of the National Institute of Standards and Technology shall conduct outreach--
  (1) to receive input from private, public, and academic stakeholders on fundamental measurements and standards research necessary to examine the function and outputs of generative adversarial networks; and
  (2) to consider the feasibility of an ongoing public and private sector engagement to develop voluntary standards for the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content.

  SEC. 5. REPORT ON FEASIBILITY OF PUBLIC-PRIVATE PARTNERSHIP TO DETECT MANIPULATED OR SYNTHESIZED CONTENT.
  Not later than one year after the date of the enactment of this Act, the Director of the National Science Foundation and the Director of the National Institute of Standards and Technology shall jointly submit to the Committee on Science, Space, and Technology of the House of Representatives and the Committee on Commerce, Science, and Transportation of the Senate a report containing--
  (1) the Directors' findings with respect to the feasibility for research opportunities with the private sector, including digital media companies, to detect the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content; and
  (2) any policy recommendations of the Directors that could facilitate and improve communication and coordination between the private sector, the National Science Foundation, and relevant Federal agencies through the implementation of innovative approaches to detect digital content produced by generative adversarial networks or other technologies that synthesize or manipulate content.

  SEC. 6. GENERATIVE ADVERSARIAL NETWORK DEFINED.

  In this Act, the term ``generative adversarial network'' means, with respect to artificial intelligence, the machine learning process of attempting to cause a generator artificial neural network (referred to in this paragraph as the ``generator'') and a discriminator artificial neural network (referred to in this paragraph as a ``discriminator'') to compete against each other to become more accurate in their function and outputs, through which the generator and discriminator create a feedback loop, causing the generator to produce increasingly higher-quality artificial outputs and the discriminator to increasingly improve in detecting such artificial outputs.

  The SPEAKER pro tempore. Pursuant to the rule, the gentlewoman from Texas (Ms. Johnson) and the gentleman from Oklahoma (Mr. Lucas) each will control 20 minutes.
  The Chair recognizes the gentlewoman from Texas.

                             General Leave

  Ms.
JOHNSON of Texas. Mr. Speaker, I ask unanimous consent that all Members may have 5 legislative days within which to revise and extend their remarks and include extraneous material on H.R. 4355, the bill under consideration.
  The SPEAKER pro tempore. Is there objection to the request of the gentlewoman from Texas?
  There was no objection.
  Ms. JOHNSON of Texas. Mr. Speaker, I yield myself such time as I may consume.
  Mr. Speaker, I rise today in support of H.R. 4355, the Identifying Outputs of Generative Adversarial Networks Act. Deepfake technology, which manipulates photos, videos, or audio clips to produce content that seems real but is not, has become increasingly commonplace in recent years. This increase in prevalence has been spurred, in part, by increases in computing power, widespread availability of images and other data, and the use of artificial intelligence.
  In many cases, the applications of this technology may be benign, but bad actors can also use this technology to spread disinformation and cause great harm to individuals, organizations, and society as a whole. During the Science, Space, and Technology Committee hearing on online imposters and disinformation earlier this year, one of the witnesses showed us a demonstration of a deepfake video in which he swapped the likenesses of two Members of Congress at the hearing.
  Despite the spread and potential harm of deepfake technology, there are currently no sure-fire methods of identifying and distinguishing manipulated content from authentic content. The ability to differentiate between manipulated and authentic content is essential to maintaining our national and economic security and protecting against malicious use of these technologies.
  H.R.
4355 leverages the strengths of the National Science Foundation and the National Institute of Standards and Technology by directing these agencies to support research on manipulated or synthesized content in order to help develop the standards and other tools necessary to detect this content.
  I commend my colleagues Representatives Gonzalez, Stevens, and Baird for their excellent leadership on this bipartisan legislation. I urge all of my colleagues to join in passing this bill.
  Mr. Speaker, I reserve the balance of my time.
  Mr. LUCAS. Mr. Speaker, I yield myself such time as I may consume.
  Mr. Speaker, I rise in support of H.R. 4355, the Identifying Outputs of Generative Adversarial Networks Act, introduced by Representative Anthony Gonzalez.
  This bill addresses the underlying technologies for digital content commonly referred to as ``deepfakes.'' This technology uses machine learning to manipulate videos and other digital content to produce misleading and false products. These technologies are becoming more sophisticated and, in the wrong hands, present a serious security threat. As we know, bad actors are already using disinformation to disrupt civil society and try to sow divisions among Americans.
  H.R. 4355 supports the fundamental research necessary to better understand the underlying technology, to develop tools to identify manipulated content, and to better understand how humans interact with this generated content. The bill also tasks the National Institute of Standards and Technology with bringing together the private sector and government agencies to discuss how to advance innovation in this area responsibly.
  I applaud Mr. Gonzalez' bipartisan work on this bill and his leadership on the issue of technology and security. I thank the chairwoman and her staff for moving H.R. 4355 forward. There is a lot of fundamental research that needs to be done to better understand the technologies driving deepfakes and their impact on society. H.R.
4355 will help support that research.
  Mr. Speaker, I urge my colleagues to support the bill, and I yield back the balance of my time.
  Ms. JOHNSON of Texas. Mr. Speaker, I would like to express my appreciation for all the Members who have been working on this very important bipartisan legislation. I urge its passage, and I yield back the balance of my time.

                              {time}  1545

  The SPEAKER pro tempore. The question is on the motion offered by the gentlewoman from Texas (Ms. Johnson) that the House suspend the rules and pass the bill, H.R. 4355, as amended.
  The question was taken; and (two-thirds being in the affirmative) the rules were suspended and the bill, as amended, was passed.
  A motion to reconsider was laid on the table.

                          ____________________
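[Editor's note, not part of the Record: the statutory definition in section 6 describes the standard generator-discriminator feedback loop of a GAN. The sketch below illustrates that loop in miniature; the one-dimensional "authentic" data, the affine generator, and the logistic discriminator are simplifying assumptions chosen for readability, not anything specified by the Act.]

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Authentic" content: samples from a 1-D Gaussian, N(4, 1).
    return rng.normal(4.0, 1.0, n)

# Generator: affine map of noise, g(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic score, D(x) = sigmoid(w*x + c),
# trained to output high values on authentic data, low on generated data.
w, c = 0.0, 0.0
lr, n = 0.05, 64

for step in range(2000):
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = real_batch(n)

    # Discriminator update: descend the cross-entropy loss
    # -log D(real) - log(1 - D(fake))  (gradients written out by hand).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: descend the non-saturating loss -log D(fake),
    # i.e. try to make the discriminator accept generated samples.
    d_fake = sigmoid(w * fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After the feedback loop, generated samples should drift toward the
# authentic distribution (mean near 4).
samples = a * rng.normal(0.0, 1.0, 1000) + b
print("generator output mean after training:", samples.mean())
```

Each round of the loop is the statute's "feedback loop": the discriminator improves at flagging artificial outputs, which in turn supplies the gradient signal that pushes the generator toward more credible outputs.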