Planning for New and Emerging Risks: The White House Report on Artificial Intelligence, Automation, and the Economy


On December 20, 2016, in order to ready the United States for a future in which artificial intelligence (AI) plays a growing role, the White House released a report on Artificial Intelligence, Automation, and the Economy. This report follows up on the Administration’s previous report, Preparing for the Future of Artificial Intelligence, which was released in October 2016, and which recommended that the White House publish a report on the economic impacts of artificial intelligence by the end of 2016.

Artificial intelligence creates an immediate need for organizations to review the threats and risks associated with new developments, from driverless public transportation to the evolving threats posed by AI-enabled weapons systems. As the report puts it:

“AI holds the potential to be a major driver of economic growth and social progress, if industry, civil society, government, and the public work together to support development of the technology with thoughtful attention to its potential and to managing its risks.”

White House Report on Artificial Intelligence

As detailed by Kristin Lee, Communications Director and Senior Policy Advisor for the White House Office of Science and Technology Policy:

Accelerating AI capabilities will enable automation of some tasks that have long required human labor. These transformations will open up new opportunities for individuals, the economy, and society, but they will also disrupt the current livelihoods of millions of Americans. The new report examines the expected impact of AI-driven automation on the economy, and describes broad strategies that could increase the benefits of AI and mitigate its costs.

AI-driven automation will transform the economy over the coming years and decades. The challenge for policymakers will be to update, strengthen, and adapt policies to respond to the economic effects of AI.

Although it is difficult to predict these economic effects precisely, the report suggests that policymakers should prepare for five primary economic effects:

  • Positive contributions to aggregate productivity growth;
  • Changes in the skills demanded by the job market, including greater demand for higher-level technical skills;
  • Uneven distribution of impact, across sectors, wage levels, education levels, job types, and locations;
  • Churning of the job market as some jobs disappear while others are created; and
  • The loss of jobs for some workers in the short-run, and possibly longer depending on policy responses.

EasyMile driverless bus

There is substantial uncertainty about how strongly these effects will be felt and how rapidly they will arrive. It is possible that AI will not have large, new effects on the economy, such that the coming years are subject to the same basic workforce trends seen in recent decades—some of which are positive, and others which are worrisome and may require policy changes. At the other end of the range of possibilities, the economy might experience a larger shock, with accelerating changes in the job market, and significantly more workers in need of assistance and retraining as their skills no longer match the demands of the job market. Given available evidence, it is not possible to make specific predictions, so policymakers must be prepared for a range of potential outcomes. At a minimum, some occupations such as drivers and cashiers are likely to face displacement from or a restructuring of their current jobs.

Because the effects of AI-driven automation will be felt across the whole economy, and the areas of greatest impact may be difficult to predict, policy responses must be targeted to the whole economy. In addition, the economic effects of AI-driven automation may be difficult to separate from those of other factors such as other forms of technological change, globalization, reduction in market competition and worker bargaining power, and the effects of past public policy choices. Even if it is not possible to determine how much of the current transformation of the economy is caused by each of these factors, the policy challenges raised by the disruptions remain, and require a broad policy response.

In the cases where it is possible to direct mitigations to particularly affected places and sectors, those approaches should be pursued. But more generally, the report suggests three broad strategies for addressing the impacts of AI-driven automation across the whole U.S. economy:

  1. Invest in and develop AI for its many benefits;
  2. Educate and train Americans for jobs of the future; and
  3. Aid workers in the transition and empower workers to ensure broadly shared growth.

The report details what can be done to execute on these strategies. Continued engagement between government, industry, technical and policy experts, and the public should play an important role in moving the Nation toward policies that create broadly shared prosperity, unlock the creative potential of American companies and workers, advance diversity and inclusion of the technical community in AI, and ensure the Nation’s continued leadership in the creation and use of AI.

Beyond this report, more work remains to further explore the policy implications of AI. Most notably, AI creates important opportunities in cyberdefense and can improve systems that detect fraudulent transactions and messages.
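
Fraud detection is a concrete illustration of that opportunity. The sketch below is not from the report; it shows one common approach, unsupervised anomaly detection over transaction records, using scikit-learn's IsolationForest on synthetic data. The feature choices (amount, hour of day, merchant risk score) and the contamination setting are assumptions made purely for illustration.

```python
# Illustrative sketch only: flagging anomalous transactions with an unsupervised model.
# Features and thresholds are assumptions, not anything specified in the White House report.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transaction features: [amount, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 1000),   # typical purchase amounts
    rng.integers(8, 22, 1000),       # daytime activity
    rng.uniform(0.0, 0.3, 1000),     # low-risk merchants
])
suspicious = np.column_stack([
    rng.lognormal(7.0, 0.4, 20),     # unusually large amounts
    rng.integers(0, 5, 20),          # late-night activity
    rng.uniform(0.6, 1.0, 20),       # high-risk merchants
])

# Fit on ordinary traffic, then score new transactions; -1 marks a likely anomaly.
model = IsolationForest(contamination=0.02, random_state=0)
model.fit(normal)
flags = model.predict(suspicious)
print(f"{(flags == -1).sum()} of {len(suspicious)} suspicious transactions flagged for review")
```

In a real deployment, transactions flagged this way would typically be routed to human reviewers rather than blocked automatically.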

Recommendations in this Report

Recommendation 1: Private and public institutions are encouraged to examine whether and how they can responsibly leverage AI and machine learning in ways that will benefit society.

Social justice and public policy institutions that do not typically engage with advanced technologies and data science in their work should consider partnerships with AI researchers and practitioners that can help apply AI tactics to the broad social problems these institutions already address in other ways.

Recommendation 2: Federal agencies should prioritize open training data and open data standards in AI.

The government should emphasize the release of datasets that enable the use of AI to address social challenges. Potential steps may include developing an “Open Data for AI” initiative with the objective of releasing a significant number of government data sets to accelerate AI research and galvanize the use of open data standards and best practices across government, academia, and the private sector.

Recommendation 3: The Federal Government should explore ways to improve the capacity of key agencies to apply AI to their missions.

For example, Federal agencies should explore the potential to create DARPA-like organizations to support high-risk, high-reward AI research and its application, much as the Department of Education has done through its proposal to create an “ARPA-ED,” to support R&D to determine whether AI and other technologies could significantly improve student learning outcomes.

Recommendation 4: The NSTC MLAI subcommittee should develop a community of practice for AI practitioners across government.

Agencies should work together to develop and share standards and best practices around the use of AI in government operations. Agencies should ensure that Federal employee training programs include relevant AI opportunities.

Recommendation 5: Agencies should draw on appropriate technical expertise at the senior level when setting regulatory policy for AI-enabled products.

Effective regulation of AI-enabled products requires collaboration between agency leadership, staff knowledgeable about the existing regulatory framework and regulatory practices generally, and technical experts with knowledge of AI. Agency leadership should take steps to recruit the necessary technical talent, or identify it in existing agency staff, and should ensure that there are sufficient technical “seats at the table” in regulatory policy discussions.

Recommendation 6: Agencies should use the full range of personnel assignment and exchange models (e.g., hiring authorities) to foster a Federal workforce with more diverse perspectives on the current state of technology.

Recommendation 7: The Department of Transportation should work with industry and researchers on ways to increase sharing of data for safety, research, and other purposes.

The future roles of AI in surface and air transportation are undeniable. Accordingly, Federal actors should focus in the near term on developing increasingly rich sets of data, consistent with consumer privacy, that can better inform policy-making as these technologies mature.

Recommendation 8: The U.S. Government should invest in developing and implementing an advanced and automated air traffic management system that is highly scalable, and can fully accommodate autonomous and piloted aircraft alike.

Recommendation 9: The Department of Transportation should continue to develop an evolving framework for regulation to enable the safe integration of fully automated vehicles and unmanned aircraft systems (UAS), including novel vehicle designs, into the transportation system.

Recommendation 10: The NSTC Subcommittee on Machine Learning and Artificial Intelligence should monitor developments in AI, and report regularly to senior Administration leadership about the status of AI, especially with regard to milestones.

The Subcommittee should update the list of milestones as knowledge advances and the consensus of experts changes over time. The Subcommittee should consider reporting to the public on AI developments, when appropriate.

Recommendation 11: The Government should monitor the state of AI in other countries, especially with respect to milestones.

Recommendation 12: Industry should work with government to keep government updated on the general progress of AI in industry, including the likelihood of milestones being reached soon.

Recommendation 13: The Federal government should prioritize basic and long-term AI research.

The Nation as a whole would benefit from a steady increase in Federal and private-sector AI R&D, with a particular emphasis on basic research and long-term, high-risk research initiatives. Because basic and long-term research especially are areas where the private sector is not likely to invest, Federal investments will be important for R&D in these areas.

Recommendation 14: The NSTC Subcommittees on MLAI and NITRD, in conjunction with the NSTC Committee on Science, Technology, Engineering, and Education (CoSTEM), should initiate a study on the AI workforce pipeline in order to develop actions that ensure an appropriate increase in the size, quality, and diversity of the workforce, including AI researchers, specialists, and users.

Recommendation 15: The Executive Office of the President should publish a follow-on report by the end of this year, to further investigate the effects of AI and automation on the U.S. job market, and outline recommended policy responses.

Recommendation 16: Federal agencies that use AI-based systems to make or provide decision support for consequential decisions about individuals should take extra care to ensure the efficacy and fairness of those systems, based on evidence-based verification and validation.
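
The recommendation does not prescribe specific tests, but one simple check that often feeds this kind of evidence-based validation is comparing a system's selection rates across demographic groups. The sketch below is a hypothetical illustration; the group labels, decisions, and the single "gap" statistic are assumptions, not agency requirements.

```python
# Hypothetical sketch of one fairness check: selection-rate comparison across groups.
# The group labels and decisions below are made up for illustration.
from collections import defaultdict

decisions = [  # (group, approved) pairs from an imagined decision-support system
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates, f"selection-rate gap = {gap:.2f}")

# A large gap does not by itself prove the system is unfair, but it is the kind of
# measurable signal that can trigger closer verification and validation.
```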

Recommendation 17: Federal agencies that make grants to state and local governments in support of the use of AI-based systems to make consequential decisions about individuals should review the terms of grants to ensure that AI-based products or services purchased with Federal grant funds produce results in a sufficiently transparent fashion and are supported by evidence of efficacy and fairness.

Recommendation 18: Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.

Recommendation 19: AI professionals, safety professionals, and their professional societies should work together to continue progress toward a mature field of AI safety engineering.

Recommendation 20: The U.S. Government should develop a government-wide strategy on international engagement related to AI, and develop a list of AI topical areas that need international engagement and monitoring.

Recommendation 21: The U.S. Government should deepen its engagement with key international stakeholders, including foreign governments, international organizations, industry, academia, and others, to exchange information and facilitate collaboration on AI R&D.

Recommendation 22: Agencies’ plans and strategies should account for the influence of AI on cybersecurity, and of cybersecurity on AI.

Agencies involved in AI issues should engage their U.S. Government and private-sector cybersecurity colleagues for input on how to ensure that AI systems and ecosystems are secure and resilient to intelligent adversaries. Agencies involved in cybersecurity issues should engage their U.S. Government and private-sector AI colleagues for innovative ways to apply AI for effective and efficient cybersecurity.

Recommendation 23: The U.S. Government should complete the development of a single, government-wide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons.

Related:

Self-driving buses take to roads alongside commuter traffic in Helsinki
Driverless tech at a crossroads