Ryan Calo Presents a Roadmap for Artificial Intelligence Policy

The year is 2017 and talk of artificial intelligence is everywhere. People marvel at the capacity of machines to translate any language and master any game. Others condemn the use of secret algorithms to sentence criminal defendants or recoil at the prospect of machines gunning for blue-, pink-, and white-collar jobs. Some worry aloud that artificial intelligence will be humankind's "final invention."
- Ryan Calo, "Artificial Intelligence Policy: A Roadmap"

In a new article, University of Washington law professor Ryan Calo provides a guide for debating artificial intelligence (AI) policy. "Artificial Intelligence Policy: A Roadmap" is written to help policymakers, investors, technologists, scholars, and students understand the contemporary policy environment around AI. Prepared in connection with the UC Davis Law Review's 50th anniversary symposium, the essay covers issues ranging from justice to privacy to the displacement of labor.

Below are a few selected excerpts from "Artificial Intelligence Policy: A Roadmap."

What Is AI?

There is no straightforward, consensus definition of artificial intelligence. AI is best understood as a set of techniques aimed at approximating some aspect of human or animal cognition using machines.

Much of the contemporary excitement around AI, however, flows from the enormous promise of a particular set of techniques known collectively as machine learning. Machine learning, or ML, refers to the capacity of a system to improve its performance at a task over time. Often this task involves recognizing patterns in datasets, although ML outputs can include everything from translating languages and diagnosing precancerous moles to grasping objects or helping to drive a car.
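To make the idea of "improving performance at a task over time" concrete, here is a minimal sketch, not drawn from Calo's essay: a classifier's accuracy on held-out data typically rises as it is trained on more examples. The dataset, model, and split sizes are illustrative assumptions.

```python
# Illustrative only: an ML system "learns" in the sense that its measured
# performance on a task improves as it sees more training data.
# Dataset, model, and slice sizes are arbitrary choices for this sketch.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Train on progressively larger slices of the training set and watch
# accuracy on unseen examples climb.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> test accuracy {acc:.2f}")
```

The same pattern-recognition machinery, scaled up, underlies the translation, diagnosis, and driving applications mentioned above.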
Artificial Intelligence Policy Challenges

Justice and Equity

Perhaps the most visible and developed area of AI policy to date involves the capacity of algorithms or trained systems to reflect human values such as fairness, accountability, and transparency. ... The topic is potentially quite broad, encompassing both the prospect of bias in AI-enabled features or products and the use of AI in making material decisions regarding financial, health, and even liberty outcomes.

A few examples of AI issues within the justice and equity debate:

- A camera that won't take an Asian American family's picture because the software believes they are blinking. ("Are Face-Detection Cameras Racist?," Time, January 22, 2010)
- A translation engine that associates the role of doctor with being male and the role of nurse with being female; a toy sketch of how such bias can be measured follows this list. ("Semantics Derived Automatically from Language Corpora Contain Human-like Biases," Science, 183-86, April 2017)
- Police use of "heat maps" that purport to predict areas of future criminal activity to determine where to patrol, but that in fact lead to disproportionate harassment of African Americans. ("Predictions Put into Practice: A Quasi-Experimental Evaluation of Chicago's Predictive Policing Pilot," Journal of Experimental Criminology, 347-71, September 2016)
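How does a translation engine come to associate "doctor" with men? Systems like these often rely on word embeddings, vectors learned from large text corpora, and gendered associations in the training text carry over into the geometry of those vectors. The sketch below is a simplified illustration in the spirit of the Science paper cited above, not its exact method; the three-dimensional vectors are invented for demonstration.

```python
# Illustrative sketch of measuring association bias in word embeddings.
# The tiny 3-D vectors below are made up for demonstration; real embeddings
# have hundreds of dimensions learned from billions of words of text.
import numpy as np

vectors = {
    "he":     np.array([0.9, 0.1, 0.2]),
    "she":    np.array([0.1, 0.9, 0.2]),
    "doctor": np.array([0.7, 0.3, 0.5]),
    "nurse":  np.array([0.2, 0.8, 0.5]),
}

def cosine(a, b):
    # Cosine similarity: how closely two word vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A word carries a gender association if it sits closer to "he" than to
# "she" (or vice versa) in the embedding space.
for word in ("doctor", "nurse"):
    bias = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    lean = "male" if bias > 0 else "female"
    print(f"{word}: association score {bias:+.2f} (leans {lean})")
```

Real audits use the same basic idea, comparing similarities to gendered anchor words, only at full scale and with statistical controls.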
Use of Force

There is also the question of who bears responsibility for the choices of machines. There may be circumstances in which the automation of weapons is desirable or even inevitable. It seems unlikely, for example, that the United States military would permit its rivals to have faster or more flexible response capabilities than its own, whatever their control mechanism. Regardless, establishing a consensus around meaningful human control would not obviate all inquiry into responsibility in the event of a mistake or war crime. There are uses of AI that presuppose human decision but nevertheless implicate deep questions of policy and ethics.

An example is when the intelligence community leverages algorithms to select targets for remotely operated drone strikes. ("Death by Drone Strike, Dished Out by Algorithm," The Guardian, February 21, 2016)

Safety and Certification

AI systems do more than process information and assist officials in making decisions of consequence. Many systems, such as the software that controls an airplane on autopilot or a fully driverless car, exert direct and physical control over objects in the human environment. Others provide sensitive services that, when performed by people, require training and certification. These applications raise additional questions concerning the standards to which AI systems are held and the procedures and techniques available to ensure those standards are being met.

An example: "When Your Self-Driving Car Crashes, You Could Still Be the One Who Gets Sued" (Quartz, July 25, 2015).

Privacy

Over the past decade, the discourse around privacy has shifted perceptibly. What started out as a conversation about individual control over personal information has evolved into a conversation about the power of information more generally, i.e., the control institutions have over consumers and citizens by virtue of possessing so much information about them. The acceleration of artificial intelligence, which is intimately tied to the availability of data, will play a significant role in this evolving conversation in at least two ways: (1) the problem of pattern recognition and (2) the problem of data parity. Note that unlike some of the policy questions discussed above, which envision the consequential deployment of imperfect AI, the privacy questions that follow assume AI that is performing its assigned tasks only too well.

A few AI issues related to privacy:

- Pattern recognition: with enough data about you and the population at large, firms, governments, and other institutions with access to AI will one day make guesses about you that you cannot imagine - what you like, whom you love, what you have done.
- Expectation of privacy: if everyone in public can be identified through facial recognition, and if the "public" habits of individuals or groups permit AI to derive private facts, then citizens will have little choice but to convey information to a government bent on public surveillance.

Taxation and Displacement of Labor

A common concern, especially in public discourse, is that AI will displace jobs by mastering tasks currently performed by people. The classic example is the truck driver: many have observed that self-driving vehicles could obviate, or at least radically transform, this very common role. Machines have been replacing people since the Industrial Revolution (which was hard enough on society). The difference, many suppose, is twofold: first, the process of automation will be much faster, and second, very few sectors will remain untouched by AI's contemporary and anticipated capabilities. This would widen the populations that could feel AI's impact and limit the efficacy of unemployment benefits or retraining.

Does Artificial Intelligence Present an Existential Threat to Humanity?

My own view is that AI does not present an existential threat to humanity, at least not in anything like the foreseeable future. Further, devoting disproportionate attention and resources to the AI apocalypse has the potential to distract policymakers from addressing AI's more immediate harms and challenges, and could discourage investment in research on AI's present social impacts. How much attention to pay to a remote but dire threat is itself a difficult question of policy.

Conclusion

AI is remaking aspects of society today and is likely to shepherd in much greater changes in the coming years. As this essay has emphasized, the process of societal transformation carries with it many distinct and difficult questions of policy. Overall, however, there is reason for hope. We have certain advantages over our predecessors: the previous industrial revolutions had their lessons, and we have access today to many more policymaking bodies and tools. We have also made interdisciplinary collaboration much more of a standard practice. But perhaps the greatest advantage is timing: AI has managed to capture policymakers' imaginations early enough in its life cycle that there is hope we can yet channel it toward the public interest. I hope this essay contributes in some small way to this process.

Read the full article: "Artificial Intelligence Policy: A Roadmap."

Watch Professor Calo discuss why he wrote this roadmap for debating artificial intelligence policy.

Ryan Calo is the Lane Powell and D. Wayne Gittinger Assistant Professor at the University of Washington School of Law. Additionally, he is a faculty co-director of the University of Washington Tech Policy Lab. Professor Calo specializes in the areas of cyberlaw and robotics, and his research on law and emerging technology appears or is forthcoming in leading law reviews (California Law Review, University of Chicago Law Review, and Columbia Law Review) and technical publications (MIT Press, Nature, Artificial Intelligence).
