Artificial Intelligence Studies: Global Impact & Policy
Most people are not very familiar with the concept of artificial intelligence (AI). When a 2017 survey asked 1,500 senior business leaders in the United States about AI, only 17 percent reported familiarity with it. Many were uncertain what AI is or how it would affect their companies: they recognized significant potential for altering business processes but lacked clarity on how AI could be deployed within their own organizations. A grounding in artificial intelligence studies can bridge this knowledge gap.
Despite this widespread lack of familiarity, AI is a technology fundamentally transforming nearly every aspect of life. It acts as a versatile tool enabling individuals and organizations to rethink how information is integrated, data is analyzed, and resulting insights are leveraged to improve decision-making. This comprehensive overview aims to explain AI to policymakers, opinion leaders, and interested observers, demonstrating how AI is already reshaping the world and raising critical questions for society, the economy, and governance.
This discussion explores novel AI applications across various sectors, including finance, national security, health care, criminal justice, transportation, and smart cities. It addresses pressing issues such as data access challenges, algorithmic bias, AI ethics and transparency, and legal liability for AI-driven decisions. We will contrast the regulatory approaches adopted by the U.S. and the European Union and conclude with a series of recommendations designed to maximize AI benefits while safeguarding essential human values.
To optimize the advantages derived from AI, nine key steps are recommended:
- Encourage broader data access for researchers while protecting users’ personal privacy.
- Increase government funding for unclassified AI research.
- Promote new models for digital education and AI workforce development to equip employees with necessary 21st-century skills.
- Establish a federal AI advisory committee tasked with making policy recommendations.
- Engage with state and local officials to facilitate the enactment of effective policies.
- Regulate broad AI principles rather than focusing on specific algorithms.
- Address bias complaints diligently to prevent AI from perpetuating historical injustice, unfairness, or discrimination embedded in data or algorithms.
- Maintain mechanisms ensuring human oversight and control.
- Implement penalties for malicious AI behavior and enhance cybersecurity measures.
Qualities of Artificial Intelligence
While a universally accepted definition remains elusive, AI is commonly understood to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require a human level of expertise” and assist people in anticipating or addressing issues as they arise. Consequently, they operate in an intentional, intelligent, and adaptive manner. The field of Artificial Intelligence Studies explores these fundamental characteristics.
Intentionality
Artificial intelligence algorithms are specifically designed to make decisions, frequently utilizing real-time data. Unlike passive machines capable only of mechanical or predetermined responses, AI systems, using sensors, digital data, or remote inputs, consolidate information from diverse sources, analyze it instantaneously, and act upon the insights derived from that data. Significant advancements in storage systems, processing speeds, and analytical techniques have endowed them with tremendous sophistication in analysis and decision-making.
Artificial intelligence is already altering the world and raising important questions for society, the economy, and governance.
Intelligence
AI development is generally undertaken in conjunction with machine learning and data analytics. Machine learning processes data to identify underlying trends. When it discovers a pattern relevant to a practical problem, software designers can apply that knowledge to analyze specific issues. All that is required are data sets robust enough for algorithms to discern useful patterns. The data can take many forms, including digital information, satellite imagery, visual information, text, or other unstructured data.
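To make that pattern-finding loop concrete, here is a minimal sketch in Python using scikit-learn: a model is trained on a synthetic data set, discerns a trend, and is then applied to new cases. The data, model choice, and accuracy check are illustrative assumptions, not details from any system described in this report.

```python
# Minimal sketch of machine learning's pattern-finding loop (illustrative only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic stand-in for a "sufficiently robust" data set.
X, y = make_classification(n_samples=1_000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Processing data to identify underlying trends."
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Applying the discovered pattern to new, unseen cases.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```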
Adaptability
AI systems possess the capacity to learn and adapt based on their decisions. In the transportation sector, for instance, semi-autonomous vehicles are equipped with tools that inform drivers and the vehicles themselves about impending congestion, potholes, highway construction, or other potential traffic obstacles. These vehicles can benefit from the collective experience of other vehicles on the road without human intervention, and this accumulated “experience” is immediately and fully transferable to similarly configured vehicles. Their sophisticated algorithms, sensors, and cameras integrate experience from current operations and utilize dashboards and visual displays to present real-time information, enabling human drivers to comprehend ongoing traffic and vehicular conditions. Furthermore, in fully autonomous vehicles, advanced systems can completely control the car or truck and manage all navigation decisions.
[Image: an autonomous vehicle on the road, illustrating AI applications in transportation]
Applications in Diverse Sectors
AI is not merely a concept confined to the future; it is a present reality being integrated and deployed across numerous sectors. These include fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are countless instances where AI is already making a significant impact globally and substantially augmenting human capabilities.
One key driver behind the expanding role of AI is the immense potential it offers for economic development. A project conducted by PriceWaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” This projection includes substantial gains across various regions: $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is advancing rapidly, having set a national objective to invest $150 billion in AI and become the global leader in this domain by 2030.
Meanwhile, a McKinsey Global Institute study focusing on China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” Although the authors noted that China currently lags behind the United States and the United Kingdom in AI deployment, the sheer scale of its AI market provides significant opportunities for pilot testing and future development.
Finance
Investments in financial AI in the United States experienced a threefold increase between 2013 and 2014, reaching a total of $12.2 billion. Experts in this sector note that “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.” Additionally, specialized “robo-advisers” exist that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.” These advancements aim to remove emotional factors from investing, basing decisions solely on analytical considerations, and executing these choices within minutes.
A prominent illustration occurs in stock exchanges, where high-frequency trading executed by machines has largely replaced human decision-making. Humans submit buy and sell orders, and computers match them instantaneously without human intervention. Machines can detect minuscule trading inefficiencies or market differentials and execute trades according to investor instructions. Powered in some instances by quantum computing, these tools possess significantly enhanced capacities for storing information, leveraging “quantum bits” that can store multiple values per location, dramatically increasing storage capacity and reducing processing times.
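A toy order-matching loop illustrates the mechanics described above: orders arrive, and software pairs crossing bids and asks with no human in the loop. Real exchange engines add price-time priority, order types, and microsecond-level optimization; everything below (prices, sizes, class names) is an invented sketch.

```python
# Toy sketch of automated order matching: computers pair incoming buy and
# sell orders without human intervention. Prices and sizes are illustrative.
import heapq

class MatchingEngine:
    def __init__(self):
        self.buys = []   # max-heap via negated price: best (highest) bid first
        self.sells = []  # min-heap: best (lowest) ask first

    def submit(self, side, price, qty):
        book, key = (self.buys, -price) if side == "buy" else (self.sells, price)
        heapq.heappush(book, (key, qty))
        self.match()

    def match(self):
        # Trade whenever the best bid meets or exceeds the best ask.
        while self.buys and self.sells and -self.buys[0][0] >= self.sells[0][0]:
            bid, bqty = heapq.heappop(self.buys)
            ask, aqty = heapq.heappop(self.sells)
            traded = min(bqty, aqty)
            print(f"trade: {traded} @ {ask}")
            if bqty > traded: heapq.heappush(self.buys, (bid, bqty - traded))
            if aqty > traded: heapq.heappush(self.sells, (ask, aqty - traded))

engine = MatchingEngine()
engine.submit("sell", 100.0, 50)
engine.submit("buy", 100.5, 30)   # crosses the ask -> instant match
```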
Fraud detection is another area where AI proves highly beneficial in financial systems. Identifying fraudulent activities within large organizations can be challenging, but AI can pinpoint abnormalities, outliers, or unusual cases requiring further investigation. This capability assists managers in identifying problems early in their cycle, preventing them from escalating to dangerous levels.
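One common way to implement this kind of screening is unsupervised anomaly detection: fit a model to the bulk of transactions and surface the outliers for human review. The sketch below uses scikit-learn’s IsolationForest on invented data; the features and contamination rate are assumptions, not details of any deployed system.

```python
# Hedged sketch of AI-assisted fraud screening: flag outliers in transaction
# data for human investigation. Data and parameters are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 2], scale=[10, 1], size=(500, 2))  # amount, hours since last txn
odd = np.array([[900, 0.1], [700, 0.2]])                        # unusual cases
transactions = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)          # -1 marks an outlier
print("rows flagged for investigation:", np.where(flags == -1)[0])
```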
National Security
AI plays a substantial role in national defense. Through Project Maven, the American military is utilizing AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” According to Deputy Secretary of Defense Patrick Shanahan, the objective of emerging technologies in this area is “to meet our warfighters’ needs and to increase the speed and agility of technology development and procurement.”
Artificial intelligence will accelerate the traditional process of warfare so rapidly that a new term has been coined: hyperwar.
The extensive data analytics associated with AI will profoundly influence intelligence analysis, processing massive amounts of data in near real time—and eventually real time—thereby offering commanders and their staff an unprecedented level of intelligence analysis and productivity. Command and control will be similarly affected as human commanders delegate certain routine, and in specific circumstances, critical decisions to AI platforms, drastically reducing the time required for decisions and subsequent actions. Ultimately, warfare is a time-sensitive process: the side that can decide and act most quickly will generally prevail. Indeed, AI-driven intelligence systems, linked to AI-assisted command and control systems, can elevate decision support and decision-making speeds far beyond those of traditional warfare. This process will be so rapid, especially when combined with automatic decisions to launch artificially intelligent autonomous weapon systems capable of lethal outcomes, that a new term has emerged to describe the accelerated pace of war: hyperwar.
While significant ethical and legal debates surround the potential use of artificially intelligent autonomous lethal systems in the West, China and Russia appear less constrained by this discussion. Therefore, we must anticipate the need to defend against such systems operating at hyperwar speeds. The challenge in the West regarding where to position “humans in the loop” in a hyperwar scenario will ultimately determine the West’s capacity to compete in this new form of conflict.
Just as AI will drastically accelerate the pace of warfare, the proliferation of zero-day or zero-second cyber threats and polymorphic malware will challenge even the most advanced signature-based cyber defenses, necessitating significant enhancements to existing protections. Increasingly, vulnerable systems are migrating, and will need to migrate, to a layered approach to cybersecurity built on cloud-based, cognitive AI platforms. This methodology moves toward a “thinking” defensive capability that can protect networks through continuous training on known threats, including DNA-level analysis of previously unknown code that offers the possibility of recognizing and blocking incoming malicious code by identifying a string component of the file. This technique proved effective in stopping the debilitating “WannaCry” and “Petya” viruses in certain key U.S.-based systems.
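At its simplest, the “string component” idea works like a signature scan: flag any file containing a byte pattern previously associated with malicious code. (The cognitive platforms described above go much further, generalizing to previously unseen code.) The signatures below are fabricated for illustration.

```python
# Toy illustration of signature-style scanning: flag a file if it contains a
# byte string previously associated with malicious code. Signatures are fake.
KNOWN_BAD_STRINGS = [b"\xde\xad\xbe\xef", b"EncryptAllFilesAndDemandRansom"]

def scan(payload: bytes) -> bool:
    """Return True if any known-bad string component appears in the payload."""
    return any(sig in payload for sig in KNOWN_BAD_STRINGS)

print(scan(b"benign header \xde\xad\xbe\xef trailing bytes"))  # True -> block
print(scan(b"ordinary document contents"))                      # False -> allow
```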
Preparing for hyperwar and defending critical cyber networks must be a high priority, given that China, Russia, North Korea, and other nations are investing substantial resources into AI. In 2017, China’s State Council released a plan for the country to “build a domestic industry worth almost $150 billion” by 2030. As an example of potential applications, the Chinese search company Baidu has pioneered a facial recognition application for locating missing individuals. Additionally, cities like Shenzhen are providing up to $1 million in support for AI laboratories. China hopes AI will enhance security, combat terrorism, and improve speech recognition programs. The dual-use nature of many AI algorithms means that AI research initially focused on one sector can be rapidly modified for use in the security sector as well.
Health Care
AI tools are assisting developers in enhancing computational sophistication within health care. For instance, Merantix, a German company, applies deep learning to medical issues. It has a medical imaging application that “detects lymph nodes in the human body in Computer Tomography (CT) images.” According to its creators, the critical steps involve labeling the nodes and identifying small lesions or growths that could indicate problems. Humans can perform this task, but radiologists charge $100 per hour and can meticulously review only about four images per hour. Processing 10,000 images would incur a cost of $250,000, rendering the process prohibitively expensive if solely performed by humans.
Deep learning excels in this scenario by training computers on data sets to differentiate between normal-looking and irregularly appearing lymph nodes. After extensive imaging exercises to refine labeling accuracy, radiological imaging specialists can apply this learned knowledge to actual patients to assess the risk of cancerous lymph nodes. Since only a small percentage are likely to test positive, the challenge is efficiently identifying unhealthy nodes from healthy ones.
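The economics follow directly: 10,000 images at four images per hour and $100 per hour is 2,500 radiologist-hours, or $250,000. A hedged sketch of the triage step is below: a classifier trained to separate normal-looking from irregular nodes ranks new scans so the rare likely positives are reviewed first. The features and labels are synthetic stand-ins; a real system would train a deep network on labeled CT images.

```python
# Illustrative triage sketch: rank scans by modeled risk so radiologists see
# the likeliest problems first. Synthetic features stand in for imaging data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Irregular nodes are rare, so the synthetic classes are heavily imbalanced.
X, y = make_classification(n_samples=2_000, n_features=12,
                           weights=[0.95], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

clf = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
risk = clf.predict_proba(X_te)[:, 1]          # modeled probability of irregularity
top = risk.argsort()[::-1][:10]               # ten highest-risk cases
print("review these cases first:", top)
```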
AI has also been applied to managing congestive heart failure, a condition affecting 10 percent of senior citizens in the United States and costing $35 billion annually. AI tools are valuable because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.”
Criminal Justice
AI is being deployed within the criminal justice system. The city of Chicago has developed an AI-powered “Strategic Subject List” that analyzes individuals who have been arrested to assess their risk of becoming future perpetrators. It ranks 400,000 individuals on a scale from 0 to 500, incorporating factors such as age, criminal history, victimization experience, drug arrest records, and gang affiliation. Analysis of this data revealed that youth is a strong predictor of violence, being a shooting victim correlates with becoming a future perpetrator, gang affiliation has limited predictive value, and drug arrests are not strongly associated with future criminal activity.
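Chicago has not published its actual model, so the sketch below is purely illustrative: it shows only how weighted factors could combine into a 0-to-500 scale, with invented weights that loosely mirror the findings above (youth and prior victimization weigh heavily; gang affiliation and drug arrests add little).

```python
# Illustrative only: a weighted factor score clipped to the 0-500 range.
# All weights are invented for this sketch, not taken from Chicago's system.
def risk_score(age: int, prior_arrests: int, shooting_victim: bool,
               gang_affiliated: bool, drug_arrests: int) -> int:
    score = 0.0
    score += max(0, 30 - age) * 10          # youth is a strong predictor
    score += 120 if shooting_victim else 0  # victimization correlates strongly
    score += prior_arrests * 15
    score += 10 if gang_affiliated else 0   # limited predictive value
    score += drug_arrests * 2               # weakly associated
    return int(min(500, max(0, score)))

print(risk_score(age=19, prior_arrests=3, shooting_victim=True,
                 gang_affiliated=False, drug_arrests=1))  # -> 277
```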
Proponents argue that AI programs reduce human bias in law enforcement and contribute to a fairer sentencing system. R Street Institute Associate Caleb Watney notes:
Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates.
However, critics express concern that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.” The fear is that such tools unfairly target individuals of color and have not successfully helped Chicago reduce the wave of murders it has experienced in recent years.
Despite these concerns, other countries are rapidly advancing deployment in this area. In China, for example, companies already possess “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.” New technologies facilitate the matching of images and voices with other data types and the application of AI to these combined data sets to enhance law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is integrating video images, social media activity, online purchases, travel records, and personal identity information into a “police cloud.” This consolidated database allows authorities to track criminals, potential law-breakers, and terrorists. In essence, China has become the world’s leading AI-powered surveillance state.
Transportation
Transportation is a sector where AI and machine learning are driving major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. These investments encompassed both applications for autonomous driving and the core technologies essential to the sector.
[Image: students learning to code, illustrating AI workforce development]
Autonomous vehicles—including cars, trucks, buses, and drone delivery systems—rely on advanced technological capabilities. These features include automated vehicle guidance and braking systems, lane-changing systems, the use of cameras and sensors for collision avoidance, AI for real-time information analysis, and high-performance computing and deep learning systems that adapt to new circumstances using detailed maps.
Light detection and ranging (LIDAR) systems and AI are crucial for navigation and collision avoidance. LIDAR units, typically mounted on top of the vehicle, emit pulsed laser beams and, operating much like radar, build 360-degree images that measure the speed and distance of surrounding objects. Along with sensors located on the front, sides, and rear of the vehicle, these instruments provide data that helps fast-moving cars and trucks stay in their lane, avoid other vehicles, and apply brakes and steering instantaneously to prevent accidents.
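The core LIDAR measurement is time of flight: a laser pulse travels to an object and back, so distance equals the speed of light times the round-trip time, divided by two. A minimal sketch follows (the 200-nanosecond echo is an invented example value):

```python
# Time-of-flight ranging: distance = (speed of light x round-trip time) / 2.
C = 299_792_458.0            # speed of light, m/s

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance in meters to the object that reflected the pulse."""
    return C * round_trip_seconds / 2

print(f"{lidar_distance(200e-9):.1f} m")  # a 200 ns echo is ~30 m away
```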
Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. This means that software is the key—not the physical car or truck itself.
Since these cameras and sensors collect vast amounts of information that must be processed instantly to avoid obstacles, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems that adapt to novel scenarios.
Ride-sharing companies have a strong interest in autonomous vehicles, perceiving benefits in customer service and labor productivity. All major ride-sharing companies are exploring driverless car technology. The proliferation of car-sharing and taxi services—such as Uber and Lyft in the U.S., Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrates the potential of this transportation option. Uber recently agreed to purchase 24,000 autonomous vehicles from Volvo for its ride-sharing fleet.
However, the ride-sharing firm faced a significant setback in March 2018 when one of its autonomous test vehicles in Arizona struck and killed a pedestrian. Immediately following the incident, Uber and several automotive manufacturers suspended testing and initiated investigations to understand what occurred and how the fatality could have happened. Both the industry and consumers require reassurance regarding the technology’s safety and ability to deliver on its promises. Without convincing answers, this accident could impede advancements in AI within the transportation sector.
Smart Cities
Metropolitan governments are leveraging AI to enhance urban service delivery. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:
The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls.
With approximately 80,000 requests annually, Cincinnati officials are implementing this technology to prioritize responses and identify the most efficient ways to manage emergencies. They view AI as a means to process large volumes of data and determine effective strategies for responding to public requests. Instead of addressing service issues reactively, authorities are striving for a proactive approach to urban service provision.
Cincinnati is not unique in this endeavor. Numerous metropolitan areas are adopting smart city applications that utilize AI to improve various aspects, including service delivery, environmental planning, resource management, energy utilization, and crime prevention. In its smart cities index, Fast Company magazine ranked American cities, identifying Seattle, Boston, San Francisco, Washington, D.C., and New York City as leading adopters. Seattle, for instance, has embraced sustainability, employing AI to manage energy usage and resource allocation. Boston launched a “City Hall To Go” initiative to ensure underserved communities receive essential public services. It has also deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards.
Through these and other methods, metropolitan areas are at the forefront of deploying AI solutions in the United States. A National League of Cities report indicates that 66 percent of American cities are investing in smart city technology. The report highlighted top applications, including “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.”
[Image: a robot greeter, representing AI’s impact on employment]
Policy, Regulatory, and Ethical Issues
The examples across various sectors demonstrate how AI is transforming numerous aspects of human existence. The increasing integration of AI and autonomous devices into daily life is altering fundamental operations and decision-making processes within organizations, leading to improved efficiency and response times.
Simultaneously, these developments raise significant policy, regulatory, and ethical considerations. For example, how should data access be promoted? What measures can guard against biased or unfair data used in algorithms? What types of ethical principles are embedded through software programming, and how transparent should designers be about their choices? Furthermore, what questions arise regarding legal liability in instances where algorithms cause harm? Delving into artificial intelligence studies often means grappling with these complex challenges.
The increasing penetration of AI into many aspects of life is altering decisionmaking within organizations and improving efficiency. At the same time, though, these developments raise important policy, regulatory, and ethical issues.
Data Access Problems
Maximizing the benefits of AI necessitates a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI is dependent on data that can be analyzed in real time and applied to concrete problems. Ensuring data is “accessible for exploration” within the research community is a fundamental prerequisite for successful AI development.
A McKinsey Global Institute study indicates that nations that actively promote open data sources and data sharing are most likely to achieve advancements in AI. In this regard, the United States holds a considerable advantage over China. Global rankings on data openness show the U.S. ranked eighth overall globally, while China ranked 93rd.
However, currently, the United States lacks a cohesive national data strategy. There are few established protocols for facilitating research access or platforms that enable gaining new insights from proprietary data. Ownership of data and the extent to which it belongs in the public sphere are not always clear. These ambiguities constrain the innovation economy and hinder academic research. The following section outlines potential approaches to improve data access for researchers.
Biases in Data and Algorithms
In some cases, certain AI systems are believed to have facilitated discriminatory or biased practices. For instance, Airbnb has faced accusations that homeowners on its platform discriminate against racial minorities. A research project conducted by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.”
Racial issues also emerge with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces within a large database. As pointed out by Joy Buolamwini of the Algorithmic Justice League, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.” Unless the databases contain diverse data, these programs perform poorly when attempting to recognize African-American or Asian-American features.
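Buolamwini’s point suggests a simple audit: measure the same model’s accuracy separately for each demographic group and look for gaps. The sketch below runs that comparison on invented audit records; the group names and outcomes are placeholders, not results from any real face-recognition system.

```python
# Minimal fairness audit: per-group accuracy on invented audit results.
from collections import defaultdict

records = [  # (group, was the model's prediction correct?)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, hits = defaultdict(int), defaultdict(int)
for group, correct in records:
    totals[group] += 1
    hits[group] += correct

for group in totals:
    print(f"{group}: accuracy {hits[group] / totals[group]:.0%}")
# A large gap between groups signals the training data lacked diversity.
```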
Many historical data sets reflect traditional values, which may not align with the preferences desired in a modern system. As Buolamwini observes, such an approach risks perpetuating past inequities:
The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create.
AI Ethics and Transparency
Algorithms inherently embed ethical considerations and value choices into program decisions. As such, these systems raise questions regarding the criteria used in automated decision-making. Some individuals desire a greater understanding of how algorithms function and what choices are being made.
In the United States, many urban schools utilize algorithms for enrollment decisions based on various considerations, such as parent preferences, neighborhood characteristics, income level, and demographic background. According to Brookings researcher Jon Valant, the New Orleans–based Bricolage Academy “gives priority to economically disadvantaged applicants for up to 33 percent of available seats. In practice, though, most cities have opted for categories that prioritize siblings of current students, children of school employees, and families that live in school’s broad geographic area.” Enrollment outcomes can be significantly different depending on how these considerations are factored in.
Depending on their configuration, AI systems can facilitate discriminatory practices like redlining mortgage applications, enable individuals to discriminate against others, or assist in screening or compiling lists of individuals based on unfair criteria. The specific considerations programmed into decision-making algorithms significantly impact how the systems operate and affect users.
These concerns prompted the EU’s implementation of the General Data Protection Regulation (GDPR) in May 2018. The regulations grant individuals “the right to opt out of personally tailored ads” and “can contest ‘legal or similarly significant’ decisions made by algorithms and appeal for human intervention” through an explanation of how the algorithm generated a particular outcome. Each guideline is designed to ensure the protection of personal data and provide individuals with insight into the operation of these “black box” systems.
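For simple model families, the kind of explanation the GDPR contemplates is straightforward to produce. The sketch below assumes a linear scoring model with invented weights and shows the per-factor account of one decision that a person could then contest; complex “black box” models require approximation techniques instead.

```python
# Hedged sketch of an algorithmic explanation: for a linear model, report how
# much each input pushed a particular decision. Weights and inputs are invented.
weights = {"income": 0.8, "debt": -1.2, "years_employed": 0.5}
applicant = {"income": 1.4, "debt": 2.0, "years_employed": 0.6}
bias = 0.1

contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())
decision = "approved" if score >= 0 else "denied"

print(f"decision: {decision} (score {score:+.2f})")
for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {c:+.2f}")   # the per-factor account a person could contest
```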
Legal Liability
Questions of legal liability for AI systems are increasingly relevant. If harm, infractions, or fatalities occur (as in the case of driverless cars), the operators of the algorithm are likely to be subject to product liability rules. Existing case law indicates that liability is determined by the specific facts and circumstances of the situation, influencing the type of penalties imposed, which can range from civil fines to imprisonment for major harms. The fatality involving a self-driving Uber vehicle in Arizona represents an important test case for legal liability. The state actively encouraged Uber to test its autonomous vehicles and granted the company considerable latitude in road testing. The outcome—whether lawsuits will be filed and who will be held liable (the human backup driver, the state, the city, Uber, software developers, or the auto manufacturer)—remains to be seen, given the multiple parties involved in the testing.
In non-transportation sectors, digital platforms often bear limited liability for activities occurring on their sites. For example, Airbnb’s terms of service require users to waive their right to sue, or to join class-action lawsuits or class-wide arbitration, in order to use the service. By demanding that users forfeit these basic rights, the company restricts consumer protections and limits individuals’ ability to challenge discrimination resulting from unfair algorithms. Whether this principle of platform neutrality and limited liability will hold up across many sectors remains to be determined.
Recommendations
To strike a balance between promoting innovation and safeguarding fundamental human values, we propose several recommendations for advancing AI responsibly. These include improving data access, increasing government investment in AI, promoting AI workforce development, establishing a federal advisory committee, engaging with state and local officials to ensure effective policy enactment, regulating broad objectives rather than specific algorithms, seriously addressing bias as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior while promoting cybersecurity. These are key areas of focus within artificial intelligence studies.
Improving Data Access
The United States should develop a comprehensive data strategy that fosters innovation while ensuring consumer protection. Currently, there are no uniform standards regarding data access, sharing, or protection. Most data is proprietary and not widely shared with the research community, which limits innovation and system design. AI requires data to train and enhance its learning capacity. Without structured and unstructured data sets, realizing the full benefits of artificial intelligence will be nearly impossible.
In the U.S., there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design.
There are various methods for researchers to gain data access. One involves voluntary agreements with companies holding proprietary data. Facebook, for example, recently announced a partnership with Stanford economist Raj Chetty to use its social media data to study inequality. As part of the arrangement, researchers underwent background checks and accessed data only from secured sites to protect user privacy and security.
Google has long made aggregated search results available to researchers and the general public through its “Trends” site. Scholars can use the data to gauge public interest in political figures, views on democracy, and perspectives on the overall economy, tracking shifts in attention and identifying topics that resonate with the general public.
Twitter provides much of its tweet data to researchers through application programming interfaces (APIs). These tools allow external developers to build application software using data from the social media platform, enabling studies of social media communication patterns and public reactions to current events.
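As a concrete illustration, researchers typically query such APIs over HTTP with an access token. The sketch below follows the general shape of Twitter’s v2 recent-search endpoint; treat the endpoint, parameters, and token handling as illustrative, since platform APIs and access terms change over time.

```python
# Hedged sketch of API-based research access to social media data.
# The bearer token is a placeholder issued to registered developers.
import requests

TOKEN = "YOUR_BEARER_TOKEN"

resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"query": "artificial intelligence", "max_results": 10},
    timeout=10,
)
resp.raise_for_status()
for tweet in resp.json().get("data", []):
    print(tweet["id"], tweet["text"][:60])  # inspect matching public posts
```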
In sectors with demonstrable public benefit, governments can facilitate collaboration by building infrastructure for data sharing. For example, the National Cancer Institute pioneered a data-sharing protocol allowing certified researchers to query health data using de-identified information from clinical data, claims information, and drug therapies. This enables researchers to evaluate efficacy and effectiveness and make recommendations on best medical approaches without compromising patient privacy.
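A core ingredient of such protocols is de-identification before data ever reaches researchers. The minimal sketch below replaces a direct identifier with a salted hash and drops fields researchers do not need; the field names and record are invented, and a production protocol would add far more (k-anonymity, access controls, audit logs).

```python
# Sketch of the de-identification idea behind research data-sharing protocols.
import hashlib

SALT = b"rotate-this-secret-outside-version-control"

def deidentify(record: dict) -> dict:
    token = hashlib.sha256(SALT + record["patient_id"].encode()).hexdigest()[:16]
    return {
        "patient_token": token,            # stable join key, not a raw identifier
        "diagnosis": record["diagnosis"],  # keep the analytic fields
        "therapy": record["therapy"],
        # name, address, and other direct identifiers are deliberately dropped
    }

print(deidentify({"patient_id": "12345", "name": "Jane Doe",
                  "diagnosis": "C81.1", "therapy": "ABVD"}))
```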
Public-private data partnerships could combine government and business data sets to enhance system performance. For instance, cities could integrate data from ride-sharing services with their own information on social service locations, bus routes, mass transit, and highway congestion to improve transportation planning. This would help metropolitan areas address traffic issues and inform future highway and mass transit development.
A combination of these approaches would improve data access for researchers, government, and the business community without infringing on personal privacy. As noted by Ian Buck, vice president of NVIDIA, “Data is the fuel that drives the AI engine. The federal government has access to vast sources of information. Opening access to that data will help us get insights that will transform the U.S. economy.” Through its Data.gov portal, the federal government has already made over 230,000 data sets publicly available, spurring innovation and advancements in AI and data analytics technologies. The private sector must also facilitate research data access for society to fully realize the benefits of artificial intelligence.
Increase Government Investment in AI
According to Greg Brockman, co-founder of OpenAI, the U.S. federal government invests only $1.1 billion in non-classified AI technology. This amount is significantly lower than investments made by China or other leading nations in this research area. This shortfall is notable given the substantial economic payoffs of AI. To boost economic development and social innovation, federal officials should increase investment in artificial intelligence and data analytics. Higher investment is likely to yield economic and social benefits far exceeding the initial cost. Investing in artificial intelligence studies and research is critical for national competitiveness.
Promote Digital Education and Workforce Development
As AI applications rapidly expand across many sectors, it is crucial to reimagine our educational institutions for a world where AI is ubiquitous and students require different training than they currently receive. Today, many students lack instruction in the skills an AI-dominated landscape demands: there are already shortages of data scientists, computer scientists, engineers, coders, and platform developers. Unless our educational system produces more people with these capabilities, the shortage will constrain AI development.
Recognizing this, both state and federal governments have begun investing in AI human capital. In 2017, for instance, the National Science Foundation funded over 6,500 graduate students in computer-related fields and launched new initiatives to promote data and computer science education from pre-kindergarten through higher and continuing education. The goal is to create a larger pool of AI and data analytic personnel, enabling the United States to fully capitalize on the knowledge revolution.
However, substantial changes are also needed in the learning process itself. An AI-driven world requires not only technical skills but also critical reasoning, collaboration, design thinking, visual information display, and independent thinking skills, among others. AI will reconfigure societal and economic operations, necessitating “big picture” thinking about its implications for ethics, governance, and societal impact. Individuals will need the capacity to think broadly about complex issues and integrate knowledge from diverse fields.
One example of preparing students for a digital future is IBM’s Teacher Advisor program, which utilizes Watson’s free online tools to help educators incorporate the latest knowledge into the classroom. These tools enable instructors to develop new lesson plans in STEM and non-STEM subjects, find relevant instructional videos, and help students maximize their classroom experience. Such initiatives are precursors to the new educational environments that need to be developed.
Create a Federal AI Advisory Committee
Federal officials need a structured approach to address artificial intelligence. As previously noted, the field presents numerous challenges, from the need for improved data access to addressing issues of bias and discrimination. Addressing these concerns is vital to realizing the full benefits of this emerging technology.
To advance in this area, several members of Congress introduced the “Future of Artificial Intelligence Act,” a bill proposing broad policy and legal principles for AI. It suggests the secretary of commerce establish a federal advisory committee on the development and implementation of artificial intelligence. The legislation provides a mechanism for the federal government to receive advice on fostering a “climate of investment and innovation to ensure the global competitiveness of the United States,” “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce,” “support the unbiased development and application of artificial intelligence,” and “protect the privacy rights of individuals.”
Specific questions the committee is tasked with addressing include: competitiveness, workforce impact, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, rural impact, government efficiency, investment climate, job impact, bias, and consumer impact. The committee is directed to submit a report to Congress and the administration within 540 days of enactment regarding needed legislative or administrative action on AI.
This legislation represents a positive step, although given the rapid pace of the field, shortening the reporting timeline from 540 days to 180 days would be advisable. Waiting nearly two years for a committee report risks missed opportunities and delays action on important issues. A much quicker turnaround on the committee’s analysis would be highly beneficial given the rapid advancements in AI research and development.
Engage with State and Local Officials
States and localities are also taking action regarding AI. For example, the New York City Council unanimously passed a bill directing the mayor to form a task force to “monitor the fairness and validity of algorithms used by municipal agencies.” The city employs algorithms to “determine if a lower bail will be assigned to an indigent defendant, where firehouses are established, student placement for public schools, assessing teacher performance, identifying Medicaid fraud and determine where crime will happen next.”
According to the legislation’s proponents, city officials seek to understand how these algorithms function and ensure sufficient AI transparency and accountability. Furthermore, concerns exist regarding the fairness and biases of AI algorithms, leading the task force to be directed to analyze these issues and make recommendations for future usage. The task force is scheduled to report its findings on a range of AI policy, legal, and regulatory issues to the mayor by late 2019.
Some observers worry that the task force may not go far enough in holding algorithms accountable. For instance, Julia Powles of Cornell Tech and New York University argues that the original bill required companies to make AI source code publicly available for inspection and conduct simulations of its decision-making using actual data. However, following criticism, former Councilman James Vacca withdrew these requirements in favor of a task force study. He and other city officials were concerned that publishing proprietary algorithm information would impede innovation and make it difficult to find AI vendors willing to work with the city. It remains to be seen how this local task force will balance issues of innovation, privacy, and transparency.
Regulate Broad Objectives More Than Specific Algorithms
The European Union has adopted a more restrictive stance on data collection and analysis. It has regulations limiting companies’ ability to collect data on road conditions and map street views. Due to concerns that personal information on unencrypted Wi-Fi networks could be included in overall data collection, the EU has fined technology firms, demanded copies of data, and imposed limits on collected material. This has complicated the development of high-definition maps, crucial for autonomous vehicles, for technology companies operating there.
The GDPR implemented in Europe places significant restrictions on the use of artificial intelligence and machine learning. Published guidelines state, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’” Furthermore, these new rules grant citizens the right to review how digital services made specific algorithmic choices affecting them.
By taking a restrictive stance on issues of data collection and analysis, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.
If interpreted stringently, these regulations will hinder European software designers (and American designers collaborating with European counterparts) from incorporating artificial intelligence and high-definition mapping into autonomous vehicles. Tracking location and movements is fundamental to navigation in these vehicles. Without high-definition maps containing geo-coded data and the deep learning capabilities utilizing this information, fully autonomous driving will stagnate in Europe. Through these and other data protection actions, the European Union is placing its manufacturers and software designers at a considerable disadvantage compared to the rest of the world.
A more sensible approach involves focusing on the desired broad objectives of AI and enacting policies that promote them, rather than governments attempting to penetrate the “black boxes” to understand the exact operation of specific algorithms. Regulating individual algorithms will likely stifle innovation and make it challenging for companies to effectively utilize artificial intelligence. This debate is central to current artificial intelligence studies focusing on policy implications.
Take Biases Seriously
Bias and discrimination represent serious challenges for AI. Numerous instances of unfair treatment linked to historical data have already emerged, and proactive steps are essential to prevent this from becoming prevalent in artificial intelligence systems. Existing statutes governing discrimination in the physical economy should be extended to digital platforms. This will help protect consumers and build overall confidence in these systems.
For these advancements to be widely adopted, greater transparency in how AI systems operate is needed. Andrew Burt of Immuta argues, “The key problem confronting predictive analytics is really transparency. We’re in a world where data science operations are taking on increasingly important tasks, and the only thing holding them back is going to be how well the data scientists who train the models can explain what it is their models are doing.”
Maintaining Mechanisms for Human Oversight and Control
Some experts argue for establishing avenues through which humans can exercise oversight and control over AI systems. For example, Allen Institute for Artificial Intelligence CEO Oren Etzioni proposes rules for regulating these systems. First, he suggests AI must comply with all existing laws governing human behavior, including regulations on “cyberbullying, stock manipulation or terrorist threats,” as well as preventing AI from “entrap[ping] people into committing crimes.” Second, he believes these systems should disclose that they are automated rather than human. Third, he states, “An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.” His rationale is that these tools store vast amounts of data, making it imperative that individuals are aware of the privacy risks posed by AI.
In a similar vein, the IEEE Global Initiative has developed ethical guidelines for AI and autonomous systems. Its experts suggest these models should be programmed with consideration for widely accepted human norms and rules of behavior. AI algorithms need to account for the significance of these norms, how norm conflicts can be resolved, and how these systems can be transparent about norm resolution processes. According to ethics experts, software designs should be programmed for “nondeception” and “honesty.” When failures occur, mitigation mechanisms must be in place to address the consequences. Critically, AI must be sensitive to issues such as bias, discrimination, and fairness.
A group of machine learning experts claim that automating ethical decision-making is possible. Using the trolley problem as a moral dilemma, they posed the question: If an autonomous car goes out of control, should it be programmed to prioritize the lives of its passengers or pedestrians crossing the street? They devised a “voting-based system” that asked 1.3 million people to evaluate alternative scenarios, summarized the overall choices, and applied the aggregated public perspective to a range of vehicular possibilities. This procedure allowed them to automate ethical decision-making within AI algorithms, incorporating public preferences. While this approach doesn’t diminish the tragedy of any fatality, such as in the Uber case, it offers a mechanism for AI developers to integrate ethical considerations into their design processes.
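In simplified form, a voting-based system can be reduced to aggregation: collect many individual judgments per dilemma scenario and let the aggregated preference decide future instances. The sketch below is a deliberate simplification of Noothigattu et al.’s method (which learns per-voter preference models rather than tallying raw votes); the scenario and vote counts are invented.

```python
# Simplified sketch of a voting-based ethical decision system: aggregate many
# people's judgments, then apply the majority preference to live dilemmas.
from collections import Counter

votes = {  # scenario -> list of individual choices (invented counts)
    "swerve_vs_stay": ["protect_pedestrians"] * 700 + ["protect_passengers"] * 300,
}

policy = {scenario: Counter(choices).most_common(1)[0][0]
          for scenario, choices in votes.items()}

def decide(scenario: str) -> str:
    """Apply the aggregated public preference to a new instance."""
    return policy[scenario]

print(decide("swerve_vs_stay"))  # -> protect_pedestrians
```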
Penalize Malicious Behavior and Promote Cybersecurity
As with any emerging technology, it is essential to discourage malicious use designed to deceive software or employ it for undesirable purposes. This is particularly important given the dual-use nature of AI, where the same tool can serve both beneficial and harmful ends. The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the positive potential of the emerging technology. This includes activities such as hacking, manipulating algorithms, compromising privacy and confidentiality, or stealing identities. Efforts to hijack AI for the purpose of soliciting confidential information should face severe penalties to deter such actions. Cybersecurity is a vital component of responsible artificial intelligence studies.
In a rapidly evolving world where many entities possess advanced computing capabilities, serious attention must be dedicated to cybersecurity. Countries must prioritize safeguarding their own systems and preventing other nations from compromising their security. According to the U.S. Department of Homeland Security, a major American bank receives approximately 11 million calls per week at its service center. To protect its telephony systems from denial-of-service attacks, it uses a “machine learning-based policy engine [that] blocks more than 120,000 calls per month based on voice firewall policies including harassing callers, robocalls and potential fraudulent calls.” This demonstrates how machine learning can assist in defending technology systems against malicious attacks.
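A policy engine of this kind can be pictured as scoring each incoming call against learned and static rules, then blocking calls above a risk threshold. All features, weights, and the threshold below are invented; the bank’s actual system is proprietary.

```python
# Illustrative voice-firewall policy engine: score a call, block if high risk.
def call_risk(calls_last_hour: int, spoofed_caller_id: bool,
              on_harassment_list: bool) -> float:
    score = 0.0
    score += 0.1 * min(calls_last_hour, 10)     # burst dialing looks like robocalls
    score += 0.5 if spoofed_caller_id else 0.0
    score += 0.8 if on_harassment_list else 0.0
    return score

def firewall(call: dict) -> str:
    return "block" if call_risk(**call) >= 0.7 else "allow"

print(firewall({"calls_last_hour": 40, "spoofed_caller_id": True,
                "on_harassment_list": False}))  # -> block
```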
Conclusion
In summary, the world is poised to revolutionize numerous sectors through artificial intelligence and data analytics. Significant deployments are already evident in finance, national security, health care, criminal justice, transportation, and smart cities, altering decision-making, business models, risk mitigation, and system performance. These advancements are generating substantial economic and social benefits.
The world is on the cusp of revolutionizing many sectors through artificial intelligence, but the way AI systems are developed needs to be better understood due to the major implications these technologies will have for society as a whole.
However, the manner in which AI systems are developed and deployed has profound implications for society. How policy issues are addressed, ethical conflicts reconciled, legal realities resolved, and transparency requirements set for AI and data analytic solutions are all critical considerations. Human choices in software development shape how decisions are made and how they are integrated into organizational routines. A deeper understanding of these processes is essential, because they will significantly affect the public both now and for the foreseeable future. AI may well represent a revolution in human affairs, potentially becoming the single most influential human innovation in history, and the study of its development and impact, artificial intelligence studies, is paramount.
References
- Allen, John R., and Amir Husain. “On Hyperwar.” Naval Institute Proceedings, July 17, 2017, pp. 30-36.
- Asher, Jeff, and Rob Arthur. “Inside the Algorithm That Tries to Predict Gun Violence in Chicago.” New York Times Upshot, June 13, 2017.
- Barton, Dominic, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian. “Artificial Intelligence: Implications for China.” New York: McKinsey Global Institute, April 2017.
- Benner, Katie. “Airbnb Vows to Fight Racism, But Its Users Can’t Sue to Prompt Fairness.” New York Times, June 19, 2016.
- Brockman, Greg. “The Dawn of Artificial Intelligence.” Testimony before U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016.
- Brundage, Miles, et al. “The Malicious Use of Artificial Intelligence.” University of Oxford unpublished paper, February 2018.
- Buck, Ian. “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
- Buolamwini, Joy. “Joy Buolamwini.” Bloomberg Businessweek, July 3, 2017, p. 80.
- Cohen, Boyd. “The 10 Smartest Cities in North America.” Fast Company, November 14, 2013.
- Congress.gov. “H.R. 4625 FUTURE of Artificial Intelligence Act of 2017,” December 12, 2017.
- Davenport, Christian. “Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says.” Washington Post, December 3, 2017.
- Denyer, Simon. “China’s Watchful Eye.” Washington Post, January 7, 2018.
- Desouza, Kevin, Rashmi Krishnamurthy, and Gregory Dawson. “Learning from Public Sector Experimentation with Artificial Intelligence.” TechTank (blog), Brookings Institution, June 23, 2017.
- Dormehl, Luke. Thinking Machines: The Quest for Artificial Intelligence—and Where It’s Taking Us Next. New York: Penguin–TarcherPerigee, 2017.
- Economist. “America v China: The Battle for Digital Supremacy,” March 15, 2018.
- Economist. “The Challenger: Technopolitics,” March 17, 2018.
- “Ethical Considerations in Artificial Intelligence and Autonomous Systems.” Unpublished paper. IEEE Global Initiative, 2018.
- Etzioni, Oren. “How to Regulate Artificial Intelligence.” New York Times, September 1, 2017.
- Executive Office of the President. “Artificial Intelligence, Automation, and the Economy,” December 2016.
- Executive Office of the President. “Preparing for the Future of Artificial Intelligence,” October 2016.
- Frenkel, Sheera. “Tech Giants Brace for Europe’s New Data Privacy Rules.” New York Times, January 28, 2018.
- Ge, Yuming, Xiaoman Liu, Libo Tang, and Darrell M. West. “Smart Transportation in China and the United States.” Center for Technology Innovation, Brookings Institution, December 2017.
- Glusac, Elaine. “As Airbnb Grows, So Do Claims of Discrimination.” New York Times, June 21, 2016.
- Holley, Peter. “Uber Signs Deal to Buy 24,000 Autonomous Vehicles from Volvo.” Washington Post, November 20, 2017.
- Horvitz, Eric. “Reflections on the Status and Future of Artificial Intelligence.” Testimony before the U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016.
- Karsten, Jack, and Darrell M. West. “How robots, artificial intelligence, and machine learning will affect employment and public policy.” Brookings Institution, October 26, 2015.
- Kerry, Cameron, and Jack Karsten. “Gauging Investment in Self-Driving Cars.” Brookings Institution, October 16, 2017.
- Khosrowshahi, Amir. “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
- Kuang, Cliff. “Can A.I. Be Taught to Explain Itself?” New York Times Magazine, November 21, 2017.
- Kurose, James. “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
- Lewis, Michael. Flash Boys: A Wall Street Revolt. New York: Norton, 2015.
- Maddox, Teena. “66% of US Cities Are Investing in Smart City Technology.” TechRepublic, November 6, 2017.
- Markoff, John. “As Artificial Intelligence Evolves, So Does Its Criminal Potential.” New York Times, October 24, 2016, p. B3.
- McAfee, Andrew, and Erik Brynjolfsson. Machine Platform Crowd: Harnessing Our Digital Future. New York: Norton, 2017.
- Metz, Cade. “Artificial Intelligence is Setting Up the Internet for a Huge Clash with Europe.” Wired, July 11, 2016.
- Metz, Cade. “In Quantum Computing Race, Yale Professors Battle Tech Giants.” New York Times, November 14, 2017, p. B3.
- Miller, Claire, and Kevin O’Brien. “Germany’s Complicated Relationship with Google Street View.” New York Times, April 23, 2013.
- Mozur, Paul. “China Sets Goal to Lead in Artificial Intelligence.” New York Times, July 21, 2017, p. B1.
- Mozur, Paul, and Keith Bradsher. “China’s A.I. Advances Help Its Tech Industry, and State Security.” New York Times, December 3, 2017.
- Mozur, Paul, and John Markoff. “Is China Outsmarting American Artificial Intelligence?” New York Times, May 28, 2017.
- Nakasone, Keith. “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
- Noonoo, Stephen. “Teachers Can Now Use IBM’s Watson to Search for Free Lesson Plans.” EdSurge, September 13, 2017.
- Noothigattu, Ritesh, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia. “A Voting-Based System for Ethical Decision Making.” Computers and Society, September 20, 2017 (www.media.mit.edu/publications/a-voting-based-system-for-ethical-decision-making/).
- Osoba, Osonde, and William Welser IV. “The Risks of Artificial Intelligence to Security and the Future of Work.” Santa Monica, Calif.: RAND Corp., December 2017 (www.rand.org/pubs/perspectives/PE237.html).
- Popper, Nathaniel. “Stocks and Bots.” New York Times Magazine, February 28, 2016.
- Powles, Julia. “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable.” New Yorker, December 20, 2017.
- PriceWaterhouseCoopers. “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?” 2017.
- Purdy, Mark, and Paul Daugherty. “Why Artificial Intelligence is the Future of Growth.” Accenture, 2016.
- Rothe, Rasmus. “Applying Deep Learning to Real-World Problems.” Medium, May 23, 2017.
- Scola, Nancy. “Facebook’s Next Project: American Inequality.” Politico, February 19, 2018.
- Shubhendu and Vijay. “Applicability of Artificial Intelligence in Different Fields of Life.”
- Siegel, Eric. “Predictive Analytics Interview Series: Andrew Burt.” Predictive Analytics Times, June 14, 2017.
- Tillemann, Levi, and Colin McCormick. “Roadmapping a U.S.-German Agenda for Artificial Intelligence Policy.” New American Foundation, March 2017.
- Tucker, Patrick. “‘A White Mask Worked Better.’” Defense One, October 26, 2017.
- Valant, Jon. “Integrating Charter Schools and Choice-Based Education Systems.” Brown Center Chalkboard blog, Brookings Institution, June 23, 2017.
- Wakabayashi, Daisuke. “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam.” New York Times, March 19, 2018.
- Watney, Caleb. “It’s Time for our Justice System to Embrace Artificial Intelligence.” TechTank (blog), Brookings Institution, July 20, 2017.
- West, Darrell M. “Driverless Cars in China, Europe, Japan, Korea, and the United States.” Brookings Institution, September 2016.
- West, Darrell M. The Future of Work: Robots, AI, and Automation, Brookings Institution Press, 2018.
- West, Darrell M. “What Internet Search Data Reveals about Donald Trump’s First Year in Office.” Brookings Institution policy report, January 17, 2018.
- Yale Law School Information Society Project. “Governing Machine Learning,” September 2017.
- Zima, Elizabeth. “Could New York City’s AI Transparency Bill Be a Model for the Country?” Government Technology, January 4, 2018.