
ARTIFICIAL INTELLIGENCE

Artificial intelligence will have an unprecedented and ever-growing impact on our future, and there is no escaping it. We recognize that the unknowns can be scary. The good news is that, if we commit to a proactive strategy, we can maintain control over how A.I. advances instead of being vulnerable to forces beyond our control.

There are many complicated components to this topic, from how best to capitalize on A.I.'s powerful and innovative tools, to how it will affect our workforce and our quest for knowledge, to the social and ethical implications of the rising technology. For all these reasons, it is critical that we establish ethical frameworks to ensure that A.I. both enhances our global strength and benefits society overall.

The United States needs to be the leading force in A.I., and we are already well on our way. The Massachusetts Institute of Technology (MIT) has dedicated $1 billion to pursuing technological breakthroughs in computing and A.I. as well as assessing their ethical implications. 


Already, MIT and the U.S. Air Force have joined forces to find ways to use artificial intelligence to help safeguard our national security.  The MIT-Air Force A.I. Accelerator, as it is known, hopes to improve Air Force operations within the context of societal responsibility.  


There are other promising ways A.I. is helping society. For example, A.I. is being used to anticipate, absorb, and recover from the damage caused by extreme weather events such as hurricanes, flooding, drought, and wildfires. 

The Grid Resilience & Intelligence Platform (GRIP) program — housed at the SLAC National Accelerator Laboratory, one of seventeen Department of Energy national labs — is:

“(1) Demonstrating machine learning and artificial intelligence from different data sources to anticipate grid events; (2) Validating controls for distributed energy resources for absorbing grid events; and (3) Reducing recovery time by managing distributed energy resources in the case of limited communications.  The project builds on previous efforts to collect massive amounts of data and use it to fine-tune grid operations, including SLAC’s Visualization and Analytics of Distributed Energy Resources (VADER) project as well as other Grid Modernization Lab Consortium projects on distributed controls and cyber security.  The innovations in the project include application of artificial intelligence and machine learning for distribution grid resilience. Particularly using predictive analytics, image recognition, increased ‘learning’ and ‘problem solving’ capabilities for anticipation of grid events.” 
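
To make the quoted "predictive analytics" idea more concrete, here is a minimal Python sketch of the general approach: train a model on historical weather and grid readings so it can flag conditions that tend to precede an outage. Everything in it, from the feature names to the data, is hypothetical; it is a generic illustration, not GRIP's actual code.

```python
# Illustrative sketch of predictive analytics for anticipating grid events.
# This is a generic example, NOT the actual GRIP/SLAC implementation;
# all features and data below are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical hourly observations: wind speed (m/s), temperature (C),
# rainfall (mm/hr), and line loading (% of rated capacity).
X = np.column_stack([
    rng.gamma(2.0, 4.0, n),       # wind_speed
    rng.normal(20.0, 8.0, n),     # temperature
    rng.exponential(2.0, n),      # rainfall
    rng.uniform(30.0, 100.0, n),  # line_loading
])

# Synthetic label: a grid "event" becomes more likely under high wind,
# heavy rain, and heavily loaded lines.
risk = 0.04 * X[:, 0] + 0.15 * X[:, 2] + 0.02 * X[:, 3]
y = (risk + rng.normal(0.0, 0.5, n) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Train a simple classifier that scores how likely an event is, given
# current conditions, so operators can act before the event happens.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```

In a real deployment the inputs would be streams of sensor, weather, and satellite data rather than random numbers, and the model's predictions would feed the kinds of automated grid controls the GRIP description mentions.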

The accuracy of A.I.-driven facial recognition technology and its underlying algorithms has already improved substantially.  A report from the National Institute of Standards and Technology (NIST), a physical science laboratory within the U.S. Department of Commerce, reveals that “massive gains in accuracy have been achieved in the last five years (2013-2018) and these far exceed improvements made in the prior period (2010-2013).” 

“While the industry gains are broad — at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013 — there remains a wide range of capabilities.  With good quality portrait photos, the most accurate algorithms will find matching entries, when present, in galleries containing 12 million individuals, with error rates below 0.2 percent.  The remaining errors are in large part attributable to long-run aging and injury.”
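
To unpack what that kind of benchmark measures: a one-to-many ("1:N") search converts a probe photo into a compact feature vector and compares it against every enrolled template in the gallery, returning the closest matches. The toy sketch below shows that idea with random vectors standing in for real face embeddings; it is a generic illustration, not any vendor's algorithm or NIST's evaluation code.

```python
# Toy illustration of one-to-many ("1:N") face identification search.
# Random vectors stand in for embeddings produced by a real face-recognition
# model; the gallery, identity, and noise level are all hypothetical.
import numpy as np

rng = np.random.default_rng(1)
EMBED_DIM = 128

def normalize(v):
    """Scale vectors to unit length so a dot product equals cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical gallery: one enrolled embedding per identity.
gallery = normalize(rng.normal(size=(10_000, EMBED_DIM)))

# Probe: a new photo of identity 4321, simulated as that identity's
# enrolled embedding plus a small perturbation (pose, lighting, aging).
true_id = 4321
probe = normalize(gallery[true_id] + 0.05 * rng.normal(size=EMBED_DIM))

# Compare the probe against every gallery entry and rank by similarity.
scores = gallery @ probe
top5 = np.argsort(scores)[::-1][:5]

print("Top candidates:", top5)        # true_id should rank first
print("Best score:", scores[top5[0]])
# Real systems also apply a similarity threshold so that probes with no
# enrolled match are rejected rather than forced onto the nearest entry.
```

In the NIST evaluation, the remaining errors are cases where the correct entry is enrolled but no longer scores highly enough, for example because of long-run aging or injury, as the report notes.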


Sounds good, right?

It does, but there are trickier things to consider.  Facial recognition technology, for example, has increasingly become one of law enforcement’s standard investigative tools.  When a mass shooting took place at The Capital Gazette’s newsroom in Annapolis, Maryland, authorities used the technology to identify the shooter after he refused to give his name.  Law enforcement agencies are also using facial recognition technology to identify the troublemakers who participated in the U.S. Capitol insurrection. 

However, there are legitimate concerns surrounding surveillance technologies like facial recognition, ranging from privacy issues to mass surveillance to abuse of power. These concerns are disturbing enough that some cities, such as San Francisco, have banned its use by the police and other law enforcement agencies.

Understandably, the potential abuse of these technologies is particularly alarming for people of color, who fear that the technologies and their algorithms are used in a racially biased manner.

Georgetown Law’s Center on Privacy and Technology warns that “the risks of face surveillance are likely to be borne disproportionately by communities of color.  African Americans are simultaneously more likely to be enrolled in face recognition databases and the targets of police surveillance use.  Compounding this, studies continue to show that face recognition performs differently depending on the age, gender, and race of the person being searched.  This creates the risk that African Americans will disproportionately bear the harms of face recognition misidentification.” 
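
One way researchers quantify the "performs differently" problem is to compute error rates, such as the false match rate, separately for each demographic group and compare them at the same decision threshold. The sketch below shows that bookkeeping on synthetic scores; the numbers are invented purely to illustrate the measurement, not to characterize any real system.

```python
# Minimal sketch of measuring differential performance: compute the false
# match rate (FMR) per demographic group from labeled comparison scores.
# The group names, threshold, and score distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
THRESHOLD = 0.6  # hypothetical similarity threshold for declaring a "match"

def false_match_rate(impostor_scores, threshold):
    """Fraction of different-person comparisons wrongly scored as matches."""
    return np.mean(impostor_scores >= threshold)

# Synthetic impostor (different-person) similarity scores per group,
# deliberately given slightly different distributions for illustration.
groups = {
    "group_A": rng.normal(0.30, 0.12, 50_000),
    "group_B": rng.normal(0.38, 0.12, 50_000),
}

for name, scores in groups.items():
    print(f"{name}: FMR = {false_match_rate(scores, THRESHOLD):.4%}")
# A higher FMR for one group means its members are more likely to be
# wrongly "matched" to someone else at the same operating threshold.
```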

Because of these risks, we must ensure that local, state, and federal regulations catch up with the technology.  For example, the New York City Council has approved the Public Oversight of Surveillance Technology Act, which greatly increases transparency and legislative accountability over these technologies.  Cambridge, Nashville, and Seattle already have similar laws that give citizens more control over these issues.

At the very least, the federal government can help streamline inconsistent laws around the country and ensure that facial recognition used to solve crimes is applied after the fact rather than as real-time surveillance. 

This effort has already started.  The National Security Commission on Artificial Intelligence — an independent commission established by Congress in 2018 to make recommendations to the President and Congress to “advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States” — released its final report.  The report “presents an integrated national strategy to reorganize the government, reorient the nation, and rally our closest allies and partners to defend and compete in the coming era of AI-accelerated competition and conflict.” 

Even Microsoft is calling for government regulation of its own A.I. technology to guard against abuse.  We agree with Brad Smith, the president of Microsoft:

The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself.  And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so.  This in fact is what we believe is needed today — a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.

While we appreciate that some people today are calling for tech companies to make these decisions — and we recognize a clear need for our own exercise of responsibility — we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic.  We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology.  As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.

Certainly, government oversight is an important piece of the puzzle, but technology companies must also take responsibility for establishing ethical frameworks of their own.  This is especially important at a time when around 30 percent of large companies in the United States have A.I. projects in the works and, according to MIT, there are more than 2,000 A.I. startups. 

