
Artificial Intelligence
AI'S POTENTIAL IMPACT
There are differing opinions on the scale of AI’s potential impact on our lives. Some negative consequences are undeniable because they are already disturbingly evident. For example, AI-generated content is beginning to threaten the democratic process, particularly elections in the United States and around the world.
According to the International Panel on the Information Environment, an independent organization of scientists, 80 percent of the countries that held elections in 2024 experienced Generative Artificial Intelligence (GenAI) “incidents,” and over two-thirds (69 percent) of those incidents were determined to have played a harmful role in the election. Most of these incidents involved content creation – think audio messages, images, videos, and social media posts – including deepfakes that recreated images of real people or cloned the voices of well-known political figures, candidates, and newscasters.
For example, in July 2025, someone pretending to be Secretary of State Marco Rubio contacted at least five government officials around the world, including three foreign ministers, a U.S. governor, and a U.S. senator. Just days later, an imposter imitating House Intelligence Committee Chairman Rick Crawford (R-AR) sent messages to several people requesting help with a project involving First Lady Melania Trump.
Another major threat is something called “model collapse,” which refers to the declining performance of GenAI models trained primarily on AI-generated content (i.e., synthetic content produced by other AI models) rather than genuine human knowledge. Essentially, these AI models begin to lose originality, accuracy, and effectiveness – ultimately polluting the training set of the next generation.
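The mechanism behind model collapse is easy to demonstrate at toy scale. Here is a minimal sketch in Python – purely illustrative, with a simple Gaussian distribution standing in for a generative model and all sample sizes chosen for demonstration – showing how repeatedly training on the previous generation’s synthetic output tends to erode diversity:

```python
import numpy as np

rng = np.random.default_rng(42)

N = 50            # samples per "generation" (kept small so the effect is visible)
GENERATIONS = 200

# Generation 0: "human" data with genuine diversity.
data = rng.normal(loc=0.0, scale=1.0, size=N)

for gen in range(1, GENERATIONS + 1):
    # "Train" the next model: estimate the distribution from current data.
    mu, sigma = data.mean(), data.std()
    # The next generation learns only from this model's synthetic output.
    data = rng.normal(loc=mu, scale=sigma, size=N)
    if gen % 40 == 0:
        print(f"generation {gen:3d}: estimated spread = {sigma:.4f}")
```

Because each generation fits a finite sample of the previous generation’s output, small estimation errors compound, and the distribution’s spread tends to drift toward zero over many generations – the statistical core of what researchers mean by collapse.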
Then there are our jobs. In a recent Axios report subtly titled “A White-Collar Bloodbath,” Anthropic CEO Dario Amodei said that half of all entry-level white-collar jobs could disappear in one to five years and that executives and government officials should stop “sugarcoating” that reality. Amodei told Axios that he was speaking out now because, “We, as the producers of this technology, have a duty and an obligation to be honest about what is coming. I don’t think this is on people’s radar.” He further warned, “You can’t just step in front of the train and stop it. The only move that’s going to work is steering the train – steer it 10 degrees in a different direction from where it was going. That can be done. That’s possible, but we have to do it now.”
Others are also starting to speak up. The chief executive of Ford Motor Company recently said that AI could replace half of white-collar workers and will “leave a lot of white-collar people behind.” An executive at JPMorgan Chase said the bank anticipates a 10 percent workforce reduction thanks to AI. The CEO of Fiverr, a marketplace for freelancers, put it more bluntly: “This is a wake-up call. It does not matter if you are a programmer, designer, product manager, data scientist, lawyer, customer support rep, salesperson, or a finance person – AI is coming for you.”
On the other hand, some people believe the hype surrounding AI is being blown way out of proportion, particularly when it comes to its ability to conquer sensory/motor skills and logical reasoning.
In 1988, Hans Moravec, a computer scientist and current adjunct faculty member at the Robotics Institute of Carnegie Mellon University, crystallized the challenge: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, but difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility… Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it.”
Daron Acemoglu, a Nobel laureate and economist at MIT, thinks AI will be able to perform only 5 percent of jobs within the next decade. “A lot of money is going to get wasted,” he recently said. “You’re not going to get an economic revolution out of that 5 percent. You need highly reliable information or the ability of these models to faithfully implement certain steps that previously workers were doing. They can do that in a few places with some human supervisory oversight…but in most places they cannot.”
ENERGY CONSUMPTION
There is also a major ancillary problem with AI: energy consumption. The fact that DeepSeek-R1 uses less computing power than existing U.S. models has called everything into question, including projections of future energy consumption. Even so, the $100 billion-plus data centers U.S. tech companies are building will use more electricity than a million homes.
Jesse Dodge, a senior research analyst at the Allen Institute for AI – a nonprofit AI research institute founded by the late Microsoft co-founder Paul Allen – says that “one query to ChatGPT uses approximately as much electricity as could light one light bulb for about 20 minutes…so, you can imagine with millions of people using something like that every day, that adds up to a really large amount of electricity.”
Put another way, research from financial services company Goldman Sachs says that, on average, a “ChatGPT query needs nearly 10 times as much electricity to process as a Google search.”
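To get a feel for the scale, here is a quick back-of-the-envelope sketch. The bulb wattage and daily query volume below are assumptions chosen for illustration, not figures taken from the sources above:

```python
# Rough scale check for the energy quotes above. BULB_WATTS and
# QUERIES_PER_DAY are illustrative assumptions, not sourced figures.

BULB_WATTS = 10                 # assume a 10 W LED bulb
MINUTES_PER_QUERY = 20          # Dodge's "light bulb for about 20 minutes"
QUERIES_PER_DAY = 100_000_000   # assumed daily query volume, for scale

wh_per_query = BULB_WATTS * MINUTES_PER_QUERY / 60      # watt-hours per query
daily_mwh = wh_per_query * QUERIES_PER_DAY / 1_000_000  # megawatt-hours per day

print(f"~{wh_per_query:.1f} Wh per query")
print(f"~{daily_mwh:,.0f} MWh per day at {QUERIES_PER_DAY:,} queries/day")
```

Even under these rough assumptions, one low-wattage bulb’s worth of electricity per query adds up to hundreds of megawatt-hours every day – which is exactly Dodge’s point.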
AI already requires thousands of servers, plus the cooling equipment that helps them run, all housed in thousands of data centers that consume enormous amounts of electricity. To put it in perspective, the U.S. Department of Energy says a single data center can require 50 times the electricity of a traditional office building – and complexes with multiple buildings can use up to 20 times that amount, or roughly 1,000 times the electricity of a typical office building.
This is causing enormous challenges. Northern Virginia – known as the world’s internet hub, processing almost 70 percent of global digital traffic – uses electricity at a staggering rate. In fact, PJM Interconnection, the regional grid operator for the area, says that usage is unsustainable without hundreds of miles of new transmission lines and continued output from old coal-fired power plants that had previously been ordered to shut down because of environmental concerns.
Dominion Energy has repeatedly warned that it may not be able to keep up with the energy demand sparked by AI. The utility estimates that AI-driven energy demand in Virginia will quadruple by 2035, to roughly the amount of electricity needed to power 8.8 million homes. Already, the 50-plus data centers served by the Northern Virginia Electric Cooperative account for 59 percent of its entire energy demand, and by mid-2028 the number of data centers it serves is expected to grow to more than 110.
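A crude extrapolation from those cooperative figures shows why grid operators are alarmed. The linear assumption below – that each new data center draws about as much power as an existing one – is an illustrative simplification, not the cooperative’s own projection:

```python
# Naive extrapolation from the NOVEC figures quoted above. The
# one-data-center-equals-one-unit-of-load assumption is illustrative only.

current_centers = 50
projected_centers = 110
dc_share = 0.59                    # data centers' share of demand today

other_load = 1.0 - dc_share        # non-data-center load, in units of today's total
dc_load_2028 = dc_share * (projected_centers / current_centers)
total_2028 = other_load + dc_load_2028

print(f"Projected total demand: {total_2028:.2f}x today's")
print(f"Data centers' projected share: {dc_load_2028 / total_2028:.0%}")
```

Under that naive assumption, total demand would grow by roughly 70 percent in just a few years, with data centers approaching three-quarters of the load – before counting any growth in household or commercial use.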
The real-world consequences of this new reality are massive. In its 2024 Environmental Report, Google said its greenhouse gas emissions had increased by 48 percent over the previous five years, driven by a surge in data center energy consumption and supply chain emissions. The report warns, “As we further integrate AI into our products, reducing emissions may be challenging.” (Google’s 2025 Environmental Report says it has since reduced its data center energy emissions by 12 percent.)
In its 2024 Environmental Sustainability Report, Microsoft revealed that its emissions had increased by 29 percent over the previous four years because of new data centers “designed and optimized to support AI workloads.” The company also warned that “the infrastructure and electricity needed for these technologies create new challenges for meeting sustainability commitments across the tech sector.” (Microsoft’s 2025 Environmental Sustainability Report put its total emissions at 23.4 percent above the company’s 2020 baseline.)
BIAS, DISCRIMINATION, CONSUMER PRIVACY, SOCIAL/ETHICAL IMPLICATIONS, AND LEGAL REGULATIONS
These are all significant issues, but the most important conversations we must have are those about bias, discrimination, consumer privacy, the social and ethical implications of AI, and the legal regulations needed to govern all of it. For example, how can we make sure facial recognition technology is never used in a racially biased manner? And who should be held responsible – and what should the consequences be – when an automated system goes on an antisemitic rant and spreads conspiracy theories about Jewish people, as X’s Grok chatbot recently did?
It’s critical we establish ethical frameworks that ensure AI enhances our global strength and is beneficial for society overall. That said, regulating AI is tricky because we must balance the many benefits with a large variety of risks – all without stifling AI’s progress.
This comes at a time when Americans are growing increasingly nervous about AI. A June 2025 survey from Ipsos, a market research company, found that AI makes 63 percent of Americans “nervous,” and data from the Pew Research Center show that 51 percent of Americans feel “more concerned than excited” about AI.
There are valid reasons for this angst. For example, facial recognition technology has become one of law enforcement’s standard investigative tools. A 2024 report from the U.S. Government Accountability Office (GAO) revealed that seven law enforcement agencies within the Departments of Justice (DOJ) and Homeland Security (DHS) – including the FBI and Secret Service – use facial recognition technology to support criminal investigations.
In some ways, this sounds like a positive development. Law enforcement agencies used the technology to identify many of the troublemakers who participated in the January 6th insurrection at the U.S. Capitol, for example. However, there are legitimate concerns surrounding surveillance technologies, ranging from privacy issues to mass surveillance to abuse of power.
The potential abuse of these technologies is particularly alarming for racial and ethnic minorities, many of whom understandably fear that these technologies and their algorithms may be used in a racially biased manner. Research suggests this fear is well founded. A study conducted by Georgetown University, for example, found that “the risks of face surveillance are likely to be borne disproportionately by communities of color.” This is a real problem given that the GAO found only three of the seven federal agencies mentioned in its report had policies for – or even guidance on – how to protect civil rights and civil liberties.
The good news is that an independent commission was established by the U.S. Congress in 2018 to make recommendations to the president and Congress that “advance the development of Artificial Intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.”
The National Security Commission on Artificial Intelligence’s final report, delivered in 2021, “presented an integrated national strategy to reorganize the government, reorient the nation, and rally our closest allies and partners to defend and compete in the coming era of AI-accelerated competition and conflict.” We must make sure those in charge stay on this.
AI can be scary, but the good news is that, if we are proactive, we can maintain control over how AI advances instead of being vulnerable to forces beyond our control.