Knowing less about AI makes people more open to having it in their lives – new research



Chiara Longoni, Bocconi University; Gil Appel, George Washington University, and Stephanie Tully, University of Southern California

The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.

Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.

This link shows up across different groups, settings and even countries. For instance, our analysis of data from market research company Ipsos spanning 27 countries reveals that people in nations with lower average AI literacy are more receptive towards AI adoption than those in nations with higher literacy.

Similarly, our survey of US undergraduate students finds that those with less understanding of AI are more likely to report using it for tasks like academic assignments.

The reason behind this link lies in how AI now performs tasks we once thought only humans could do. When AI creates a piece of art, writes a heartfelt response or plays a musical instrument, it can feel almost magical – like it’s crossing into human territory.

Of course, AI doesn’t actually possess human qualities. A chatbot might generate an empathetic response, but it doesn’t feel empathy. People with more technical knowledge about AI understand this.

They know how algorithms (sets of mathematical rules used by computers to carry out particular tasks), training data (used to improve how an AI system works) and computational models operate. This makes the technology less mysterious.

On the other hand, those with less understanding may see AI as magical and awe inspiring. We suggest this sense of magic makes them more open to using AI tools.

Our studies show this lower literacy-higher receptivity link is strongest for using AI tools in areas people associate with human traits, like providing emotional support or counselling. When it comes to tasks that don’t evoke the same sense of human-like qualities – such as analysing test results – the pattern flips. People with higher AI literacy are more receptive to these uses because they focus on AI’s efficiency, rather than any “magical” qualities.

It’s not about capability, fear or ethics

Interestingly, this link between lower literacy and higher receptivity persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical, and even a bit scary. Their openness to AI seems to stem from their sense of wonder about what it can do, despite these perceived drawbacks.

This finding offers new insights into why people respond so differently to emerging technologies. Some studies suggest consumers favour new tech, a phenomenon called “algorithm appreciation”, while others show scepticism, or “algorithm aversion”. Our research points to perceptions of AI’s “magicalness” as a key factor shaping these reactions.

These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy might unintentionally dampen people’s enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption.

To make the most of AI’s potential, businesses, educators and policymakers need to strike this balance. By understanding how perceptions of “magicalness” shape people’s openness to AI, we can help develop and deploy new AI-based products and services that take the way people view AI into account, and help them understand the benefits and risks of AI.

And ideally, this will happen without causing a loss of the awe that inspires many people to embrace this new technology.

Chiara Longoni, Associate Professor, Marketing and Social Science, Bocconi University; Gil Appel, Assistant Professor of Marketing, School of Business, George Washington University, and Stephanie Tully, Associate Professor of Marketing, USC Marshall School of Business, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.


How the UK’s plans for AI could derail net zero – the numbers explained

Data centres use an enormous amount of electricity for cooling and to power servers. Andia/Alamy Stock Photo

Tom Jackson, Loughborough University and Ian R. Hodgkinson, Loughborough University

The UK government’s goal to increase public-controlled artificial intelligence computing power twentyfold by 2030 would significantly raise electricity demand. Can renewable energy supply meet it – and still have enough left over to electrify sectors like heating and transport, which must be fully decarbonised by 2050?

First, let’s discuss why AI is so energy intensive. AI systems demand a huge amount of computing power. Creating an AI system involves training models and algorithms that must be designed and calibrated over vast datasets, all of which demands computing power. Then, once deployed, the model must draw conclusions from the new data it is fed, which is another energy-intensive process in itself.

The need for more and more computing power has risen sharply as AI has become more sophisticated. Computing power is becoming scarce as a result and is a major bottleneck for the further development and use of AI. Indeed, the UK’s national AI strategy, published in 2021, recognised that computing power capacity must be increased if the potential of AI is to be realised.

The more sophisticated the AI, typically, the more energy intensive it is. This has significant implications for the UK.

How much energy does the AI rollout need?

Data centres (facilities that store, process and distribute data) are a significant and growing consumer of electricity. From training complex AI models, which requires immense computational power and data storage, to running data through trained AI models to make predictions or solve tasks, data centres are central to every stage of AI’s use and development.

According to estimates by the International Energy Agency, data centres globally account for approximately 1%-1.3% of total electricity consumption. One recent observation suggests that developing the most sophisticated AI systems currently requires a fourfold increase in the amount of computing power annually. The total amount of data required for AI training has also risen by 2.5 times a year, increasing reliance on data centres.

Britain’s electricity grid will strain to meet rising demand even without AI. SuxxesPhoto/Shutterstock

In the UK, AI and related infrastructure consumed around 3.6 terawatt-hours (TWh) of electricity in 2020. If this consumption increases twentyfold, as per the government’s target, it could reach 72 TWh by 2030. This would represent over one-quarter of the UK’s total electricity consumption in 2021, which was approximately 261 TWh.
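The projection above is straightforward to verify. A quick sanity check, using only the figures quoted in the article (3.6 TWh baseline, a twentyfold increase, 261 TWh total UK consumption in 2021):

```python
# Back-of-envelope check of the article's figures (all numbers from the text).
baseline_twh = 3.6        # UK AI-related electricity use in 2020 (TWh)
scale_factor = 20         # government's targeted increase by 2030
uk_total_2021_twh = 261   # total UK electricity consumption in 2021 (TWh)

projected_twh = baseline_twh * scale_factor            # 72 TWh
share_of_2021_total = projected_twh / uk_total_2021_twh

print(f"Projected AI demand: {projected_twh:.0f} TWh")
print(f"Share of 2021 consumption: {share_of_2021_total:.1%}")  # ~27.6%
```

At roughly 27.6%, the projected demand is indeed "over one-quarter" of 2021 consumption, as the article states.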

The rapid growth in AI computing requires careful planning. However, data centres are only part of the equation. The devices that use AI, such as sensors in smart homes, gas and electricity meters, routers, wifi hubs, streaming devices and social media platforms, could add significant energy demand that is difficult to estimate.

These additional components of AI’s total energy consumption are often overlooked.

Renewable energy growth is insufficient

The UK has made significant strides in renewable energy production, with wind and solar power contributing over 40% of electricity in recent years.

However, our projections, reported in the journal Energy Policy, indicate that global renewable electricity supply will not meet surging demand from global digital data growth.

Our research considered different scenarios for AI’s energy use. The UK’s target of a twentyfold increase in AI computing power by 2030 is certainly a high-consumption scenario, in which energy demand from digital infrastructure alone could outpace the growth of renewable energy capacity.

At the same time, the UK’s decarbonisation hinges on electrifying transport and heating, sectors traditionally reliant on fossil fuels: replacing natural gas boilers with electric heat pumps and combustion engine cars with electric vehicles. These will require substantial increases in electricity supply.

Britain’s electric vehicle charging network will need to expand to decarbonise transport. Shutterstock

However, solving this problem will not just require expanding renewable energy production. The energy efficiency of AI systems and related technologies must improve too. Ensuring that the energy needed for AI and other digital advancements is sustainably sourced, without compromising broader net zero goals, will require a combination of government policy, technological innovation and public awareness.

AI’s growing electricity needs could exacerbate competition for limited renewable energy resources. This competition risks increasing reliance on fossil fuels, especially during periods of peak energy demand. If additional renewable capacity cannot be deployed quickly enough, the UK might face a scenario where AI-driven electricity demand increases overall emissions rather than reducing them.

The UK’s commitment to a twentyfold increase in public AI computing power by 2030 presents an immense challenge for the country’s electricity system. Meeting this goal sustainably will require balancing AI’s energy needs with broader electrification goals and renewable energy limitations.

Without immediate and concerted efforts to expand renewable energy and improve efficiency, AI’s electricity demands could hinder the transition to a net zero future.




Tom Jackson, Professor of Information and Knowledge Management, Loughborough University and Ian R. Hodgkinson, Professor of Strategy, Loughborough University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


‘Unsettling New Milestone’: Top 12 US Billionaires Now Control $2 Trillion in Wealth


Original article by Eloise Goldsmith republished from Common Dreams under Creative Commons (CC BY-NC-ND 3.0).

Jensen Huang of Nvidia speaks about the future of artificial intelligence and its effect on energy consumption and production at the Bipartisan Policy Center on September 27, 2024 in Washington, DC. (Photo by Chip Somodevilla/Getty Images)

“The oligarchic dozen is richer than ever, and they are endowed with extreme material power that can be used to pursue narrow political interests at the expense of democratic majorities,” according to the author of a new analysis.

Just 12 U.S. billionaires now have a collective net worth of over $2 trillion—a figure that amounts to a little less than a third of total federal spending in 2023—according to an analysis out Tuesday from Inequality.org, a project of the Institute for Policy Studies (IPS).

The $2 trillion number is also twice the amount of wealth that the top 12 US billionaires held in 2020, according to researchers at IPS, a progressive organization.

The full list of 12 billionaires includes Jeff Bezos, Bill Gates, Mark Zuckerberg, Warren Buffett, Elon Musk, Steve Ballmer, Larry Ellison, Larry Page, Sergey Brin, Jim Walton, Rob Walton, and Jensen Huang.

“This is an unsettling new milestone for wealth concentration in the United States. The oligarchic dozen is richer than ever, and they are endowed with extreme material power that can be used to pursue narrow political interests at the expense of democratic majorities,” wrote the author of the analysis, Omar Ocampo, a researcher at IPS.

New to the “oligarchic dozen” is Jensen Huang, the co-founder and CEO of the tech company Nvidia. Nvidia, which became the most valuable publicly traded company this year, has seen its profits jump thanks to the world’s ravenous appetite for the artificial intelligence chips that the firm produces. According to the analysis, Huang’s personal wealth “has skyrocketed from $4.7 billion in 2020 to $122.4 billion—a mind-boggling 2,504 percent increase—over the last four years.”
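The percentage quoted in the analysis can be checked directly from the two wealth figures it gives:

```python
# Verify the "2,504 percent increase" using the figures quoted in the analysis.
wealth_2020_bn = 4.7     # Huang's estimated wealth in 2020 ($bn)
wealth_2024_bn = 122.4   # estimated wealth four years later ($bn)

pct_increase = (wealth_2024_bn - wealth_2020_bn) / wealth_2020_bn * 100
print(f"Increase: {pct_increase:,.0f}%")  # → 2,504%
```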

Each of the billionaires on the list “owns or is a controlling shareholder of a business that is investing billions of dollars in artificial intelligence,” according to Ocampo, which raises concerns about their respective carbon footprints.

Fueling AI is energy intensive, and AI data centers in the U.S. are largely powered by fossil fuels, meaning their proliferation poses a threat to the environment and a transition to a green economy.

Ocampo also discusses the political reach of the billionaires on the list. Elon Musk and Jeff Bezos, who respectively own X and The Washington Post, “have both purchased large media platforms, which has granted them the ability to set the terms of public debate with the hopes of influencing public opinion in their favor.”

Musk specifically has established himself as a major power broker within the GOP. The billionaire spent hundreds of millions helping to re-elect Donald Trump and is now poised to play a major role in the president-elect’s administration, helping oversee a new advisory committee tasked with slashing government spending.

As of early December, Trump had tapped an “unprecedented” total of seven reported billionaires for key positions in his administration, according to a separate piece of analysis by Inequality.org.

“We see the effects of this growing concentration of wealth and economic inequality everywhere—plutocratic influence on our politics, wealth transfers from the bottom to the top, and the acceleration of climate breakdown,” Ocampo wrote on Tuesday.



Global Cooperation Key to Preventing ‘Runaway’ Climate and AI Chaos: UN Chief


Original article by Julia Conley republished from Common Dreams under Creative Commons (CC BY-NC-ND 3.0).

United Nations Secretary-General António Guterres speaks at the U.N. headquarters on February 22, 2023.  (Photo: Lev Radin/Pacific Press/LightRocket via Getty Images)

“Geopolitical divides are preventing us from coming together around global solutions for global challenges,” said United Nations Secretary-General António Guterres.

At the World Economic Forum in Davos, Switzerland on Wednesday, United Nations Secretary-General António Guterres warned that multilateralism that includes often overlooked governments in the Global South is the only solution to the rapidly developing crises posed by the climate emergency and artificial intelligence—both of which are worsening “the global crisis in trust.”

“In the face of the serious, even existential threats posed by runaway climate chaos,” said Guterres, “and the runaway development of artificial intelligence without guardrails, we seem powerless to act together.”

While “droughts, storms, fires, and floods are pummeling countries and communities,” particularly in nations that have contributed the least planet-heating fossil fuel pollution, Guterres told the political and business elite assembled in Davos, “countries remain hellbent on raising emissions.”

He reserved particular scorn for the United States fossil fuel industry, which, amid the Biden administration’s approval of pollution-causing infrastructure including the Willow oil project and the Mountain Valley Pipeline, deceives the public with false climate solutions, misinformation, and greenwashing campaigns “to kneecap progress and keep the oil and gas flowing indefinitely.”

As suffering intensifies in communities that are most vulnerable to drought, damage from extreme weather, and other climate catastrophes, Guterres said, fossil fuel giants and powerful governments are risking lives to only delay an “inevitable” shift to renewable energy.

“The phaseout of fossil fuels is essential,” said the secretary-general. “No amount of spin or scare tactics will change that. Let’s hope it doesn’t come too late.”

As trust between the Global South and wealthy governments is frayed by fossil fuel-producing countries’ refusal to leave oil, gas, and coal behind, Guterres warned that the separate threat of “unintended consequences” of artificial intelligence evolution also looms—for people in rich economies as well as developing countries.

“This technology has enormous potential for sustainable development,” said the U.N. chief, while noting that “some powerful tech companies are already pursuing profits with a clear disregard for human rights, personal privacy, and social impact.”

Guterres’ comments came days after the International Monetary Fund (IMF) released a new analysis of AI’s expected impact on the global economy and workers, with nearly 40% of the labor market expected to be “exposed” to AI.

In wealthy countries, about 60% of jobs are projected to be impacted by AI, and about half of those workers are likely to see at least some of their primary tasks being completed by AI tools like ChatGPT or similar technology, “which could lower labor demand, leading to lower wages, and reduced hiring,” according to the IMF. “In the most extreme cases, some of these jobs may disappear.”

The analysis released Sunday noted that the rapidly changing field could worsen inequality within countries, as some higher earners may be able to “harness AI” and leverage its use for increases in their productivity and pay while those who can’t fall behind.

“In most scenarios, AI will likely worsen overall inequality, a troubling trend that policymakers must proactively address to prevent the technology from further stoking social tensions,” said the IMF. “It is crucial for countries to establish comprehensive social safety nets and offer retraining programs for vulnerable workers.”

Guterres called on policymakers to work closely with the private sector—currently “in the lead on AI expertise and resources”—to “develop a governance model” for AI that is focused on “monitoring and mitigating future harms.”

A systematic effort is also needed, said the secretary-general, “to increase access to AI so that developing economies can benefit from its enormous potential.”

Along with the IMF and Guterres, global human rights group Amnesty International this week raised alarm about AI and the “urgent but difficult task” of regulating the technology, noting that in addition to changing how people and companies work, AI has the potential to be “used as a means of societal control, mass surveillance, and discrimination.”

Police agencies in several countries have begun using AI for so-called “predictive policing,” attempting to prevent crimes before they’re committed, while officials have also deployed automated systems to detect fraud, determine who can and can’t access healthcare and social assistance, and monitor the movements of migrants and refugees.

Amnesty credited the European Union with making headway in regulating AI in 2023, closing out the year by reaching a landmark agreement on the AI Act, which would take steps to protect Europeans from the automation of jobs, the spread of misinformation, and national security threats.

The AI Act, however, has been criticized by rights groups over its failure to ban mass surveillance via live facial recognition tools.

“Others must learn from the E.U. process and ensure there are not loopholes for public and private sector players to circumvent regulatory obligations, and removing any exemptions for AI used within national security or law enforcement is critical to achieving this,” said Amnesty.

In Davos on Wednesday, Guterres expressed hope that policymakers will agree on climate, AI, and other solutions that center human rights in the coming year, including at the U.N.’s Summit of the Future, planned for September.

“These two issues—climate and AI—are exhaustively discussed by governments, by the media, and by leaders here in Davos,” said Guterres. “And yet, we have not yet an effective global strategy to deal with either. And the reason is simple. Geopolitical divides are preventing us from coming together around global solutions for global challenges.”

“The only way to manage this complexity and avoid a slide into chaos,” he said, “is through a reformed, inclusive, networked multilateralism.”

