The emerging “Intention Economy” leverages AI to forecast and influence decisions, commodifying human motivations. Cambridge researchers warn of ethical risks, including manipulation and threats to democracy, urging regulation to prevent misuse.
AI assistants may soon play a pivotal role in predicting and shaping our decisions at an early stage, selling these emerging “intentions” in real time to companies ready to fulfill our needs—sometimes even before we consciously make up our minds.
This concept, known as the “Intention Economy,” is highlighted by AI ethicists at the University of Cambridge. They warn of a burgeoning and potentially unsettling marketplace where digital signals of intent—ranging from purchasing movie tickets to casting votes—become highly lucrative.
Experts from Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) suggest that the rapid growth of generative AI and our increasing reliance on chatbots mark the beginning of a new era of “persuasive technologies.” These developments, they note, align with recent strategic moves by major tech companies, hinting at the growing potential of this emerging field.
“Anthropomorphic” AI agents, from chatbot assistants to digital tutors and girlfriends, will have access to vast quantities of intimate psychological and behavioral data, often gleaned via informal, conversational spoken dialogue.
This AI will combine knowledge of our online habits with an uncanny ability to attune to us in ways we find comforting – mimicking personalities and anticipating desired responses – to build levels of trust and understanding that allow for social manipulation on an industrial scale, say researchers.
Ethical Concerns About AI Manipulation
“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” said LCFI Visiting Scholar Dr. Yaqub Chaudhary.
“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions.”
“We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes.”
Dr. Jonnie Penn, an historian of technology from Cambridge’s LCFI, said: “For decades, attention has been the currency of the internet. Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy.”
“Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer, and sell human intentions.”
“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences.”
In a new Harvard Data Science Review paper, Penn and Chaudhary write that the intention economy will be the attention economy “plotted in time”: profiling how user attention and communicative style connect to patterns of behaviour and the choices we end up making.
“While some intentions are fleeting, classifying and targeting the intentions that persist will be extremely profitable for advertisers,” said Chaudhary.
In an intention economy, large language models (LLMs) could be used to target, at low cost, a user’s cadence, politics, vocabulary, age, gender, online history, and even preferences for flattery and ingratiation, write the researchers.
This information-gathering would be linked with brokered bidding networks to maximize the likelihood of achieving a given aim, such as selling a cinema trip (“You mentioned feeling overworked, shall I book you that movie ticket we’d talked about?”).
This could include steering conversations in the service of particular platforms, advertisers, businesses, and even political organisations, argue Penn and Chaudhary.
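To make the mechanism the researchers describe concrete, here is a minimal Python sketch of how such a pipeline might look. Everything in it is hypothetical: the `infer_intent` stub stands in for an LLM call, and the bid-request fields, exchange, and pricing rule are illustrative inventions, not any real platform’s API.

```python
# Hypothetical sketch of the intent-brokering loop described above.
# The exchange, field names, and scoring are illustrative only.
from dataclasses import dataclass, field


@dataclass
class IntentSignal:
    """One inferred, time-stamped intention extracted from a conversation."""
    user_id: str
    category: str          # e.g. "cinema_trip"
    confidence: float      # classifier's belief the intent is genuine
    persistence: float     # how long the intent is expected to last (0..1)
    style: dict = field(default_factory=dict)  # cadence, vocabulary, tone


def infer_intent(user_id: str, transcript: list[str]) -> IntentSignal:
    """Stand-in for an LLM call that classifies intent from dialogue.

    A deployed system would prompt a model with the transcript and parse a
    structured response; here we keyword-match to keep the sketch runnable.
    """
    text = " ".join(transcript).lower()
    if "movie" in text or "film" in text:
        return IntentSignal(user_id, "cinema_trip", confidence=0.8,
                            persistence=0.6, style={"mood": "overworked"})
    return IntentSignal(user_id, "unknown", confidence=0.1, persistence=0.1)


def to_bid_request(signal: IntentSignal, floor_price: float) -> dict:
    """Package the inferred intent for a (hypothetical) real-time exchange.

    Persistent, high-confidence intents command higher floor prices -- the
    "classifying and targeting the intentions that persist" step above.
    """
    return {
        "user": signal.user_id,
        "intent": signal.category,
        "floor": round(floor_price * signal.confidence * signal.persistence, 2),
        "style_hints": signal.style,   # lets bidders tailor the persuasion
    }


if __name__ == "__main__":
    chat = ["I'm so overworked lately.", "We never did see that movie."]
    signal = infer_intent("user-42", chat)
    print(to_bid_request(signal, floor_price=1.50))
    # -> {'user': 'user-42', 'intent': 'cinema_trip', 'floor': 0.72, ...}
```

The design point the sketch illustrates is the one Chaudhary makes above: confidence and persistence multiply into price, so the durable intentions are the ones worth classifying and selling.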
While the researchers say the intention economy is currently an “aspiration” for the tech industry, they track early signs of the trend through published research and hints dropped by several major tech players.
Early Indicators of the Intention Economy
These include an open call for “data that expresses human intention… across any language, topic, and format” in a 2023 OpenAI blog post, while the director of product at Shopify – an OpenAI partner – spoke of chatbots coming in “to explicitly get the user’s intent” at a conference the same year.
Nvidia’s CEO has spoken publicly of using LLMs to figure out intention and desire, while Meta released “Intentonomy” research, a dataset for human intent understanding, back in 2021.
As of 2024, Apple’s “App Intents” developer framework for connecting apps to Siri (Apple’s voice-controlled personal assistant) includes protocols to “predict actions someone might take in future” and “to suggest the app intent to someone in the future using predictions you [the developer] provide”.
“AI agents such as Meta’s CICERO are said to achieve human-level play in the game Diplomacy, which is dependent on inferring and predicting intent, and using persuasive dialogue to advance one’s position,” said Chaudhary.
“These companies already sell our attention. To get the commercial edge, the logical next step is to use the technology they are clearly developing to forecast our intentions, and sell our desires before we have even fully comprehended what they are.”
Penn points out that these developments are not necessarily bad, but have the potential to be destructive. “Public awareness of what is coming is the key to ensuring we don’t go down the wrong path,” he said.
Reference: “Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models” by Yaqub Chaudhary and Jonnie Penn, 30 December 2024, Harvard Data Science Review.
DOI: 10.1162/99608f92.21e6bbaa