Scientists Prove that AI has Learned to Manipulate People – Techsplore.


Warnings about the Negative Impact of Generative AI and Chatbots on People
Generative AI, chatbots, and other anthropomorphic assistants have recently been attracting growing attention. However, researchers at the University of Cambridge warn that this trend may open the door to manipulation based on the behavioral and psychological data these systems collect during everyday interactions.
In particular, artificial intelligence can predict and influence decisions such as purchases, political preferences, or personal plans. To do so, AI systems exploit trust and personalized communication. Companies including OpenAI, Meta, Nvidia, and Apple are already developing platforms to identify and predict user intentions.
The scientists, however, emphasize the risks of deploying these technologies without proper regulation. An AI-driven 'intention economy' could undermine the foundations of free choice, an independent press, and fair competition: companies that can most accurately predict and steer human desires and decisions would gain a decisive advantage. The scenario is plausible, and its consequences could be serious.
Major tech companies are already offering platforms that show signs of this trend. Apple's App Intents framework, for instance, is designed to anticipate user actions, while Meta and OpenAI are actively exploring AI's potential to analyze human desires and intentions. The scientists therefore call for greater public awareness and regulation of such tools to prevent possible manipulation and abuse.
It is worth noting that such technologies can also have a positive impact. Ensuring their ethical implementation, however, requires open dialogue and awareness of the potential threats to society.