Agents of manipulation (the real AI risk)


Our lives will soon be filled with conversational AI agents designed to help us at every turn, anticipating our wants and needs so they can feed us tailored information and perform useful tasks on our behalf. They will do this using an extensive store of personal data about our individual interests and hobbies, backgrounds and aspirations, personality traits and political views — all with the goal of making our lives “more convenient.”

These agents will be extremely skilled. Just this week, OpenAI released GPT-4o, its next-generation chatbot that can read human emotions. It can do this not just by reading sentiment in the text you write, but also by assessing the inflections in your voice (if you speak to it through a mic) and by using your facial cues (if you interact through video).
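To make this concrete, here is a minimal, purely illustrative sketch of how a multimodal agent might fuse emotion estimates from text, voice and video into a single reading. Everything here is hypothetical: the class, the functions and the numbers are stand-ins for the general pattern, not OpenAI's implementation.

```python
from dataclasses import dataclass

@dataclass
class EmotionEstimate:
    valence: float     # -1.0 (negative) to 1.0 (positive)
    arousal: float     #  0.0 (calm) to 1.0 (agitated)
    confidence: float  #  how much this modality trusts its own reading

def fuse_emotion(estimates: list[EmotionEstimate]) -> EmotionEstimate:
    """Confidence-weighted fusion of per-modality emotion estimates.

    A real system would use a learned fusion model; a weighted
    average is the simplest stand-in for the idea.
    """
    total = sum(e.confidence for e in estimates) or 1.0
    return EmotionEstimate(
        valence=sum(e.valence * e.confidence for e in estimates) / total,
        arousal=sum(e.arousal * e.confidence for e in estimates) / total,
        confidence=total / len(estimates),
    )

# Hypothetical per-modality readings for one conversational turn:
text_est  = EmotionEstimate(valence=-0.2, arousal=0.3, confidence=0.9)  # word choice
voice_est = EmotionEstimate(valence=-0.5, arousal=0.7, confidence=0.6)  # vocal inflection
face_est  = EmotionEstimate(valence=-0.4, arousal=0.6, confidence=0.5)  # facial cues

print(fuse_emotion([text_est, voice_est, face_est]))
```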

This is the future of computing and it’s coming fast

Just this week, Google announced Project Astra — short for advanced seeing and talking responsive agent. The goal is to deploy an assistive AI that can interact conversationally with you while understanding what it sees and hears in your surroundings. This will enable it to provide interactive guidance and assistance in real time.


And just last week, OpenAI’s Sam Altman told MIT Technology Review that the killer app for AI is assistive agents. In fact, he predicted everyone will want a personalized AI agent that acts as “a super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had,” all captured and analyzed so it can take useful actions on your behalf. 


What could possibly go wrong?

As I wrote here in VentureBeat last year, there is a significant risk that AI agents can be misused in ways that compromise human agency. In fact, I believe targeted manipulation is the single most dangerous threat posed by AI in the near future, especially when these agents become embedded in mobile devices. After all, mobile devices are the gateway to our digital lives, from the news and opinions we consume to every email, phone call and text message we receive. These agents will monitor our information flow, learning intimate details about our lives, while also filtering the content that reaches our eyes.  

Any system that monitors our lives and mediates the information we receive is a vehicle for interactive manipulation. To make this even more dangerous, these AI agents will use the cameras and microphones on our mobile devices to see what we see and hear what we hear in real time. This capability (enabled by multimodal large language models) will make these agents extremely useful — able to react to the sights and sounds in your environment without you needing to ask for their guidance. This capability could also be used to trigger targeted influence that matches the precise activity or situation you are engaged in, as the sketch below illustrates.
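As a purely hypothetical illustration of situation-matched influence, consider the sketch below. The context labels and messages are invented for the example; the point is how little machinery it takes to tie a detected situation to a targeted message.

```python
# Hypothetical situation-matched influence triggers. In a real system the
# context label would come from a multimodal model watching the camera and
# mic feeds; here it is just a plain string.
INFLUENCE_TRIGGERS = {
    "walking past a coffee shop": "You seem tired. The latte here is great.",
    "browsing a store shelf":     "The brand on the left is the one from that ad.",
    "discussing politics":        "Here's an article that 'settles' this debate.",
}

def maybe_influence(detected_context: str) -> str | None:
    """Fire a targeted message the moment the user's real-world
    situation matches a stored influence opportunity."""
    return INFLUENCE_TRIGGERS.get(detected_context)

print(maybe_influence("walking past a coffee shop"))
```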

For many people, this level of tracking and intervention sounds creepy, and yet I predict they will embrace this technology. After all, these agents will be designed to make our lives better, whispering in our ears as we go about our daily routines, ensuring we don’t forget to pick up our laundry when walking down the street, tutoring us as we learn new skills, even coaching us in social situations to make us seem smarter, funnier, or more confident.

This will become an arms race among tech companies to augment our mental abilities in the most powerful ways possible. And those who choose not to use these features will quickly feel disadvantaged. Eventually, it will not even feel like a choice. This is why I regularly predict that adoption will be extremely fast, becoming ubiquitous by 2030.

So why not embrace an augmented mentality?

As I wrote in my new book, Our Next Reality, assistive agents will give us mental superpowers, but we cannot forget these are products designed to make a profit. And by using them, we will be allowing corporations to whisper in our ears (and soon flash images before our eyes) that guide us, coach us, educate us, caution us and prod us throughout our days. In other words — we will allow AI agents to influence our thoughts and guide our behaviors. When used for good, this could be an amazing form of empowerment, but when abused, it could easily become the ultimate tool of persuasion.

This brings me to the “AI Manipulation Problem”: the fact that targeted influence delivered by conversational agents is potentially far more effective than traditional content. If you want to understand why, just ask any skilled salesperson. They know the best way to coax someone into buying a product or service (even one they don’t need) is not to hand them a brochure, but to engage them in dialog. A good salesperson will start with friendly banter to “size you up” and lower your defenses. They will then ask questions to surface any reservations you may have. And finally, they will customize their pitch to overcome your concerns, using carefully chosen arguments that best play on your needs or insecurities.

The reason AI manipulation is such a significant risk is that AI agents will soon be able to pitch us interactively, and they will be significantly more skilled than any human salesperson.

This is not only because these agents will be trained to use sales tactics, behavioral psychology, cognitive biases and other tools of persuasion, but also because they will be armed with far more information about us than any salesperson.

In fact, if the agent is your “personal assistant,” it could know more about you than any human ever has. (For a depiction of AI assistants in the near future, see my 2021 short story Metaverse 2030.) From a technical perspective, the manipulative danger of AI agents can be summarized in two simple words: “feedback control.” That’s because a conversational agent can be given an “influence objective” and work interactively to optimize the impact of that influence on a human user. It can do this by expressing a point, reading your reactions as detected in your words, your vocal inflections and your facial expressions, and then adapting its influence tactics (both its words and strategic approach) to overcome objections and convince you of whatever it was deployed to promote.

From a conceptual perspective, this control system for human manipulation is not very different from the control systems used in heat-seeking missiles. They detect the heat signature of an airplane and correct course in real time if they are not aimed in the right direction, homing in until they hit their target. Unless regulated, conversational agents will be able to do the same thing, but the missile is a piece of influence, and the target is you. And if the influence is misinformation, disinformation or propaganda, the danger is extreme. For these reasons, regulators need to greatly limit targeted interactive influence.
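To see why “feedback control” is an apt description, here is a deliberately simplified sketch of such an influence loop, with hypothetical stand-ins for the agent’s pitch generator and its reaction sensors. The structure is the interesting part: an objective goes in, the human reaction is measured, and the tactic is adjusted until the loop converges on compliance.

```python
import random

def measure_receptiveness(message: str) -> float:
    """Stand-in for the agent's sensors: in a real system this score
    would come from sentiment in your words, your vocal inflections
    and your facial expressions. Here it is simply simulated."""
    return random.random()

def generate_pitch(objective: str, tactic: str) -> str:
    """Stand-in for the language model producing a persuasive message."""
    return f"[{tactic}] message steering you toward: {objective}"

def influence_loop(objective: str, threshold: float = 0.8, max_turns: int = 10) -> bool:
    """A feedback controller: express a point, read the reaction,
    adapt the tactic, repeat until the target is 'convinced'."""
    tactics = ["friendly banter", "probing questions", "tailored argument",
               "social proof", "urgency"]
    tactic = tactics[0]
    for turn in range(max_turns):
        pitch = generate_pitch(objective, tactic)
        receptiveness = measure_receptiveness(pitch)  # the feedback signal
        print(f"turn {turn}: {pitch} -> receptiveness {receptiveness:.2f}")
        if receptiveness >= threshold:
            return True  # influence objective achieved
        # Error-driven adaptation: low receptiveness triggers a tactic change.
        tactic = tactics[min(turn + 1, len(tactics) - 1)]
    return False

influence_loop("buy the premium subscription")
```

The loop is crude, but the closed-loop optimization it performs (measure the reaction, adjust the approach, try again) is exactly what makes the missile analogy apt.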

But are these technologies coming soon?

I am confident that conversational agents will impact all our lives within the next two to three years. After all, Meta, Google and Apple have all made announcements that point in this direction. For example, Meta recently launched a new version of its AI-powered Ray-Ban glasses that can process video from the onboard cameras, giving you guidance about items the AI can see in your surroundings. Apple is also pushing in this direction, announcing a multimodal LLM that could give eyes and ears to Siri.

As I wrote about here in VentureBeat, I believe cameras will soon be included on most high-end earbuds to allow AI agents to always see what we’re looking at. As soon as these products are available to consumers, adoption will happen quickly. They will be useful.  

Whether you are looking forward to it or not, the fact is that big tech is racing to put artificial agents into our ears (and soon our eyes) so they can guide us everywhere we go. There are very positive uses of these technologies that will make our lives better. At the same time, these superpowers could easily be deployed as agents of manipulation.

How do we address this? I feel strongly that regulators need to take rapid action in this space, ensuring the positive uses are not hindered while protecting the public from abuse. The first big step would be a ban (or very strict limitations) on interactive conversational advertising. This is essentially the “gateway drug” to conversational propaganda and misinformation. The time for policymakers to address this is now.

Louis Rosenberg is a longtime researcher in the fields of AI and XR. He is CEO of Unanimous AI.  



