Should We Aim For Human-AI Coordination Instead Of Human-AI Confrontation?

Human-AI coordination, and multi-agent coordination more generally, will be at the center of our interactions with machines in the future.

The main challenge is to interpret the actions and intentions of the other agents in the environment and to find a policy that makes the best decisions in response.

The problem with arbitrary conventions is that an agent that relies on them will perform poorly when paired with partners that do not share them.

To give intelligent agents this kind of perception, one line of work that paved the way for current research is the machine theory of mind. The original theory of mind applies to humans: it refers to our ability to represent and reason about the mental states of other humans, including their intentions, desires, and beliefs.

Work on the machine theory of mind has been shown to improve decision-making in complex multi-agent tasks.

It was also shown that meta-learning could be used to furnish an agent with the ability to build flexible and sample-efficient models of others.

In this respect, the pursuit of a machine theory of mind is about building the missing interface between machines and human expectations.

The long-term goal of artificial intelligence is often defined as the ability to solve advanced real-world challenges, and a number of companies such as DeepMind have framed their mission as "solving intelligence."

One of the most impressive recent experiments was OpenAI's Dota 2 agent, OpenAI Five.

Dota 2's rules are complex, and performing at the highest level demands a high degree of coordination and strategy. The game has been actively developed for over a decade, with game logic implemented in hundreds of thousands of lines of code.

One way to train and coordinate agents is self-play (SP): an agent is trained with duplicates of itself, and all the copies learn to work in coordination.
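A toy coordination problem makes this concrete. The sketch below, loosely modeled on the lever game discussed later in this post (the `PAYOFFS` layout and `play` helper are illustrative names, not any paper's actual code), shows why self-play looks perfect during training: both players are literal copies, so any tie-breaking rule coordinates.

```python
# Lever game, illustrative setup: both players pick a lever and score
# that lever's payoff only if they pick the same one.
PAYOFFS = [1.0] * 9 + [0.9]  # nine identical 1.0 levers plus one 0.9 lever

def play(policy_a, policy_b):
    """Return the joint payoff: reward only on coordination."""
    a, b = policy_a(), policy_b()
    return PAYOFFS[a] if a == b else 0.0

def self_play_policy():
    # In self-play the agent is paired with an exact copy of itself,
    # so an arbitrary convention like "always pick lever 2" works.
    return 2

print(play(self_play_policy, self_play_policy))  # 1.0: perfect in self-play
```

The convention "lever 2" is arbitrary: any of the nine 1.0 levers would have done, which is exactly the fragility discussed next.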

This method has a major drawback: fragility. When paired with unfamiliar agents, an SP-trained agent performs poorly because it relies on arbitrary conventions built up with copies of itself.

The Other-Play (OP) method tackles this arbitrary-convention issue by exploiting the symmetries of the problem.

In practice, this means the agent trains against symmetry-transformed versions of itself, and therefore cannot settle on an arbitrary convention. Consider the lever game: several levers pay 1.0 and one pays 0.9. A self-play agent that learns to pick, say, the third 1.0 lever will perform poorly when paired with a partner whose levers are relabeled, because "the third lever" is an arbitrary convention. By contrast, OP converges on the 0.9 lever, the only lever both partners can identify regardless of how the levers are labeled.
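A minimal sketch of this cross-play evaluation, assuming independent random relabelings stand in for the game's symmetries (the names `crossplay_value`, `sp`, and `op` are mine, and real OP trains a policy by gradient descent over randomized symmetries rather than evaluating fixed rules like this):

```python
import random

PAYOFFS = [1.0] * 9 + [0.9]
ODD = 9  # index of the 0.9 lever, the only one identifiable by its payoff

def crossplay_value(choose, trials=5000, seed=0):
    """Average payoff when each player's lever labels are permuted
    independently, mimicking a partner with a different convention."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        perm_a = rng.sample(range(10), 10)  # relabeling seen by player A
        perm_b = rng.sample(range(10), 10)  # relabeling seen by player B
        a, b = perm_a[choose(perm_a)], perm_b[choose(perm_b)]
        total += PAYOFFS[a] if a == b else 0.0
    return total / trials

def sp(perm):
    return 2  # SP convention: "my lever number 2", arbitrary under relabeling

def op(perm):
    return perm.index(ODD)  # OP-style choice: the 0.9 lever, label-invariant

print(crossplay_value(sp))  # roughly 0.1: the copies agree only by chance
print(crossplay_value(op))  # close to 0.9: they always coordinate
```

The permutation passed to `choose` stands in for what the agent can observe about each lever's payoff; the point is that only the 0.9 lever survives relabeling.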

Agent communication is also a key part of this field, and work has already been done on both free and costly communication between agents.

These elements will also play an important part in AI research, since agents will need to communicate before they can coordinate and identify common ground.


Written by Adam Rida
