Toward General AI
Artificial intelligence (AI) has advanced in recent years to the point where an AI system can master complex games such as chess and Go given nothing more than the basic rules. Through trial and error, these systems use reinforcement learning (RL) to play round after round, ultimately gaining enough skill to beat even world-champion programs. Their drawback, however, has been an inability to learn more than one game at a time without repeating the entire RL process.
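The trial-and-error loop described above can be illustrated with a minimal sketch. The toy corridor game, reward values, and hyperparameters below are illustrative assumptions, not any particular system's setup; the sketch uses tabular Q-learning, one of the simplest RL algorithms.

```python
import random

# Minimal tabular Q-learning sketch of trial-and-error RL on a toy
# 1-D corridor game: the agent starts at cell 0 and earns a reward
# only for reaching the goal cell. All names and numbers here are
# illustrative assumptions.

GOAL = 4                # rightmost cell; reaching it ends an episode
ACTIONS = (-1, +1)      # step left or right

def step(state, action):
    """Apply an action; reward 1.0 only upon reaching the goal."""
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Trial and error: explore randomly with probability epsilon,
            # otherwise exploit the best value learned so far.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (
                reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

if __name__ == "__main__":
    q = train()
    # After many rounds, the greedy policy moves right in every cell.
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
    print(policy)
```

Note that the learned value table is specific to this one game: changing the rules would require repeating the whole training loop, which is exactly the limitation the article describes.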
A newly developed game environment called XLand may be the next step toward AI agents that adapt when presented with new tasks or conditions of play, using a process known as deep RL. The dynamically changing environment lets agents improve their capabilities based on the goals of the game currently being played and on their relative performance, and the inclusion of multiple players even allows agents to build on each other's learning. Although each agent starts from scratch, the progression is open-ended and virtually unlimited.
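The idea of one agent learning across a changing distribution of tasks can be sketched as follows. This is a loose, hedged illustration only: the 1-D environment, the per-episode goal sampling, and the goal-conditioned value table are assumptions for the sketch, not DeepMind's XLand implementation.

```python
import random

# Sketch of multi-task trial-and-error learning: each episode samples
# a new goal on a 1-D line, loosely echoing how a changing environment
# varies the conditions of play. All parameters are illustrative.

SIZE = 5
ACTIONS = (-1, +1)

def train_multitask(episodes=3000, alpha=0.5, gamma=0.9,
                    epsilon=0.3, seed=1):
    rng = random.Random(seed)
    # The value table is conditioned on the current goal, so a single
    # agent covers every task in the distribution rather than needing
    # to be retrained per task.
    q = {}

    def qval(goal, s, a):
        return q.get((goal, s, a), 0.0)

    for _ in range(episodes):
        goal = rng.randrange(SIZE)       # a new task every episode
        state = rng.randrange(SIZE)      # a new starting condition
        for _ in range(20):              # step budget per episode
            if state == goal:
                break
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: qval(goal, state, a))
            nxt = min(max(state + action, 0), SIZE - 1)
            reward = 1.0 if nxt == goal else 0.0
            best = max(qval(goal, nxt, a) for a in ACTIONS)
            q[(goal, state, action)] = qval(goal, state, action) + alpha * (
                reward + gamma * best - qval(goal, state, action))
            state = nxt
    return q
```

Because the agent conditions its behavior on the current goal, it can act sensibly when a new episode presents a different task, which is the adaptation property the paragraph above describes (real systems use deep neural networks rather than a lookup table).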
After training on approximately 700,000 games across 4,000 worlds, with each agent experiencing an estimated 200 billion training steps, the developers have observed a variety of behaviors in XLand agents, including experimentation, tool use, and cooperation, with a trend toward more generally capable agents.
For information: DeepMind; website: https://deepmind.com/ or https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play