Learning for General Competency in Video Games
AAAI 2015 Workshop, January 26, 2015, Austin, Texas, USA
Call for Papers
Recent years have seen a surge of interest in video game platforms as a source of challenging AI domains. The Atari 2600, for example, offers hundreds of independently designed games drawn from a variety of genres. Through this variety, video game platforms offer the opportunity to truly test the general competency of learning agents. Unresolved challenges in these domains include learning dynamical models from high-dimensional visual observations, learning concise state representations, and exploring efficiently when rewards are sparse.
The aim of this workshop is to accelerate the dissemination of interesting approaches, engineering techniques and lessons learned concerning the Atari 2600 and other video game domains. A portion of the workshop will also be devoted to a panel discussing evaluation standards to assist in reproducibility and comparability between different research groups.
We encourage the submission of both original and incremental work as well as the presentation of interesting engineering results, whether positive or negative. The workshop will combine oral presentations, short technical presentations, panel discussions and invited talks from researchers actively investigating general competency for video games.
- Full submissions (4-8 pages): Published or unpublished work applied to the Atari 2600 or other video game domains requiring general competency. Accepted work will be allotted 20 minutes for presentation, including questions.
- Surprising technical results (1-2 pages): Participants are encouraged to present ideas, algorithms, and tricks that should have worked but did not, as well as methods that curiously fail to generalize beyond a handful of games. Accepted abstracts will be allotted 5 minutes for presentation.
- Discussion material (1-2 pages): Participants are encouraged to submit topic suggestions and opinion pieces on the nature of general competency in video games, for example on the design of evaluation mechanisms. These will form the basis of the discussion session.
Submissions should be sent to firstname.lastname@example.org. Please prepare your papers using the AAAI style files (http://www.aaai.org/).
Relevant topics include, but are not limited to:
- Representation learning
- Model learning
- Simulation-based planning
- Transfer learning
- Apprenticeship and imitation learning
- Intrinsic motivation
- Subgoal discovery
- Skill acquisition
- Exploration in large state spaces
- Feature selection
- Hierarchical reinforcement learning
- Submission deadline: November 1, 2014
- Notification of acceptance: November 14, 2014
- Workshop date: January 26, 2015
09:00 – 09:05: Welcome
09:05 – 09:45: Invited Talk 1: Michael Bowling
09:45 – 10:25: Oral Presentations (2 papers; 20 minutes each)
10:30 – 11:00: Coffee Break
11:00 – 11:50: Invited Talk 2: Joel Veness & Marc Bellemare
11:50 – 12:15: Blitz Session 1 (5 papers; 5 minutes each)
14:00 – 14:50: Invited Talk 3: Peter Stone & Matthew Hausknecht
14:50 – 15:30: Evaluation in the ALE
15:30 – 16:00: Coffee Break
16:00 – 16:25: Blitz Session 2 (5 papers; 5 minutes each)
16:30 – 17:30: Future Directions
Invited Speakers:
- Peter Stone and Matthew Hausknecht, University of Texas at Austin
- Marc G. Bellemare and Joel Veness, Google DeepMind
- Michael Bowling, University of Alberta
Oral Presentations:
- N. Lipovetzky, M. Ramirez, H. Geffner: Classical Planning Algorithms on the Atari Video Games: Preliminary Results
- X. Guo, S. Singh, H. Lee, R. Lewis, X. Wang: Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning
Blitz Session 1:
- A. Braylan, M. Hollenbeck, E. Meyerson, R. Miikkulainen: Frame Skip is a Powerful Parameter for Learning to Play Atari
- M. Hausknecht, P. Stone: The Impact of Determinism on Learning Atari 2600 Games
- M. C. Machado, S. Srinivasan, M. Bowling: Domain-Independent Optimistic Initialization for Reinforcement Learning
- V. Marivate, M. L. Littman: Reinforcement-Learning Evaluation for Better Generalization
- E. Talvitie, M. Bowling: Pairwise Relative Offset Features for Atari 2600 Games
Blitz Session 2:
- G. V. de la Cruz Jr., B. Peng, W. S. Lasecki, M. E. Taylor: Generating Real-Time Crowd Advice to Improve Reinforcement Learning Agents
- T. A. Mann, D. J. Mankowitz, S. Mannor: Learning when to Switch between Skills in a High Dimensional Domain
- D. Markovikj, M. Bogdanovic, N. de Freitas, M. Denil: Deep Apprenticeship Learning for Playing Video Games
- V. Nagarajan, L. S. Marcolino, M. Tambe: Every Team Makes Mistakes: An Initial Report on Predicting Failure in Teamwork
- B. A. Pires, C. Szepesvari: Pathological effects of variance on classification-based policy iteration
Organizers:
- Michael Bowling, University of Alberta
- Marc G. Bellemare, Google DeepMind
- Erik Talvitie, Franklin & Marshall College
- Joel Veness, Google DeepMind
- Marlos C. Machado, University of Alberta