Keiki: Towards Realistic Danmaku Generation via Sequential GANs

07/07/2021 ∙ by Ziqi Wang, et al. ∙ University of Malta

Search-based procedural content generation methods have recently been introduced for the autonomous creation of bullet hell games. Search-based methods, however, can hardly model patterns of danmakus – the bullet hell shooting entity – explicitly, and the resulting levels often look unrealistic. In this paper, we present a novel bullet hell game platform named Keiki, which allows the representation of danmakus as a parametric sequence that, in turn, can model the sequential behaviours of danmakus. We employ three types of generative adversarial networks (GANs) and test them on Keiki across three metrics designed to quantify the quality of the generated danmakus. The time-series GAN and periodic spatial GAN show different yet competitive performance in terms of the evaluation metrics adopted, their deviation from human-designed danmakus, and the diversity of generated danmakus. The preliminary experimental studies presented here showcase the potential of time-series GANs for sequential content generation in games.


I Introduction

Bullet hell is a game genre in which a player survives by dodging overwhelming numbers of enemy projectiles and scores by shooting enemies down. The core component of a bullet hell game level is a “danmaku”, which refers to the game entity (agent or opponent) that creates bullets and determines their trajectories. Danmakus in bullet hell games are typically controlled by a set of human-designed rules, aiming at the elicitation of challenging and engaging experiences for the player.

(a) Touhou Project series (Team Shanghai Alice, since 2003).
(b) Talakat: Figure 6 from [8] with authors’ permission.
(c) Keiki: danmakus generated by deep convolutional GAN.
(d) Keiki: danmakus generated by periodic spatial GAN.
(e) Keiki: danmakus generated by time-series GAN.
Fig. 1: Examples of danmakus of commercial games vs. danmakus generated by Talakat and by our GAN generators in Keiki (cf. Section III).

Over the last decade, a number of search-based procedural content generation (PCG) approaches have been applied widely to generate levels for platformer games, dungeon-like games and shooting games [18, 17, 14, 13, 16, 10]; the generation of bullet hell levels, however, has only been explored recently. Talakat [8] first applied PCG to generate bullet hell levels by searching a human-designed level space represented as grammar-based text, and designed several metrics and constraints to guide the search, although the generated danmakus look rather dissimilar to the regular ones of commercial games (cf. Figures 1(a) and 1(b)).

Studies linked to bullet hell generation include weapon generation and rhythm game generation. In particular, Hastings et al. [5] evolved particle-based weapons as compositional pattern producing networks in an online manner, and Hoover et al. [6] introduced a similar generative system that considers the background music of the game for weapon particle generation. Constrained surprise search was introduced in [4] to generate weapons in first-person shooting games. In [9], a novel neural network structure combining a convolutional neural network (CNN) and a recurrent neural network (RNN) was designed to generate rhythm game levels with a specific degree of difficulty.

By observing the indicative outcomes of Figures 1(a) and 1(b), it appears that search-based PCG methods [18] reach a certain level of quality compared to actual bullet hell levels [8]. Through mere observation, it also seems that existing state-of-the-art approaches can hardly model the implicit danmaku patterns present in commercial-standard games. In this paper, we present Keiki¹ (cf. Section II) as a complementary bullet hell game platform that can represent danmakus as parametric sequences and can support neural network-based generative methods. Keiki currently employs three types of generative adversarial networks (GANs) [3]. The various GAN models are evaluated across three metrics designed to quantify the quality of the generated danmakus.

¹The name “Keiki” comes from a character in Touhou Kikeijuu Wily Beast and Weakest Creature (Team Shanghai Alice, 2019).

II Keiki: Bullet Hell Platform Generator

The Keiki platform provides functions including danmaku design, encoding, evaluation and basic gameplay. The source code of Keiki is available on GitHub², including the training data and all image assets used in this work. Due to the page limit, we only describe the danmaku encoding in what follows.

²https://github.com/PneuC/Keiki

The design of a danmaku can be seen as specifying a tuple $\langle S, \boldsymbol{\theta} \rangle$, where $S$ refers to the shooting rules and the vector $\boldsymbol{\theta}$ specifies the values of $S$'s control parameters, usually defined in the construction method of the implementation class. At the $t^{\text{th}}$ frame, the danmaku calls the implemented bullet builder several times according to its shooting rules ($S$) and parameter values ($\boldsymbol{\theta}$). This process is denoted by $\{\boldsymbol{b}_t^1, \dots, \boldsymbol{b}_t^{n_t}\}$, where $n_t$ is the number of times that this danmaku calls the bullet builder at the $t^{\text{th}}$ frame and $\boldsymbol{b}_t^i$ is the parameter vector used at the $i^{\text{th}}$ call. When shooting multiple times is desired (e.g., $N$ times), the bullet builder will be repeatedly called for $t = 1, 2, \dots$ until a frame $T$ such that $\sum_{t=1}^{T} n_t = N$ is satisfied. Thus, the length of the parametric sequence to be generated is $N$. To reduce the dimensionality and data redundancy, the data compression method proposed in [11] is used, described as follows: a parameter $itv$ (short for interval, corresponding to the time elapsed since the previous event in [11]), representing the time passed since the last call of the bullet builder, is added to the parametric sequence.
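To make the encoding concrete, the following is a minimal Python sketch of how per-frame bullet-builder calls could be flattened into such a parametric sequence with an interval parameter. All names (`BulletCall`, `encode`) are illustrative and not taken from the Keiki code base.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BulletCall:
    """One bullet-builder call: the interval plus the call's parameter vector."""
    itv: int  # frames elapsed since the previous bullet-builder call
    params: List[float] = field(default_factory=list)  # e.g., angle, speed, radius

def encode(calls_per_frame: List[List[List[float]]]) -> List[BulletCall]:
    """Flatten per-frame builder calls into one parametric sequence of length N.

    calls_per_frame[t] holds the parameter vectors of all builder calls at
    frame t; frames without calls contribute nothing, since the elapsed time
    is stored in the itv field rather than as padded empty time steps.
    """
    sequence: List[BulletCall] = []
    last_t = 0
    for t, calls in enumerate(calls_per_frame):
        for p in calls:
            sequence.append(BulletCall(itv=t - last_t, params=list(p)))
            last_t = t
    return sequence
```

Storing the interval instead of one entry per frame is what removes the redundancy: frames in which nothing is shot never appear in the sequence.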

III Danmaku Generation

In Section III-A we outline the three GAN methods used for danmaku generation. Section III-B introduces the metrics used to evaluate the generated danmakus. Our preliminary experimental results are presented and discussed in Section III-C.

III-A Danmaku Generation with GANs

Deep convolutional GANs, periodic spatial GANs and time-series GANs are considered separately and employed to generate parametric sequences of length 64. Generators are trained on a dataset of 34 danmakus implemented by the first author of this paper. These danmakus are mainly imitations of danmakus found in the Touhou Project³ series.

³https://touhou.fandom.com/wiki/Touhou_Project

During training, data augmentation is applied as follows. Each time a danmaku is loaded from the training data, a Gaussian mutation is added to its parameters, i.e., $\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} + \boldsymbol{\epsilon}$ with $\boldsymbol{\epsilon} \sim \mathcal{N}(\boldsymbol{0}, \sigma^2\boldsymbol{I})$, before feeding it into the model.
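As a sketch, the augmentation step might look as follows in Python; the mutation strength `sigma` is a hypothetical placeholder, since the value used in our experiments is not stated here.

```python
import numpy as np

def augment(seq: np.ndarray, sigma: float = 0.05) -> np.ndarray:
    """Apply a Gaussian mutation to a parametric sequence of shape (N, n_params).

    sigma is a placeholder mutation strength, not the tuned value.
    """
    return seq + np.random.normal(0.0, sigma, size=seq.shape)
```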

III-A1 Time-series GAN

We implement the time-series GAN (TimeGAN) [20] as an alternative PCGML approach, as it combines an autoencoder and a supervised loss to help learn temporal dynamics from the training data. TimeGAN involves not only a generator and a discriminator, but also a pair of an embedder (encoder) and a reconstructor (decoder), trained jointly with a reconstruction loss, a supervised loss and an adversarial loss. For all these models in TimeGAN, a 3-layer stacked LSTM with 128 hidden units in each layer is used and the logistic function is employed in the output layer. The dimensionality of the autoencoder's hidden space is set as 24. The noise input to TimeGAN is composed of a global noise $\boldsymbol{z}^g$ and a periodic noise $\boldsymbol{z}^p_t$. For any time step $t$, the input noise is $\boldsymbol{z}_t = [\boldsymbol{z}^g; \boldsymbol{z}^p_t]$, where $[\cdot\,;\cdot]$ denotes concatenation. $\boldsymbol{z}^g$ is sampled once and duplicated $l$ times, where $l$ is the spatial length of the noise, while $\boldsymbol{z}^p_t$ is computed as $\boldsymbol{z}^p_t = \sin\big(\omega(\boldsymbol{z}^g)\, t + \varphi(\boldsymbol{z}^g)\big)$, where $\omega$ and $\varphi$ are two multi-layer perceptrons.
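A minimal PyTorch sketch of this noise construction follows, assuming `omega` and `phi` are small MLPs that map the global noise to per-channel frequencies and phases; the function and variable names are ours for illustration, not Keiki's.

```python
import torch

def sample_noise(batch: int, l: int, g_dim: int, omega, phi) -> torch.Tensor:
    """Build the concatenated [global ; periodic] noise of spatial length l.

    omega and phi map a (batch, g_dim) global noise to (batch, p_dim)
    frequencies and phases; all shapes and names are illustrative.
    """
    z_g = torch.randn(batch, g_dim)                                 # sampled once
    t = torch.arange(1, l + 1, dtype=torch.float32).view(1, l, 1)   # time steps
    z_p = torch.sin(omega(z_g).unsqueeze(1) * t + phi(z_g).unsqueeze(1))
    z_g = z_g.unsqueeze(1).expand(-1, l, -1)                        # duplicated l times
    return torch.cat([z_g, z_p], dim=-1)                            # (batch, l, g_dim + p_dim)
```

In the simplest instantiation, `omega` and `phi` could each be a `torch.nn.Linear(g_dim, p_dim)`, so the wave frequencies and phases are conditioned on the global noise.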

III-A2 Periodic spatial GAN

The periodic spatial GAN [1] with sequential noise input is employed as our second, alternative, generative model. The noise input to the periodic spatial GAN is the same as that of TimeGAN. We use four one-dimensional convolutions with no padding; the kernel size and stride of each layer are chosen such that the generator creates sequences of length 64 when the input noise has spatial length $l$, making it directly comparable with the other employed GAN architectures. The structure of the discriminator is symmetric to the generator. Both the generator and the discriminator use the logistic activation function in the final layer and the ReLU activation function in all other layers.
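The following PyTorch sketch illustrates a generator of this kind. Since the exact kernel sizes, strides and channel widths are not reported above, the values below, as well as the use of transposed convolutions to upsample the noise, are assumptions for illustration only.

```python
import torch.nn as nn

def make_generator(noise_dim: int, param_dim: int) -> nn.Sequential:
    """Four 1D (transposed) convolutions, no padding, ReLU in hidden layers
    and a logistic (sigmoid) output layer. Kernel size, stride and hidden
    width are placeholders, not the tuned values from this work."""
    k, s, h = 2, 2, 128  # hypothetical kernel size, stride and hidden width
    return nn.Sequential(
        nn.ConvTranspose1d(noise_dim, h, k, s), nn.ReLU(),
        nn.ConvTranspose1d(h, h, k, s), nn.ReLU(),
        nn.ConvTranspose1d(h, h, k, s), nn.ReLU(),
        nn.ConvTranspose1d(h, param_dim, k, s), nn.Sigmoid(),
    )
```

With these placeholder values, each layer doubles the spatial length, so noise of spatial length 4 yields sequences of length 64; the input is expected in channels-first form, i.e., shape (batch, noise_dim, l).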

III-A3 Deep convolutional GAN

A vanilla deep convolutional GAN (DCGAN) [12] is implemented as a non-sequential baseline. We use five one-dimensional transposed convolutional layers in the generator; the first layer uses a different kernel size, stride and padding than the remaining layers, and the number of output channels is largest in the first layer and is reduced at each layer that follows. The structure of the discriminator is also symmetric to the generator, and the activation functions are the same as the ones used in the periodic spatial GAN.

III-B Evaluation Metrics

Three feature-based metrics are designed to evaluate a generated danmaku $d$. The first one is the shooting frequency, $f(d)$, which determines whether all bullets are shot in a very short period or progressively:

$f(d) = \frac{N}{T}$ (1)

where $N$ is the number of shooting times (i.e., the length of the parametric sequence) and $T$ is the danmaku's duration.

The second metric used is the mean momentum, defined as the sum of the momenta of all bullets on the screen, averaged over time. Let $\mathcal{B}_t$ be the set of all the bullets at the $t^{\text{th}}$ frame. For any bullet $b$, $w_b$ and $v_b$ denote respectively its weight (a scalar value proportional to the size of its image) and its speed. The mean momentum of danmaku $d$ is calculated as:

$m(d) = \frac{1}{T} \sum_{t=1}^{T} \sum_{b \in \mathcal{B}_t} w_b\, v_b$ (2)

where $T$ is the danmaku's duration.

Coverage, $c(d)$, is another metric used to estimate the degree of game difficulty. The game screen is split into a grid of equally sized pixel regions and the percentage of regions that are covered by any bullet at any frame is computed. Let $H$ and $W$ be the number of rows and columns of the regions, respectively, and let $\mathrm{cov}_t(i,j)$ denote the proposition that the region at position $(i,j)$ is covered by at least one bullet at the $t^{\text{th}}$ frame. The coverage metric is computed as follows:

$c(d) = \frac{1}{HW} \sum_{i=1}^{H} \sum_{j=1}^{W} \mathbb{1}\big[\exists\, t \in \{1,\dots,T\} : \mathrm{cov}_t(i,j)\big]$ (3)

where $\mathbb{1}[\cdot]$ is $1$ if the proposition holds, and $0$ otherwise. Higher values imply more difficult levels.

Although the aforementioned metrics may not suffice to determine if generated danmakus are of good quality, they can be used to quantify the similarity between real and generated danmakus with distance measures, such as those of the KL-divergence family.
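As a direct Python transcription of Equations (1)-(3), the three metrics could be computed as sketched below; the data layouts (lists of (weight, speed) pairs, a boolean coverage tensor) are our own choices for illustration, not the platform's actual interfaces.

```python
import numpy as np

def shooting_frequency(n_shots: int, duration: int) -> float:
    """Eq. (1): bullet-builder calls per frame over the danmaku's duration."""
    return n_shots / duration

def mean_momentum(frames: list) -> float:
    """Eq. (2): per-frame sum of weight * speed over all on-screen bullets,
    averaged over the duration. frames[t] is a list of (weight, speed) pairs."""
    return sum(sum(w * v for w, v in bullets) for bullets in frames) / len(frames)

def coverage(covered: np.ndarray) -> float:
    """Eq. (3): fraction of screen regions covered by at least one bullet
    at any frame. covered is a boolean array of shape (T, H, W)."""
    return covered.any(axis=0).mean()
```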

III-C Experimental Study and Discussion

After preliminary hyper-parameter tuning, the following settings are used in our experiments. The same batch size is used for all GANs. One learning rate is used for the DCGAN and the periodic spatial GAN, while a different one is set for the TimeGAN. We optimise the DCGAN and the periodic spatial GAN via Adam and the TimeGAN via RMSprop. The DCGAN and periodic spatial GAN are each trained for a fixed number of iterations. The autoencoder of TimeGAN is first pre-trained, then the generator is trained with the supervised loss only; finally, all the models are trained jointly.

Figure 2 shows how the values of the metrics introduced in Section III-B change during training. The DCGAN performs the worst, since its metric values are the furthest from those of the real data. The periodic spatial GAN, on the other hand, shows more stable performance than the TimeGAN. The instability of TimeGAN in our experiments may be explained by its complex architecture which, in turn, calls for larger amounts of training data. According to Figure 2, however, the TimeGAN approach has the highest standard deviation across all metrics, which implies the highest diversity. The other GANs appear to suffer from mode collapse, i.e., they only generate data of one or a limited number of patterns.

Fig. 2: Comparison of GANs on shooting frequency (top), mean momentum (middle) and coverage (bottom) during training. Every 20 iterations, the trained GAN generates 30 samples for evaluation. Solid curves depict the mean value over the 30 samples and shaded areas illustrate the standard deviation.

Figures 1(c), 1(d) and 1(e) show indicative examples of danmakus generated by the different GANs. The DCGAN generates almost identical danmakus; the periodic spatial GAN generates danmakus of similar patterns, whereas the TimeGAN appears to generate far more diverse danmakus compared to the other two GANs.

IV Further Discussion

Generating danmakus via PCGML [16] using GANs is a challenging task, especially when training data is not readily available. Bootstrapping methods [19] or PCG via reinforcement learning (PCGRL) [7, 15] are worth investigating to overcome this challenge, but collecting more human-designed danmakus as training data is also important. While implementing and coding danmakus is time-consuming, videos of danmakus are easier to obtain from real games; thus, learning to represent danmakus directly from videos appears to be a promising future direction. Designing alternative GAN architectures is another important avenue for future work.

Another important direction for future research is agent-based evaluation. An agent has already been implemented in Keiki. Agent-based testing, however, is currently inefficient due to slow simulation times. In bullet hell games, hundreds of bullets may exist simultaneously at each frame; in Keiki, bullets' moving directions can vary depending on the player's current or previous positions, which makes real-time simulation CPU-intensive. If more efficient agents are implemented or the game simulation can eventually be accelerated, we plan to add constraints regarding the playability of generated danmakus by using techniques such as constrained adversarial networks [2].

The three metrics currently used in this work are intuitively designed. Designing new evaluation metrics and employing other PCG methods, such as that of [8], as baselines for the Keiki platform define top priorities for future work.

V Conclusion

In this paper, we presented Keiki, a novel bullet hell platform which allows encoding danmakus into parametric sequences so that various artificial level generators can be used to generate danmakus. We also introduced GANs [3] as an alternative PCGML [16] mechanism that learns to represent human-designed danmakus, and hence generates more realistic content (cf. Section III), and employed three different GAN architectures to generate them. Experimental results on Keiki show that both the TimeGAN and the periodic spatial GAN have the potential to generate realistic danmakus. The TimeGAN, in particular, generated more diverse danmakus, while the periodic spatial GAN is not sensitive to the length of the parametric sequence since it is based on CNNs. Danmaku generation with GANs can be further enhanced through agent-based evaluation and constrained optimisation to ensure playability and yield more realistic danmakus that feature particular human-designed patterns or features.

Acknowledgement

The authors thank the reviewers for their careful reviews and insightful comments.

References

  • [1] U. Bergmann, N. Jetchev, and R. Vollgraf (2017) Learning texture manifolds with the periodic spatial GAN. In ICML. Cited by: §III-A2.
  • [2] L. Di Liello, P. Ardino, J. Gobbi, P. Morettin, S. Teso, and A. Passerini (2020) Efficient generation of structured objects with constrained adversarial networks. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33, pp. 14663–14674. Cited by: §IV.
  • [3] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, Cambridge, MA, USA, pp. 2672–2680. Cited by: §I, §V.
  • [4] D. Gravina, A. Liapis, and G. N. Yannakakis (2016) Constrained surprise search for content generation. In 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1–8. Cited by: §I.
  • [5] E. J. Hastings, R. K. Guha, and K. O. Stanley (2009) Evolving content in the galactic arms race video game. In 2009 IEEE Symposium on Computational Intelligence and Games, pp. 241–248. Cited by: §I.
  • [6] A. K. Hoover, W. Cachia, A. Liapis, and G. N. Yannakakis (2015) Audioinspace: exploring the creative fusion of generative audio, visuals and gameplay. In International Conference on Evolutionary and Biologically Inspired Music and Art, pp. 101–112. Cited by: §I.
  • [7] A. Khalifa, P. Bontrager, S. Earle, and J. Togelius (2020) PCGRL: procedural content generation via reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Vol. 16, pp. 95–101. Cited by: §IV.
  • [8] A. Khalifa, S. Lee, A. Nealen, and J. Togelius (2018) Talakat: bullet hell generation through constrained map-elites. In Proceedings of The Genetic and Evolutionary Computation Conference, pp. 1047–1054. Cited by: Fig. 1(b), §I, §IV.
  • [9] Y. Liang, W. Li, and K. Ikeda (2019) Procedural content generation of rhythm games using deep learning methods. In Joint International Conference on Entertainment Computing and Serious Games, pp. 134–145. Cited by: §I.
  • [10] J. Liu, S. Snodgrass, A. Khalifa, S. Risi, G. N. Yannakakis, and J. Togelius (2021) Deep learning for procedural content generation. Neural Computing and Applications 33, pp. 19–37. Cited by: §I.
  • [11] O. Mogren (2016) C-RNN-GAN: continuous recurrent neural networks with adversarial training. arXiv preprint arXiv:1611.09904. Cited by: §II.
  • [12] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §III-A3.
  • [13] S. Risi and J. Togelius (2020) Increasing generality in machine learning through procedural content generation. Nature Machine Intelligence 2 (8), pp. 428–436. Cited by: §I.
  • [14] N. Shaker, J. Togelius, and M. J. Nelson (2016) Procedural content generation in games. Springer. Cited by: §I.
  • [15] T. Shu, J. Liu, and G. N. Yannakakis (2021) Experience-driven PCG via reinforcement learning: A Super Mario Bros study. In The 2021 IEEE Conference on Games (CoG), accepted. Cited by: §IV.
  • [16] A. Summerville, S. Snodgrass, M. Guzdial, C. Holmgård, A. K. Hoover, A. Isaksen, A. Nealen, and J. Togelius (2018) Procedural content generation via machine learning (PCGML). IEEE Transactions on Games 10 (3), pp. 257–270. Cited by: §I, §IV, §V.
  • [17] J. Togelius, A. J. Champandard, P. L. Lanzi, M. Mateas, A. Paiva, M. Preuss, and K. O. Stanley (2013) Procedural content generation: goals, challenges and actionable steps. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik. Cited by: §I.
  • [18] J. Togelius, G. N. Yannakakis, K. O. Stanley, and C. Browne (2011) Search-based procedural content generation: a taxonomy and survey. IEEE Transactions on Computational Intelligence and AI in Games 3 (3), pp. 172–186. Cited by: §I, §I.
  • [19] R. R. Torrado, A. Khalifa, M. C. Green, N. Justesen, S. Risi, and J. Togelius (2020) Bootstrapping conditional gans for video game level generation. In 2020 IEEE Conference on Games (CoG), pp. 41–48. Cited by: §IV.
  • [20] J. Yoon, D. Jarrett, and M. van der Schaar (2019) Time-series generative adversarial networks. In Advances in Neural Information Processing Systems, Vol. 32. Cited by: §III-A1.