Blending output from generative adversarial networks to texture high-resolution 2D town maps for roleplaying games
G Siracusa, D Seychell, M Bugeja
2021 IEEE Conference on Games (CoG), 2021 - ieeexplore.ieee.org
The recent success of Generative Adversarial Networks (GANs) in image and video applications has led to the development of numerous variants specialised for particular tasks, such as conditional GANs for image-to-image translation. In spite of the research done in fine-tuning architectures and applying them to different subjects, existing techniques still deal with stand-alone images, such as nature scenes, city landmarks, faces and others. The task of producing contiguous colour data, namely adjacent parts of the same image rather than textures, has not been attempted before in the literature on generative machine learning techniques. Achieving this would allow large images to be processed in smaller parts, removing the architectural limit on the output resolution the network can achieve. Current state-of-the-art architectures for conditional image-to-image translation operate in the range of 2k x 1k pixels and typically take several days to train on powerful hardware. The proposed contiguous technique, applied here to fantasy maps for roleplaying games, achieves higher resolutions with smaller networks that can be trained faster, within a single day. The technique maintains as much quality as the detail of the provided semantic layouts allows, even at 4k and above, but it suffers when detail in these layouts is too sparse. A sample of images produced by the system was shown to survey participants, who rated their appeal at 3.49 on a 5-point Likert scale; segmentation analysis reported an average weighted inter-class accuracy of 0.689 (0.448 unweighted).
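The tile-and-blend idea the abstract describes (generating a large map as overlapping parts and merging them into one contiguous image) could look roughly like the sketch below. This is not the authors' implementation: generate_tile stands in for a trained conditional generator, and TILE and OVERLAP are hypothetical values; feathered averaging over the overlap band is one common way to hide seams between independently generated tiles, and the paper's own blending strategy may differ.

    import numpy as np

    TILE = 256     # tile size the generator is assumed to operate on (assumption)
    OVERLAP = 64   # overlap between adjacent tiles, used for blending (assumption)

    def generate_tile(layout_patch):
        # Placeholder for a trained conditional generator (e.g. a pix2pix-style
        # model); here it just returns a blank RGB patch of the right shape.
        h, w = layout_patch.shape[:2]
        return np.zeros((h, w, 3), dtype=np.float32)

    def blend_weights(size, overlap):
        # 2D weight mask that ramps down linearly inside the overlap band,
        # kept strictly positive so border pixels still get some contribution.
        ramp = np.ones(size, dtype=np.float32)
        ramp[:overlap] = np.linspace(1.0 / overlap, 1.0, overlap)
        ramp[-overlap:] = np.linspace(1.0, 1.0 / overlap, overlap)
        return np.outer(ramp, ramp)

    def generate_large_map(layout):
        # Cover the semantic layout (assumed at least TILE pixels per side)
        # with overlapping tiles, generate each tile independently, and
        # average the overlaps using the blending weights.
        h, w = layout.shape[:2]
        out = np.zeros((h, w, 3), dtype=np.float32)
        acc = np.zeros((h, w), dtype=np.float32)
        weights = blend_weights(TILE, OVERLAP)
        step = TILE - OVERLAP
        for y in range(0, max(h - OVERLAP, 1), step):
            for x in range(0, max(w - OVERLAP, 1), step):
                y0, x0 = min(y, h - TILE), min(x, w - TILE)
                patch = generate_tile(layout[y0:y0 + TILE, x0:x0 + TILE])
                out[y0:y0 + TILE, x0:x0 + TILE] += patch * weights[..., None]
                acc[y0:y0 + TILE, x0:x0 + TILE] += weights
        return out / acc[..., None]

Because each tile is generated independently, the output resolution is bounded only by the size of the semantic layout, not by the generator architecture, which is the property the abstract highlights.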