
Image showing required content transfer between source and target images

Essentially, is there a way to grow an image patch defined by a mask while keeping it realistic?

1 Answer


For your first question, what you are describing is image style transfer. In that case, CNN-based methods may help you.
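To make the style-transfer idea concrete, here is a minimal NumPy sketch of the core ingredient used in CNN-based style transfer: the Gram matrix of a feature map, whose differences between two images define a "style loss". The feature maps here are random toy arrays standing in for real CNN activations (e.g. from VGG); the function names are my own, not from any particular library.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height*width) feature map.

    In CNN style transfer, matching Gram matrices of feature maps
    between two images matches their style (texture statistics)
    independently of spatial layout.
    """
    c, hw = features.shape
    return features @ features.T / hw

def style_loss(feat_a, feat_b):
    # Mean squared difference between the two Gram matrices.
    diff = gram_matrix(feat_a) - gram_matrix(feat_b)
    return float(np.mean(diff ** 2))

# Toy feature maps standing in for CNN activations.
rng = np.random.default_rng(0)
f1 = rng.standard_normal((8, 16 * 16)).astype(np.float32)
f2 = rng.standard_normal((8, 16 * 16)).astype(np.float32)

loss_same = style_loss(f1, f1)   # identical style -> zero loss
loss_diff = style_loss(f1, f2)   # different styles -> positive loss
```

In a full style-transfer pipeline, this loss (summed over several CNN layers) is minimized by gradient descent on the pixels of the output image.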

For the second, if I understand correctly, by "growing" you mean introducing variations in the image patch while keeping it realistic. If that's the goal, you may use GANs to generate images, provided you have a reasonably sized dataset to train on:

Image Synthesis with GANs

Intuitively, a GAN learns the distribution of the training dataset (which in your case consists of the images you want to imitate), and a conditional GAN learns that distribution conditioned on an input; either way, you can draw new samples (images) from the learned distribution, thereby creating more images with similar content.

Pix2Pix is the open-source code of a well-known paper that you can play around with to generate images. Specifically, let X be your input image and Y be a target image. You can train the network and feed X to observe the output O of the generator. Thereafter, by tweaking the architecture a bit or by changing the skip connections (read the paper), you can train again and generate variety in the output images O.
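To illustrate the feed-X-observe-O workflow, here is a minimal PyTorch sketch of a miniature encoder-decoder generator with one U-Net-style skip connection, in the spirit of Pix2Pix. `TinyPix2PixG` is a hypothetical toy model I made up for illustration; the real Pix2Pix generator stacks many such down/up blocks (see the paper and repository for the actual architecture).

```python
import torch
import torch.nn as nn

class TinyPix2PixG(nn.Module):
    """Hypothetical miniature generator: one downsampling block, one
    upsampling block, and a skip connection from input to output."""

    def __init__(self, channels=3, features=16):
        super().__init__()
        # Encoder: halve spatial resolution, expand channels.
        self.down = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        # Decoder: restore resolution and channel count.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(features, channels, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        h = self.down(x)
        o = self.up(h)
        # Skip connection: the output refines the input rather than
        # regenerating it from scratch (the U-Net idea in Pix2Pix).
        return o + x

# Feed an input X and observe the generator output O.
g = TinyPix2PixG()
x = torch.randn(1, 3, 64, 64)        # a dummy input image X
with torch.no_grad():
    out = g(x)                        # output O, same shape as X
```

Training this against a discriminator (plus an L1 term between O and Y) is what the full Pix2Pix objective adds on top of this forward pass.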

Font Style Transfer is an interesting experiment with style transfer on text in images (rather than image-on-image, as in your case).

n.gaurav