Let's go through all the parameters of the background generate section that can be used in the image generation API. Each is described in detail below.
Field name
Description
model_type
Selects the model used for image generation; realistic is the default:
- realistic
- fantasy
description
The generation prompt. The phrase "high quality, highly detailed, 8K" is appended to it by default.
sample_num
Acts as the generator's random seed.
adapter_type
Controls how the generator uses the input image:
- generate_background - (default value) the background is generated around the main object found in the input image
- face - the generator uses the first face found in the input image to create an avatar
- control - (also known as "image to image") the generator creates an image based on the input image and its edges; useful for adding details to existing photos, e.g. adding furniture to an empty room
- control2 - the generator creates an image based only on the input image's edges; useful for generating images from drawings
- upscale - used for specifying prompts for generative upscale
- true/false - works only when adapter_type is face; selects a different face-generation algorithm that uses only the face details (skipping, for example, the hair style from the original photo)
controlnet_conditioning_scale
A float value from 0 to 1 describing how strongly the input image's edges are preserved. The default value is 0.5.
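The fields above all live inside the background.generate object of the request body. A minimal sketch of assembling that object in Python (the helper function name is hypothetical, not part of the API):

```python
import json

def build_generate_payload(description, model_type="realistic",
                           adapter_type="generate_background",
                           controlnet_conditioning_scale=None,
                           sample_num=None):
    """Assemble the background.generate section described above."""
    generate = {
        "description": description,
        "model_type": model_type,
        "adapter_type": adapter_type,
    }
    if controlnet_conditioning_scale is not None:
        # 0..1: how strongly input-image edges are preserved (default 0.5)
        generate["controlnet_conditioning_scale"] = controlnet_conditioning_scale
    if sample_num is not None:
        generate["sample_num"] = sample_num  # acts as a random seed
    return {"background": {"generate": generate}}

payload = build_generate_payload("model in the room",
                                 adapter_type="control",
                                 controlnet_conditioning_scale=0.7)
print(json.dumps(payload))
```

Optional fields are only included when set, so the server's defaults (realistic model, scale 0.5) apply otherwise.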
Let's check some examples.
Based on text only
An image can be generated from a text prompt at a given resolution.
{"width":2048,"height":1024,"background": {"generate": { "description": "woman in a futuristic suit holding a gun in her hand, looking at the camera, cyberpunk art, neo-figurative, anime"
} }}
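The same request expressed as a Python dict, which is easier to modify programmatically. The commented-out send step is a sketch only: the exact endpoint URL and authentication header are assumptions here, so check the API reference before using them.

```python
import json

# The text-only request from above, as a Python dict.
payload = {
    "width": 2048,
    "height": 1024,
    "background": {
        "generate": {
            "description": ("woman in a futuristic suit holding a gun in her hand, "
                            "looking at the camera, cyberpunk art, neo-figurative, anime")
        }
    }
}

# Sending it would look roughly like this (endpoint and header are assumptions):
# import requests
# r = requests.post("https://deep-image.ai/rest_api/process_result",
#                   headers={"x-api-key": "YOUR_API_KEY"},
#                   json=payload)

print(json.dumps(payload, indent=2))
```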
And the result:
Now, the same prompt but with fantasy model:
{"width":2048,"height":1024,"background": {"generate": { "description": "woman in a futuristic suit holding a gun in her hand, looking at the camera, cyberpunk art, neo-figurative, anime",
"model_type":"fantasy" } }}
Based on image
Images can be generated based on face details or on the Canny edges of the input image.
Let's check some examples.
{"url":"https://deep-image.ai/api-example.png","width":1024,"height":1024,"background": {"generate": {"description":"item on the beach","adapter_type":"generate_background" } }}
And the result:
{"url":"https://deep-image.ai/api-example3.jpg","width":1024,"height":1024,"background": {"generate": {"description":"model on the beach","adapter_type":"face" } }}
And the result:
{"url":"https://deep-image.ai/api-example3.jpg","background": {"generate": {"description":"model in the room","adapter_type":"control" } }}
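The control request above can also carry controlnet_conditioning_scale to tune how strongly the source photo's edges are kept. A sketch that prepares a few variants of the same request (the scale values are just examples):

```python
import json

# The control ("image to image") request from above.
base = {
    "url": "https://deep-image.ai/api-example3.jpg",
    "background": {"generate": {"description": "model in the room",
                                "adapter_type": "control"}}
}

# Low values give the generator more freedom; high values stick
# closely to the edges of the input photo (default is 0.5).
variants = []
for scale in (0.2, 0.5, 0.8):
    payload = json.loads(json.dumps(base))  # deep copy via JSON round-trip
    payload["background"]["generate"]["controlnet_conditioning_scale"] = scale
    variants.append(payload)

print(len(variants))
```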
And the result:
Generative upscaling
Let's try generative upscaling. This algorithm can do some real magic 🙂. Because it is based on a diffusion algorithm, it also modifies the image slightly, so do not use it when you need to preserve the exact colors and details of the original image.
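Following the parameter table, the upscale adapter is how a prompt is passed to generative upscaling. A hedged sketch of such a request; the target size, prompt, and any additional upscaling fields are illustrative assumptions, so check the API reference for the exact shape:

```python
import json

# Illustrative generative-upscale request: the prompt guides the
# diffusion model while the image is enlarged to the target size.
payload = {
    "url": "https://deep-image.ai/api-example.png",
    "width": 2048,   # assumed target size for the upscale
    "height": 2048,
    "background": {
        "generate": {
            "description": "sharp, highly detailed photo",
            "adapter_type": "upscale"
        }
    }
}

print(json.dumps(payload))
```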