Unreal OpenAI API 1.0.0
FOpenAIImageEdit Struct Reference

Public Attributes

TArray< FString > Image
FString Prompt
FOptionalString Background
FOptionalString Input_Fidelity {}
FString Mask
FString Model {"dall-e-2"}
FOptionalString Moderation {}
int32 N {1}
FOptionalInt Output_Compression {}
FOptionalString Output_Format {}
FOptionalInt Partial_Images {}
FString Quality {"auto"}
FString Response_Format {"url"}
FString Size {"1024x1024"}
FOptionalBool Stream {}
FOptionalString User
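The attributes above map one-to-one onto the request parameters of the image-edit endpoint. As a rough sketch of how a request might be filled out — outside Unreal, with `std::string`, `std::vector`, and `std::optional` standing in for `FString`, `TArray`, and the `FOptional*` wrappers, and with a made-up file name and prompt:

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Stand-ins for the Unreal types, for illustration outside the engine.
using FString = std::string;
using FOptionalString = std::optional<std::string>;
using FOptionalInt = std::optional<int32_t>;
using FOptionalBool = std::optional<bool>;
template <typename T> using TArray = std::vector<T>;

// Mirror of FOpenAIImageEdit with the documented defaults.
struct FOpenAIImageEditSketch
{
    TArray<FString> Image;
    FString Prompt;
    FOptionalString Background;
    FOptionalString Input_Fidelity;
    FString Mask;
    FString Model{"dall-e-2"};
    FOptionalString Moderation;
    int32_t N{1};
    FOptionalInt Output_Compression;
    FOptionalString Output_Format;
    FOptionalInt Partial_Images;
    FString Quality{"auto"};
    FString Response_Format{"url"};
    FString Size{"1024x1024"};
    FOptionalBool Stream;
    FOptionalString User;
};

// Fills out a gpt-image-1 edit request; the file name and prompt are made up.
FOpenAIImageEditSketch MakeGptImageEditRequest()
{
    FOpenAIImageEditSketch Edit;
    Edit.Image = {"room.png"};       // gpt-image-1 accepts multiple input images
    Edit.Prompt = "Add a sunlit window on the left wall";
    Edit.Model = "gpt-image-1";      // opts into the gpt-image-1-only parameters
    Edit.Background = "transparent";
    Edit.Output_Format = "png";      // transparency requires png or webp output
    Edit.Input_Fidelity = "high";
    Edit.N = 2;                      // must be between 1 and 10
    return Edit;
}
```

Fields left untouched keep the defaults shown in the attribute list above.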

Member Data Documentation

◆ Background

FOptionalString FOpenAIImageEdit::Background

Sets transparency for the background of the generated image(s). This parameter is only supported for gpt-image-1. Must be one of transparent, opaque, or auto (the default). When auto is used, the model automatically determines the best background for the image. If transparent, the output format must support transparency, so it should be set to either png (the default) or webp.

◆ Image

TArray<FString> FOpenAIImageEdit::Image

The image(s) to edit. Must be a supported image file or an array of images. For gpt-image-1, each image should be a png, webp, or jpg file less than 25MB. For dall-e-2, you can only provide one image, and it should be a square png file less than 4MB.
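The per-model file-type rules above can be checked up front. A minimal, hypothetical helper (not part of the plugin) that validates by file extension alone — it does not check file size or squareness:

```cpp
#include <string>

// Hypothetical helper: checks the per-model file-type rules described above
// by file extension alone (size and dimension limits are not checked here).
bool IsAllowedEditImageFile(const std::string& Model, const std::string& FileName)
{
    auto EndsWith = [&FileName](const std::string& Suffix) {
        return FileName.size() >= Suffix.size() &&
               FileName.compare(FileName.size() - Suffix.size(),
                                Suffix.size(), Suffix) == 0;
    };
    if (Model == "gpt-image-1")
        return EndsWith(".png") || EndsWith(".webp") || EndsWith(".jpg");
    if (Model == "dall-e-2")
        return EndsWith(".png"); // dall-e-2 accepts a single square png only
    return false;
}
```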

◆ Input_Fidelity

FOptionalString FOpenAIImageEdit::Input_Fidelity {}

Controls fidelity to the original input image(s). Must be one of high or low. This parameter is only supported for GPT image models.

◆ Mask

FString FOpenAIImageEdit::Mask

An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where the image should be edited. If multiple images are provided, the mask is applied to the first image. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.

◆ Model

FString FOpenAIImageEdit::Model {"dall-e-2"}

The model to use for image generation. Only dall-e-2 and gpt-image-1 are supported. Defaults to dall-e-2 unless a parameter specific to gpt-image-1 is used.

◆ Moderation

FOptionalString FOpenAIImageEdit::Moderation {}

Controls the content-moderation level. Must be either low or auto (the default). Only supported for GPT image models.

◆ N

int32 FOpenAIImageEdit::N {1}

The number of images to generate. Must be between 1 and 10.

◆ Output_Compression

FOptionalInt FOpenAIImageEdit::Output_Compression {}

The compression level (0-100%) for the generated images. Only supported for GPT image models with webp or jpeg output formats.

◆ Output_Format

FOptionalString FOpenAIImageEdit::Output_Format {}

Output image format. Must be one of png, jpeg, or webp. Only supported for GPT image models.

◆ Partial_Images

FOptionalInt FOpenAIImageEdit::Partial_Images {}

The number of partial images to generate during streaming. Value must be between 0 and 3. Only supported for GPT image models.

◆ Prompt

FString FOpenAIImageEdit::Prompt

A text description of the desired image(s). The maximum length is 1000 characters for dall-e-2, and 32000 characters for gpt-image-1.

◆ Quality

FString FOpenAIImageEdit::Quality {"auto"}

The quality of the image that will be generated. high, medium and low are only supported for gpt-image-1. dall-e-2 only supports standard quality. Defaults to auto.

◆ Response_Format

FString FOpenAIImageEdit::Response_Format {"url"}

The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. This parameter is only supported for dall-e-2, as gpt-image-1 will always return base64-encoded images.

◆ Size

FString FOpenAIImageEdit::Size {"1024x1024"}

The size of the generated images. Must be one of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default value) for gpt-image-1, and one of 256x256, 512x512, or 1024x1024 for dall-e-2.
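The model/size pairings above lend themselves to a pre-flight check. A small, hypothetical validator (not part of the plugin) might look like:

```cpp
#include <set>
#include <string>

// Hypothetical helper: validates the model/size pairings documented above
// before a request is sent.
bool IsValidEditSize(const std::string& Model, const std::string& Size)
{
    static const std::set<std::string> GptImage1Sizes{
        "1024x1024", "1536x1024", "1024x1536", "auto"};
    static const std::set<std::string> DallE2Sizes{
        "256x256", "512x512", "1024x1024"};

    if (Model == "gpt-image-1")
        return GptImage1Sizes.count(Size) > 0;
    if (Model == "dall-e-2")
        return DallE2Sizes.count(Size) > 0;
    return false; // only dall-e-2 and gpt-image-1 support image edits
}
```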

◆ Stream

FOptionalBool FOpenAIImageEdit::Stream {}

Stream partial image results as events. Only supported for GPT image models.

◆ User

FOptionalString FOpenAIImageEdit::User

A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.


The documentation for this struct was generated from the following file: