Setting | Type | Description | Drawbacks |
---|---|---|---|
base_model | string (file path) | Specifies the path to the base model file used as the starting point for training. The model will be fine-tuned on the new data provided. | Choose a base model relevant to the task and data; otherwise the fine-tuning process may not improve performance. |
img_folder | string (folder path) | Specifies the path to the folder containing the training images used to train the model. | The quality and quantity of training images greatly affect the performance of the model. Sufficiently diverse, high-quality images are crucial for learning. |
output_folder | string (folder path) | Specifies the path to the folder where the trained model is saved. | The output folder should have enough storage space for the model files. Back up the model files regularly to prevent data loss. |
change_output_name | boolean | Specifies whether to change the output name of the trained model. If set to True, the output name will be changed; otherwise it remains unchanged. | N/A |
save_json_folder | string (folder path) | Path to the folder where the JSON configuration file for the training run will be saved. | N/A |
load_json_path | string (file path) | Path to the JSON file from which training parameters will be loaded. This is useful for re-using the configuration from a previous run. | The file specified by load_json_path must exist and be a valid JSON file. |
json_load_skip_list | list of strings | Specifies a list of keys in the saved .json configuration file that should be skipped (not loaded). See the configuration sketch after the table. | If required keys are accidentally included in json_load_skip_list, the model may not work as expected. |
multi_run_folder | string (folder path) | The name of the folder where the results of multiple training runs are stored, in the format {multirun_folder}/run{run_number}/. | If multiple runs are done without changing the multi_run_folder name, previous results may be overwritten. |
save_json_only | boolean | If set to True, only the configuration file (.json) is saved, not the full model checkpoints. | The .json file alone cannot recover the model; if save_json_only is set to True, training must be repeated to obtain model weights. |
caption_dropout_rate | float (0–1) | Specifies the rate at which captions are randomly dropped during training. | If this value is set too high, important information in the captions may be lost, resulting in lower quality results. |
caption_dropout_every_n_epochs | integer | Specifies the interval (in epochs) at which caption dropout is applied during training. | If this value is set too high, the model may not be exposed to enough caption diversity during training, leading to overfitting. If it is set too low, the model may not have enough time to learn from the captions before they are dropped. |
caption_tag_dropout_rate | float (0–1) | Controls the rate at which individual tags within captions are randomly dropped during training. A high value means more tags are dropped; a low value means fewer tags are dropped. | Setting high values may result in the loss of important tag information, leading to lower quality results. |
net_dim | integer | Determines the dimension (rank) of the LoRA network. Larger net_dim values produce a more expressive but larger model, require more computational resources, and can lead to overfitting if the model is too large for the amount of training data. | Overfitting; increased computational resource requirements. |
alpha | float | Scales the learned LoRA weights relative to net_dim, effectively acting as a learning-rate multiplier during training (see the LoRA scaling sketch after the table). Larger alpha values can lead to faster convergence, but if set too high they can cause overfitting or convergence to suboptimal solutions. Smaller alpha values may result in slow convergence or no convergence at all. | Suboptimal solutions; slow convergence. |
scheduler | string | Determines the learning rate schedule used during training. Common choices include "step", "cosine", and "plateau". The step schedule reduces the learning rate by a fixed factor after a specified number of iterations, the cosine schedule follows a cosine curve, and the plateau schedule reduces the learning rate when the validation loss stops improving. | Suboptimal solutions, slow convergence, difficulty in choosing the appropriate schedule. |
cosine_restarts | integer | The number of times the cosine annealing schedule restarts. More restarts allow the learning rate to cycle more frequently, reducing the risk of getting stuck at a suboptimal learning rate. | Increasing the number of restarts causes more frequent changes in the learning rate, which can make training less stable and harder to tune. |
scheduler_power | float | The power parameter of the scheduler. Larger power values mean the learning rate changes more slowly. | Setting higher power values may result in a learning rate that changes too slowly to converge in a reasonable amount of time; setting lower power values may make the learning rate too aggressive, causing the model to overfit the training data. |
warmup_lr_ratio | float | The ratio of the maximum learning rate to the initial learning rate during the warm-up period, during which the learning rate gradually increases from the initial value to the maximum value. | A high warm-up ratio may result in a model that converges slowly or not at all; a low warm-up ratio may result in a learning rate that is too low to effectively train the model. |
learning_rate | float | Sets the learning rate of the optimizer used to train the model; it determines the step size at which the optimizer updates the model parameters. The default value is 0.0001. | A high learning rate may cause the model to converge too quickly to a suboptimal solution, while a low learning rate may make training slow and can still converge to a poor solution. The learning rate must be set carefully to balance these tradeoffs. |
text_encoder_lr | float | Sets the learning rate specifically for the model's text encoder component. Setting this to a different value than learning_rate allows the text encoder to be fine-tuned separately. | Setting text_encoder_lr differently from learning_rate may cause the text encoder to overfit and generalize poorly to new data. |
unet_lr | float | Sets the learning rate specifically for the UNet component of the model. Setting this to a different value than learning_rate allows the UNet to be fine-tuned separately. | Setting unet_lr differently from learning_rate may cause the UNet to overfit and generalize poorly to new data. |
num_workers | integer | Specifies the number of worker processes used to load data. More workers can speed up data loading and training, but may also increase memory usage. | Too many workers can exhaust memory and slow down the training process. |
persistent_workers | boolean | Determines whether data-loading workers are kept alive between epochs instead of being recreated, allowing data to be loaded more efficiently. | May degrade system performance, especially on systems with limited resources such as memory or disk I/O. |
batch_size | integer | Specifies the number of samples in each batch. Larger batch sizes can make training more efficient, but also increase memory usage and may slow convergence. | Too large a batch size may exhaust memory and slow down the training process, while too small a batch size may lead to slow convergence. |
num_epochs | integer | Specifies how many complete passes are made over the training data. More epochs can produce a more accurate model, but also take more time to run. | Training for too many epochs takes longer and may overfit the data. |
save_every_n_epochs | integer | Specifies how often the model is saved during training. For example, setting this to 5 means the model is saved every 5 epochs. | Saving more frequently takes up more storage space. |
shuffle_captions | boolean | Specifies whether the comma-separated tags within each caption are shuffled during training. Shuffling can help prevent the model from over-relying on tag order, but it can also make training less consistent. | If the order of tags is significant (for example, leading trigger words), shuffling may hurt results unless keep_tokens is used to hold the first tags in place. |
keep_tokens | integer | The number of tokens at the beginning of each caption that are kept in place, i.e. excluded from caption shuffling and tag dropout. This is typically used to protect trigger words at the start of a caption. | If keep_tokens is set too low, important leading tags may be shuffled or dropped; if set too high, shuffling and dropout have little effect. |
max_steps | integer | The maximum number of training steps. Once the model has processed max_steps batches of data, training stops. | If max_steps is set too low, the model may not be fully trained. If it is set too high, training may take a long time. |
tag_occurrence_txt_file | string (file path) | Path to a text file containing tag occurrence information. The tag occurrence information is used to weight the loss function during training. | If the tag occurrence information is unavailable or not properly specified, the model may not be trained correctly. |
sort_tag_occurrence_alphabetically | boolean | If set to True, tags in tag_occurrence_txt_file are sorted alphabetically. This keeps the tag order consistent and groups similar tags together. | N/A |
train_resolution | integer | Determines the resolution of the training images. Higher resolutions produce more detailed results, but require more memory and computational resources. | Increasing the resolution can significantly increase training time and memory requirements, especially if the training dataset is large. |
min_bucket_resolution | integer | Determines the minimum image resolution of the buckets used for training. A smaller minimum can speed up training, but may also cause overfitting or reduce the quality of the results. | Reducing the bucket size too much may lead to less efficient training and lower quality results. |
max_bucket_resolution | integer | Specifies the maximum image resolution of the training buckets. If the resolution of a training image is greater than max_bucket_resolution, it will be downsampled. | A high value of max_bucket_resolution may result in longer training times and higher memory usage, while a low value may reduce the quality of the generated images. |
lora_model_for_resume | string (file path) | Specifies the path to a previously trained LoRA model used to resume training from an earlier checkpoint. | Resuming training from a pre-trained model may lead to overfitting if the new training data is significantly different from the original training data. |
save_state | boolean | Specifies whether to save the full training state after each epoch, so that training can later be resumed (see load_previous_save_state and lora_model_for_resume). | Saving training states frequently may result in longer training times and higher disk usage. |
load_previous_save_state | boolean | Specifies whether to load a previously saved training state. If set to True, training resumes from the previously saved state; if set to False, training starts from scratch. | If the previously saved state is unavailable or corrupted, training cannot be resumed and will start from scratch, which may result in longer training times and degraded performance. |
training_comment | string | Specifies a comment that is embedded in the saved model. This can be used to distinguish between models trained with different settings or parameters. | N/A |
unet_only | boolean | Specifies whether to train only the UNet component of the model. If set to True, only the UNet is trained and the text encoder is left unchanged. If set to False, both the UNet and the text encoder are trained. | Training only the UNet may give lower performance than training both components, since the text encoder helps encode textual information into the training process. |
text_only | boolean | Specifies whether to train only the text encoder component of the model. If set to True, only the text encoder is trained and the UNet is left unchanged. | Training only the text encoder generally produces weaker results than training both components, since the UNet performs the actual image generation. |
reg_img_folder | string (folder path) | Path to the folder containing regularization images used during training (DreamBooth-style prior preservation). | If no regularization images are provided, regularization is simply not applied; poorly chosen regularization images can dilute what the model learns. |
clip_skip | integer | Determines how many of the final layers of the CLIP text encoder are skipped; the output of an earlier layer is used to condition the model instead. Many anime-style base models are trained with clip_skip set to 2. | This value should match the convention of the base model; a mismatched clip_skip can noticeably degrade results. |
test_seed | integer | Specifies the random seed used for test generation and evaluation. Setting the seed ensures the same test data is generated every time the script is run. | Different seeds may lead to different test data and evaluation results, making it difficult to compare performance across runs. |
prior_loss_weight | float | Specifies the weight of the prior (regularization) loss term in the overall loss calculation. The prior loss encourages the model to keep generating outputs similar to the prior distribution of the training data. | Setting the weight too high may make outputs too similar to the prior, reducing the creativity of the model. Setting it too low may make outputs drift too far from the prior and become less coherent. |
gradient_checkpointing | boolean | Specifies whether to use gradient checkpointing to reduce memory usage during training. Gradient checkpointing selectively saves and recomputes activations during backpropagation, trading extra computation time for lower memory use. | Gradient checkpointing slows down training and may not be necessary for small models or devices with sufficient memory. |
gradient_acc_steps | integer | Specifies the number of gradient accumulation steps. Gradients are accumulated over this many batches before each optimizer update, simulating a larger effective batch size without the corresponding memory cost, which can aid training stability. | Higher values of gradient_acc_steps reduce the frequency of parameter updates, which can slow down the training process. |
mixed_precision | string | Specifies the mixed-precision mode used during training (e.g. "fp16" or "bf16", or "no" to disable), which uses lower-precision data types to speed up training and reduce memory use. | Mixed-precision training may reduce numerical accuracy and can lead to unstable training. |
save_precision | string | Specifies the precision used when saving model weights (e.g. "fp16", "bf16", or "float" for full precision), typically matching the precision used during training. | Lower save precision loses some information in the saved weights, which can slightly reduce accuracy. |
save_as | string | Specifies the file format in which to save the trained model. Supported formats: ckpt, safetensors, pt, bin. | The file format should match what the Stable Diffusion tools that will load the LoRA model expect. |
caption_extension | string | Specifies the file extension of the text files containing the captions for the training data. | The extension must match the actual file extension of the caption files. |
max_clip_token_length | integer | Specifies the maximum number of tokens allowed in a single caption. Captions longer than this are cut off at this length during training. | Setting higher values may increase memory usage during training. Setting lower values may cause important information at the end of captions to be lost. |
buckets | boolean | Specifies whether to enable aspect-ratio bucketing, which groups training images into buckets of similar resolution and aspect ratio (between min_bucket_resolution and max_bucket_resolution) so they can be batched without cropping everything to a single size. | Bucketing parameters must be chosen carefully: too few buckets force aggressive resizing of images, while very fine-grained buckets may leave each bucket with too few images to form good batches. |
xformers | boolean | Specifies whether to use the xformers library's memory-efficient attention during training, which reduces VRAM usage and can speed up training. | xformers must be installed and compatible with the installed PyTorch version; its results can differ very slightly from standard attention. |
use_8bit_adam | boolean | Specifies whether to use the 8-bit Adam optimizer, which reduces the memory requirements of the training process. | Reduces memory requirements, but training may be slightly less accurate than with the full-precision optimizer. |
cache_latents | boolean | If set to True, the latent representations of the training images are cached to speed up training. This reduces the time needed per step, but uses more memory and increases start-up time. | Increased memory usage and slower start-up. |
color_aug | boolean | If set to True, color augmentation is applied during training. This increases the diversity of the training data, but may also slow down training. | Slower training. |
flip_aug | boolean | If set to True, horizontal-flip augmentation is applied during training. This increases the diversity of the training data, but may also slow down training. | Slower training. |
random_crop | boolean | Specifies whether to apply random cropping to the training images. If set to True, training images are randomly cropped to the target size before being fed to the model. | Random cropping increases the diversity of the training data, but also increases the computational cost and may slow down training. |
vae | string (file path) | Path to a VAE model to use during training in place of the VAE bundled with the base model. | Using a VAE that does not match the base model may degrade image quality; if left unset, the base model's own VAE is used. |
no_meta | boolean | Specifies whether to omit training metadata (settings, tag information, etc.) from the saved model file. If set to True, the output file contains no metadata. | Omitting metadata makes the output slightly smaller, but tools can no longer inspect how the model was trained. |
log_dir | string (folder path) | Path to the directory where training log files are stored. | If the directory already exists and is not empty, training may overwrite previous logs stored there, resulting in data loss. |
bucket_reso_steps | integer | The step size, in pixels, between bucket resolutions used for aspect-ratio bucketing; bucket resolutions are generated in increments of this value between min_bucket_resolution and max_bucket_resolution. | Setting this too low creates many buckets with few images each; setting it too high forces more aggressive resizing of images into the nearest bucket. |
bucket_no_upscale | boolean | Indicates whether to prevent images from being upscaled beyond their original size when assigned to buckets. | If set to True, the image resolution will not increase beyond its original size, which may result in lower-resolution training for small images. |
v2 | boolean | Specifies whether the base model uses the Stable Diffusion v2.x architecture. | This must match the base model being fine-tuned; using the wrong architecture version will degrade the quality of the generated art. |
v_parameterization | boolean | Specifies whether the base model uses v-prediction parameterization (used by Stable Diffusion 2.x 768-v models). | This must match the base model: enabling it for a model not trained with v-prediction, or disabling it for one that was, will produce poor results. |
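
To make the configuration-related rows (save_json_folder, load_json_path, json_load_skip_list, save_json_only) more concrete, here is a minimal sketch of how such settings could be written to and read back from a flat JSON file, skipping the keys named in json_load_skip_list. The function names and file layout are illustrative assumptions, not the training script's actual implementation.

```python
# Hypothetical sketch: saving/loading a flat JSON config built from the
# parameters in the table above. The real training script may structure
# its JSON differently; only the skip-list idea is being illustrated.
import json
from pathlib import Path

config = {
    "base_model": "models/v1-5-pruned.safetensors",
    "img_folder": "data/train_images",
    "output_folder": "output",
    "net_dim": 32,
    "alpha": 16,
    "learning_rate": 1e-4,
    "scheduler": "cosine",
    "batch_size": 2,
    "num_epochs": 10,
    "train_resolution": 512,
    "json_load_skip_list": ["base_model", "img_folder", "output_folder"],
}

def save_config(cfg: dict, folder: str, name: str = "config.json") -> Path:
    """Write the settings to <folder>/<name> (cf. save_json_folder)."""
    path = Path(folder) / name
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(cfg, indent=2))
    return path

def load_config(path: str, skip_list: list[str] | None = None) -> dict:
    """Read a saved config, dropping any keys named in skip_list
    (cf. load_json_path and json_load_skip_list)."""
    loaded = json.loads(Path(path).read_text())
    skip = set(skip_list or loaded.get("json_load_skip_list", []))
    return {k: v for k, v in loaded.items() if k not in skip}

if __name__ == "__main__":
    saved = save_config(config, "output/json")
    # Re-use the old run's hyperparameters, but keep machine-specific paths local.
    reused = load_config(saved, skip_list=config["json_load_skip_list"])
    print(reused)
```

Skipping path-like keys when reloading, as above, lets the same hyperparameters be reused on a different machine without overwriting local folder settings.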
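
The interaction between net_dim and alpha can also be illustrated with the standard LoRA formulation, in which the learned low-rank update is scaled by alpha / net_dim. This is a toy sketch of that general formula, not the trainer's own code; the tensor sizes are arbitrary.

```python
# Toy illustration of how net_dim (rank) and alpha interact in a LoRA layer.
# Standard LoRA scales the learned update by alpha / net_dim, so doubling
# net_dim without changing alpha halves the effective strength of the update.
import torch

def lora_delta(in_features: int, out_features: int, net_dim: int, alpha: float) -> torch.Tensor:
    """Return a (out_features, in_features) low-rank update (alpha/net_dim) * B @ A."""
    A = torch.randn(net_dim, in_features) * 0.01   # "down" projection
    B = torch.randn(out_features, net_dim) * 0.01  # "up" projection
    return (alpha / net_dim) * (B @ A)

# Same alpha, different ranks: the larger rank has more capacity, but the
# alpha/net_dim factor keeps the overall update magnitude comparable.
d8 = lora_delta(768, 768, net_dim=8, alpha=4)
d64 = lora_delta(768, 768, net_dim=64, alpha=4)
print(d8.shape, d64.shape)
```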