
Conversation


@clarkkent0618 clarkkent0618 commented Jul 5, 2024

What does this PR do?

First of all, thanks for your great work. Here is my personal understanding; if there are any mistakes, feel free to correct me!

Both the official documentation's description of the padding_mask_crop parameter and the actual behavior of the "Only masked" inpainting area in the AUTOMATIC1111 WebUI indicate that the original input image size should be preserved, which eliminates the need for an additional super-resolution step.

The documentation describes padding_mask_crop as follows:

Both the image and mask are upscaled to a higher resolution for inpainting, and then overlaid on the original image. This is a quick and easy way to improve image quality without using a separate pipeline like StableDiffusionUpscalePipeline.

However, in practice, when this feature is enabled in diffusers, the apply_overlay method first resizes init_image to the size of the actually inpainted crop (512x512 by default). If the input image is not resized before generation, the overlaid result is incorrect. On the other hand, resizing the original image at the input stage fails to preserve the original image size: it significantly degrades image quality and necessitates a super-resolution step to restore it.

I don't think this logic aligns with the original intent of this feature, and it differs from the implementation in AUTOMATIC1111. Therefore, I have modified the apply_overlay function so that the output image retains the same size as the original image.
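For context, the crop region that a padding value derives from the mask can be sketched roughly as follows. This is a simplified illustration with a hypothetical helper, not the actual diffusers implementation (which additionally expands the box to match the target aspect ratio):

```python
import numpy as np

def padded_crop_region(mask: np.ndarray, pad: int) -> tuple[int, int, int, int]:
    """Bounding box of the nonzero mask pixels, grown by `pad` on each side
    and clamped to the image bounds. Returns (x1, y1, x2, y2)."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    x1 = max(int(xs.min()) - pad, 0)
    y1 = max(int(ys.min()) - pad, 0)
    x2 = min(int(xs.max()) + 1 + pad, w)
    y2 = min(int(ys.max()) + 1 + pad, h)
    return x1, y1, x2, y2

mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 30:70] = 1                   # masked rectangle
print(padded_crop_region(mask, pad=10))  # -> (20, 30, 80, 70)
```

Only this cropped region is denoised at the model's working resolution, which is why pasting it back at the right place and scale matters.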

Here is the comparison.

  1. Original Image and mask
    dog_cat
    dog_cat_mask

  2. With the existing code, if I do not resize the original image before calling the pipeline: the overlay result is incorrect, and the image is also resized.
    old_version

  3. If I resize the original image first: the overlay result is correct, but image quality degrades since the init image has been resized to 512x512.
    resized_old_version

  4. Modified version: the output image is the same size as the original input, and the overlaid result is correct.
    new_version

  5. Using the AUTOMATIC1111 WebUI: just select the checkbox shown below. The output image is the same size as the original image, without resizing.
    image
    image

Test Code

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# base_model_path: path to a Stable Diffusion inpainting checkpoint
pipeline = StableDiffusionInpaintPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    variant="fp16",
    low_cpu_mem_usage=False,
    safety_checker=None,
    requires_safety_checker=False,
)
pipeline.enable_model_cpu_offload()

# Load the base and mask images
image_path = "dog_cat.jpg"
mask_path = "dog_cat_mask.png"

# BGR -> RGB, then to PIL
init_image = cv2.imread(image_path)[:, :, ::-1]
init_image = Image.fromarray(init_image.astype(np.uint8)).convert("RGB").resize((512, 512))

# Binarize the mask (channel sum > 255 counts as masked), then blur its edges
mask_image = 1.0 * (cv2.imread(mask_path).sum(-1) > 255)[:, :, np.newaxis]
mask_image = Image.fromarray(mask_image.astype(np.uint8).repeat(3, -1) * 255).convert("RGB").resize((512, 512))
mask_image = pipeline.mask_processor.blur(mask_image, blur_factor=4)

generator = torch.Generator("cuda").manual_seed(6188)
caption = "black cat"
image = pipeline(
    prompt=caption,
    image=init_image,
    mask_image=mask_image,
    generator=generator,
    num_inference_steps=25,
    strength=1,
    padding_mask_crop=40,
).images[0]
image.save("output_inpainting.png")

Modified Code: I modified apply_overlay as below.

  def apply_overlay(
      self,
      mask: PIL.Image.Image,
      init_image: PIL.Image.Image,
      image: PIL.Image.Image,
      crop_coords: Optional[Tuple[int, int, int, int]] = None,
  ) -> PIL.Image.Image:
      """
      overlay the inpaint output to the original image
      """

      width, height = init_image.width, init_image.height

      init_image_masked = PIL.Image.new("RGBa", (width, height))
      init_image_masked.paste(init_image.convert("RGBA").convert("RGBa"), mask=ImageOps.invert(mask.convert("L")))
      
      init_image_masked = init_image_masked.convert("RGBA")

      if crop_coords is not None:
          x, y, x2, y2 = crop_coords
          w = x2 - x
          h = y2 - y
          base_image = PIL.Image.new("RGBA", (width, height))
          image = self.resize(image, height=h, width=w, resize_mode="crop")
          base_image.paste(image, (x, y))
          image = base_image.convert("RGB")
          
      image = image.convert("RGBA") 
      image.alpha_composite(init_image_masked)
      image = image.convert("RGB")

      return image

Original code: apply_overlay in src/diffusers/image_processor.py, line 651:
https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L651
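To illustrate the fix's key idea in isolation, here is a minimal, hypothetical sketch (not the diffusers code) of pasting an inpainted crop back into the full-resolution original, so the output keeps the original size and only the masked pixels change:

```python
from PIL import Image

def paste_inpainted_crop(init_image, inpainted_crop, mask, crop_coords):
    """Resize the inpainted crop to the crop box and paste it into a copy
    of the original, using the mask so unmasked pixels stay untouched."""
    x1, y1, x2, y2 = crop_coords
    crop = inpainted_crop.resize((x2 - x1, y2 - y1))
    crop_mask = mask.convert("L").crop((x1, y1, x2, y2))
    result = init_image.copy()
    result.paste(crop, (x1, y1), mask=crop_mask)
    return result

init = Image.new("RGB", (200, 100), (255, 0, 0))     # red "original"
inpainted = Image.new("RGB", (64, 64), (0, 255, 0))  # green model output
mask = Image.new("L", (200, 100), 0)
mask.paste(255, (20, 10, 60, 50))                    # masked rectangle
out = paste_inpainted_crop(init, inpainted, mask, (20, 10, 60, 50))
print(out.size)  # (200, 100) -- original size preserved
```

The key point matches the PR: the base canvas is the original image at its original size, and only the crop is resized, rather than the original being resized to the crop's resolution.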

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@clarkkent0618 clarkkent0618 changed the title Modify apply_overlay for inpainting to align with the intended purpose and logic of padding_mask_crop (Inpainting area: "Only Masked") Modify apply_overlay for inpainting with padding_mask_crop (Inpainting area: "Only Masked") Jul 6, 2024

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@github-actions github-actions bot added the stale Issues that haven't received updates label Sep 14, 2024
@clarkkent0618
Contributor Author

@yiyixuxu @asomoza

@asomoza asomoza removed the stale Issues that haven't received updates label Oct 11, 2024
@asomoza
Member

asomoza commented Oct 11, 2024

sorry for the late answer, I'll try to test this today or as soon as possible.

@asomoza
Member

asomoza commented Oct 15, 2024

can you run make style and make quality please.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@clarkkent0618
Contributor Author

can you run make style and make quality please.

@asomoza
Hi! Here are my results. Please have a look, thank you!
make style

examples/research_projects/geodiff/geodiff_molecule_conformation.ipynb:cell 53:59:7: F821 Undefined name `pickle`
   |
58 |   with open(save_path, 'wb') as f:
59 |       pickle.dump(results, f)
   |       ^^^^^^ F821
   |

examples/research_projects/gligen/demo.ipynb:cell 5:13:1: E402 Module level import not at top of cell
   |
11 | gen_boxes = [('a steam boat', [232, 225, 257, 149]), ('a jumping pink dolphin', [21, 249, 189, 123])]
12 | 
13 | import numpy as np
   | ^^^^^^^^^^^^^^^^^^ E402
   |

src/diffusers/configuration_utils.py:679:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
677 |             if field.name in self._flax_internal_args:
678 |                 continue
679 |             if type(field.default) == dataclasses._MISSING_TYPE:
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
680 |                 default_kwargs[field.name] = None
681 |             else:
    |

tests/models/test_modeling_common.py:381:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
380 |         model.set_default_attn_processor()
381 |         assert all(type(proc) == AttnProcessorNPU for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
382 |         with torch.no_grad():
383 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:389:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
388 |         model.enable_npu_flash_attention()
389 |         assert all(type(proc) == AttnProcessorNPU for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
390 |         with torch.no_grad():
391 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:397:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
396 |         model.set_attn_processor(AttnProcessorNPU())
397 |         assert all(type(proc) == AttnProcessorNPU for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
398 |         with torch.no_grad():
399 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:432:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
431 |         model.set_default_attn_processor()
432 |         assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
433 |         with torch.no_grad():
434 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:440:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
439 |         model.enable_xformers_memory_efficient_attention()
440 |         assert all(type(proc) == XFormersAttnProcessor for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
441 |         with torch.no_grad():
442 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:448:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
447 |         model.set_attn_processor(XFormersAttnProcessor())
448 |         assert all(type(proc) == XFormersAttnProcessor for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
449 |         with torch.no_grad():
450 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:479:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
477 |             return
478 | 
479 |         assert all(type(proc) == AttnProcessor2_0 for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
480 |         with torch.no_grad():
481 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:487:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
486 |         model.set_default_attn_processor()
487 |         assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
488 |         with torch.no_grad():
489 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:495:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
494 |         model.set_attn_processor(AttnProcessor2_0())
495 |         assert all(type(proc) == AttnProcessor2_0 for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
496 |         with torch.no_grad():
497 |             if self.forward_requires_fresh_args:
    |

tests/models/test_modeling_common.py:503:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
502 |         model.set_attn_processor(AttnProcessor())
503 |         assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values())
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
504 |         with torch.no_grad():
505 |             if self.forward_requires_fresh_args:
    |

tests/pipelines/controlnet/test_controlnet_sdxl.py:1022:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
     |
1021 |         controlnet = ControlNetModel.from_unet(unet, conditioning_channels=4)
1022 |         assert type(controlnet.mid_block) == UNetMidBlock2D
     |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
1023 |         assert controlnet.conditioning_channels == 4
     |

tests/pipelines/test_pipelines_common.py:770:21: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
768 |             if hasattr(component, "attn_processors"):
769 |                 assert all(
770 |                     type(proc) == AttnProcessor for proc in component.attn_processors.values()
    |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
771 |                 ), "`from_pipe` changed the attention processor in original pipeline."
    |

tests/schedulers/test_schedulers.py:827:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
825 |         scheduler_loaded = DDIMScheduler.from_pretrained(f"{USER}/{self.repo_id}")
826 | 
827 |         assert type(scheduler) == type(scheduler_loaded)
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
828 | 
829 |         # Reset repo
    |

tests/schedulers/test_schedulers.py:838:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
836 |         scheduler_loaded = DDIMScheduler.from_pretrained(f"{USER}/{self.repo_id}")
837 | 
838 |         assert type(scheduler) == type(scheduler_loaded)
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
839 | 
840 |         # Reset repo
    |

tests/schedulers/test_schedulers.py:854:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
852 |         scheduler_loaded = DDIMScheduler.from_pretrained(self.org_repo_id)
853 | 
854 |         assert type(scheduler) == type(scheduler_loaded)
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
855 | 
856 |         # Reset repo
    |

tests/schedulers/test_schedulers.py:865:16: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
863 |         scheduler_loaded = DDIMScheduler.from_pretrained(self.org_repo_id)
864 | 
865 |         assert type(scheduler) == type(scheduler_loaded)
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
866 | 
867 |         # Reset repo
    |

Found 40 errors (21 fixed, 19 remaining).
make: *** [Makefile:57: style] Error 1 

make quality

[identical ruff error listing to the make style output above; duplicate omitted]
Found 19 errors.
make: *** [Makefile:43: quality] Error 1

@asomoza
Member

asomoza commented Oct 18, 2024

Thanks. It seems the script is not working for you; it should fix the errors in your files only, not produce a log for all the other files. Probably a wrong version or configuration of ruff. If you want, I can do it for you.

@asomoza
Member

asomoza commented Oct 18, 2024

this change looks good to me and makes sense. IMO the user who uses this option should ensure that the mask has the correct size instead of us enforcing it. WDYT @yiyixuxu?


This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@github-actions github-actions bot added the stale Issues that haven't received updates label Nov 11, 2024
@yiyixuxu yiyixuxu removed the stale Issues that haven't received updates label Nov 17, 2024
@yiyixuxu yiyixuxu merged commit 1d2204d into huggingface:main Nov 17, 2024
15 checks passed
sayakpaul pushed a commit that referenced this pull request Dec 23, 2024
…g area: "Only Masked") (#8793)

* Modify apply_overlay for inpainting

* style

---------

Co-authored-by: root <root@debian>
Co-authored-by: Álvaro Somoza <[email protected]>
Co-authored-by: yiyixuxu <[email protected]>