Made a quick and dirty fork of YOLACT with minor changes.

  • Added an --inverse_masks option. This writes an extra output file containing just the mask data (a rough sketch of the idea is shown below).
  • Built and run on WSL2
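
For reference, the snippet below sketches one way such a mask file could be written. This is not the fork's actual code; the save_mask helper and the file names are only illustrative. It takes a boolean per-pixel mask (True where the object was detected) and encodes it as a white-on-black JPEG, matching the .jpg_mask output described further down.

import cv2
import numpy as np

def save_mask(mask, out_path):
    # Convert a boolean HxW mask to an 8-bit image: True -> 255 (white), False -> 0 (black).
    mask_img = mask.astype(np.uint8) * 255
    # Encode as JPEG and write the raw bytes, so the output file can carry any extension (e.g. .jpg_mask).
    ok, buf = cv2.imencode(".jpg", mask_img)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    with open(out_path, "wb") as f:
        f.write(buf.tobytes())

# Dummy 4x4 example with the "object" in the top-left corner.
dummy = np.zeros((4, 4), dtype=bool)
dummy[:2, :2] = True
save_mask(dummy, "example.jpg_mask")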

The choice of YOLACT was based on this article: https://medium.com/@anno-ai/evaluating-segmentation-methods-for-single-objects-e773f025b5e0

Below is an example script to convert a folder of images. For each input image it generates two files:

  • a .jpg file containing only the recognized object, with the rest of the image blacked out, and
  • a .jpg_mask file containing only a white mask of the identified object, with the rest of the image blacked out.

in_dir=...                                                # directory containing the input .jpg images
in_files=$(ls -1 "${in_dir}"/*.jpg | xargs -n1 basename)  # file names without the directory prefix

# Create the output directory if it does not exist yet.
if [ ! -d "${in_dir}_out" ] ; then
    mkdir "${in_dir}_out"
fi

for f in ${in_files} ; do

    echo "$f"
    # Run the fork's eval.py on a single image; --inverse_masks also produces the .jpg_mask output.
    python eval.py \
           --trained_model=weights/yolact_base_54_800000.pth \
           --score_threshold=0.15 \
           --top_k=1 \
           --display_masks=no \
           --display_bboxes=no \
           --display_text=no \
           --display_scores=no \
           --inverse_masks=yes \
           --image="${in_dir}/${f}:${in_dir}_out/${f}"

done
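
If you need to re-apply a generated mask to the original image afterwards, something along the lines of the sketch below works. The file names are just examples following the naming convention above, and the threshold step is only there to clean up JPEG compression artefacts in the mask.

import cv2

original = cv2.imread("images/example.jpg")                              # original input image
mask = cv2.imread("images_out/example.jpg_mask", cv2.IMREAD_GRAYSCALE)  # white-on-black mask

# JPEG compression leaves grey fringes around the mask edges, so snap it back to a clean 0/255 mask.
_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Keep only the masked pixels; everything else stays black.
cutout = cv2.bitwise_and(original, original, mask=binary)
cv2.imwrite("images_out/example_cutout.jpg", cutout)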