This plugin is for entertainment and educational purposes only.
NaiLongRemove is a simple AI model-based plugin designed to identify "NaiLong" meme images in groups and automatically remove them.
Currently the plugin supports multiple models, which can be switched via the configuration file; see the configuration section below for details.
Users can choose the model that best suits their needs. All models have been optimized for performance, but misdetections may still occur to varying degrees. Feedback is welcome!
- The run_with_napcat_en.ipynb file supports one-click deployment on platforms like Kaggle or Hugging Face Spaces. You can complete the bot deployment simply by clicking "Run" and scanning the QR code!
- Supports Docker one-click deployment.
If you are new to NoneBot, please refer to this guide.
To avoid dependency issues, we've separated the installation methods for using GPU inference from the regular installation. Choose the one that suits your needs.
You can choose one of the following methods.
[Recommended] Install using nb-cli
Open a terminal in the root directory of the NoneBot2 project and run the following command to install:
nb plugin install nonebot-plugin-nailongremove
Install using a package manager
Open a terminal in the plugin directory of the NoneBot2 project, and use the appropriate command based on your package manager.
pip
pip install nonebot-plugin-nailongremove
pdm
pdm add nonebot-plugin-nailongremove
poetry
poetry add nonebot-plugin-nailongremove
conda
conda install nonebot-plugin-nailongremove
Open the pyproject.toml
file in the root directory of the NoneBot2 project, and add the following to
the [tool.nonebot]
section:
[tool.nonebot]
plugins = [
# ...
"nonebot_plugin_nailongremove"
]
[!NOTE] These steps are more technical and complicated, so non-technical users may choose to skip them.
In fact, CUDA acceleration has minimal impact on the models used by this plugin, so do not attempt these steps without understanding them.
First, enter the Bot virtual environment (if any).
[!NOTE] If you've previously installed the CPU inference version, please uninstall it first:
pip uninstall nonebot-plugin-nailongremove torch torchvision onnxruntime
Install the base package:
pip install nonebot-plugin-nailongremove-base
Based on your installed CUDA and cuDNN versions (install them first if you don't have them), follow the official instructions to install the corresponding dependencies:
- torch (official guide)
- onnxruntime-gpu (official guide)
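The exact commands depend on your CUDA version, so follow the linked official guides; as a rough illustration only (assuming a CUDA 12.1 setup), the installs might look like:

```shell
# Example only — pick the wheel index that matches your installed CUDA version
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
pip install onnxruntime-gpu
```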
After installation, configure the plugin to use CUDA for inference:
NAILONG_ONNX_PROVIDERS=["CUDAExecutionProvider"]
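To verify that GPU inference is actually available before relying on it, a quick check like the following can help (a minimal sketch; it assumes torch and onnxruntime-gpu are already installed in the bot's environment):

```python
# Check that PyTorch sees the GPU and that ONNX Runtime exposes the CUDA provider.
import torch
import onnxruntime

print("torch CUDA available:", torch.cuda.is_available())
print("ONNX Runtime providers:", onnxruntime.get_available_providers())
# "CUDAExecutionProvider" should appear in the list above before you set
# NAILONG_ONNX_PROVIDERS=["CUDAExecutionProvider"].
```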
Finally, configure NoneBot2 to load the plugin. Open the pyproject.toml
file in the root directory of the NoneBot2
project, and add the following to the [tool.nonebot]
section:
[tool.nonebot]
plugins = [
# ...
"nonebot_plugin_nailongremove"
]
When updating the plugin later, you only need to update the base package. Do not install or update the non-base package:
pip install nonebot-plugin-nailongremove-base -U
Add the required configurations from the table below to the .env
file in your NoneBot2 project.
Configuration Item | Required | Default Value | Description |
---|---|---|---|
**Global Configuration** | | | |
`PROXY` | No | `None` | Proxy address used for downloading model files. |
**Response Configuration** | | | |
`NAILONG_BYPASS_SUPERUSER` | No | `False` | Whether to bypass checking images sent by superusers. |
`NAILONG_BYPASS_ADMIN` | No | `False` | Whether to bypass checking images sent by group administrators. |
`NAILONG_NEED_ADMIN` | No | `False` | Whether to skip checking all images in a group when the bot is not an administrator. |
`NAILONG_LIST_SCENES` | No | `[]` | List of allowed or disallowed chat scene IDs (e.g., group IDs or channel IDs). |
`NAILONG_BLACKLIST` | No | `True` | Whether to use blacklist mode. |
`NAILONG_USER_BLACKLIST` | No | `[]` | List of user IDs to blacklist. |
`NAILONG_PRIORITY` | No | `100` | Matcher priority. |
**Behavior Configuration** | | | |
`NAILONG_RECALL` | No | `["nailong"]` | Whether to recall the message. |
`NAILONG_MUTE_SECONDS` | No | `{"nailong": 0}` | Mute duration in seconds; if unset or 0, no mute is applied. |
`NAILONG_TIP` | No | `{"nailong": ["This group prohibits NaiLong images!"]}` | Message sent as a tip; an Alconna message template with custom variables. |
`NAILONG_FAILED_TIP` | No | `{"nailong": ["{:Reply($message_id)} Oh no, please don't send NaiLong images! 🥺 👉👈"]}` | Message sent when recalling fails or recalling is disabled. |
`NAILONG_CHECK_ALL_FRAMES` | No | `False` | Whether to check all frames of the image when using model 1; requires `NAILONG_CHECK_MODE` to be `0`. When enabled, the `$checked_result` variable in the message template returns a GIF if the original image is animated. |
`NAILONG_CHECK_RATE` | No | `0.8` | When checking all frames of an image, the image is only recalled or processed if this proportion of frames meets the detection criteria. |
`NAILONG_CHECK_MODE` | No | `0` | Detection method for GIF animations: `0` check all frames, `1` check only the first frame, `2` random frame sampling. |
**Similarity Detection Configuration** | | | |
`NAILONG_SIMILARITY_ON` | No | `False` | Whether to run similarity detection against local storage before processing images. |
`NAILONG_SIMILARITY_MAX_STORAGE` | No | `10` | Maximum number of misclassified images stored locally. When the limit is reached, older images are compressed and uploaded to the database; previously stored images are unaffected. |
`NAILONG_HF_TOKEN` | No | `None` | Hugging Face access token; allows automatic data upload to Hugging Face, making you a contributor to the dataset. |
**General Model Configuration** | | | |
`NAILONG_MODEL_DIR` | No | `./data/nailongremove` | Download location for the model. |
`NAILONG_MODEL` | No | `1` | Which model to load; available models are listed below. |
`NAILONG_AUTO_UPDATE_MODEL` | No | `True` | Whether to automatically update the model. |
`NAILONG_CONCURRENCY` | No | `1` | Maximum concurrency when identifying frames of animated images. |
`NAILONG_ONNX_PROVIDERS` | No | `["CPUExecutionProvider"]` | Providers used for loading ONNX models; see the installation documentation above. |
**Model 1 Specific Configuration** | | | |
`NAILONG_MODEL1_TYPE` | No | `tiny` | Model type used for model 1; options are `tiny` / `m`. |
`NAILONG_MODEL1_YOLOX_SIZE` | No | `None` | Custom input size for model 1. |
**Model 2 Specific Configuration** | | | |
`NAILONG_MODEL2_ONLINE` | No | `False` | Whether to enable online inference for model 2; this mode is currently not compatible with `NAILONG_CHECK_MODE` set to `0`. |
**Model 1 & 2 Specific Configuration** | | | |
`NAILONG_MODEL1_SCORE` | No | `{"nailong": 0.5}` | Confidence threshold for models 1 & 2, in the range `0 ~ 1`. Thresholds can be set per label; setting a label to `null` or leaving it blank ignores that label. |
**Miscellaneous Configuration** | | | |
`NAILONG_GITHUB_TOKEN` | No | `None` | GitHub access token; can be set to work around issues downloading or updating models. |
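For illustration, a `.env` using some of the options above might look like the following (all values here are placeholders; adjust them to your own setup):

```ini
# Placeholder values — replace with your own proxy, group, and user IDs
PROXY=http://127.0.0.1:7890
NAILONG_BYPASS_SUPERUSER=true
NAILONG_MUTE_SECONDS={"nailong": 60}
NAILONG_LIST_SCENES=["123456789"]
NAILONG_BLACKLIST=true
NAILONG_USER_BLACKLIST=["987654321"]
```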
- `0`: Inference using a ResNet50-based image classification model. Thanks to @spawner1145 for providing the model; original link: spawner1145/NailongRecognize
- `1`: Inference using a YOLOX-based object detection model. Thanks to @NKXingXh for providing the model; original link: nkxingxh/NailongDetection
- `2`: Inference using a YOLOv11-based object detection model. Thanks to @Hakureirm for providing the model; original link: Hakureirm/NailongKiller
- `3`: Inference using a YOLOv11-based object detection model. Thanks to @Threkork for providing the model; original link: Threkork/kovi-plugin-check-alllong. It is recommended to set `NAILONG_MODEL1_SCORE` to `{"nailong": 0.78}` and `NAILONG_MODEL1_YOLOX_SIZE` to `[640, 640]`.
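For example, to switch to model 3 with the recommended settings above, the relevant `.env` entries would be:

```ini
NAILONG_MODEL=3
NAILONG_MODEL1_SCORE={"nailong": 0.78}
NAILONG_MODEL1_YOLOX_SIZE=[640, 640]
```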
Variable Name | Type | Description |
---|---|---|
`$event` | `Event` | The current event |
`$target` | `Target` | The target of the event |
`$message_id` | `str` | The message ID |
`$msg` | `UniMessage` | The current message |
`$ss` | `Session` | The current session |
`$checked_result` | `Image` | The image after selecting the corresponding target; exists only when the model is configured as `1`. |
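These variables can be referenced in the message templates above. For instance, the default `NAILONG_FAILED_TIP` already uses `$message_id`, and a custom tip could do the same (illustrative value only):

```ini
NAILONG_TIP={"nailong": ["{:Reply($message_id)} This group prohibits NaiLong images!"]}
```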
Whenever a "NaiLong" meme is recognized, it will be retracted and notified.
To store errored images locally (for SUPERUSERS
): Send "This is [type]"+image, for example: "This is nailong+image",
and it will be automatically stored locally. When similarity detection is on, in the next image check, it will
prioritize recognizing the images already stored locally.
- Plugin Learning Group: 200980266 (feedback on installation, deployment, bot bugs, and model accuracy issues goes here)
- Plugin Performance Testing Group: 829463462 (This group has deployed bots for testing existing model performance)
- AI Learning Group: 949992679 (Join for learning and discussing AI-related technologies)
Welcome everyone to join the group for learning and exchange!
- Fixed some SSL connection errors
- Added the run_with_napcat_en.ipynb file, which supports one-click deployment on platforms like Kaggle or Hugging Face Spaces. You can complete the bot deployment simply by clicking "Run" and scanning the QR code!
- Added Docker one-click deployment.
- Added a feature to select mute tags, allowing users to choose whether to mute or recall for different types of images.
- Added a new configuration option, `NAILONG_CHECK_RATE`. When checking all frames of an animated image, detection succeeds once the proportion of frames containing "nailong" reaches the configured threshold.
- Added model 3 to `NAILONG_MODEL`, a model trained on YOLOv11. It is recommended to set `NAILONG_MODEL1_SCORE` to `{"nailong": 0.78}` and `NAILONG_MODEL1_YOLOX_SIZE` to `[640, 640]`.
- Updated default values for configuration items: `NAILONG_BYPASS_SUPERUSER` -> `False`, `NAILONG_BYPASS_ADMIN` -> `False`.
- Optimized the temporary handling to reduce performance pressure and improve speed (the vector library faiss also supports GPU processing, but GPU use is not recommended for non-experts due to the complex installation process).
- Added `NAILONG_HF_TOKEN` to automatically upload misclassified images to the Hugging Face dataset.
- Changed the formats of the `NAILONG_TIP` and `NAILONG_FAILED_TIP` configuration items, allowing randomized response messages. When the corresponding value is an empty list `[]`, the image is still checked (and the mute/recall action performed) but no message is sent.
- Updated the three frame processing modes for GIFs; you can choose one via `NAILONG_CHECK_MODE`.
- Updated the temporary handling of misclassified images: enabling `NAILONG_SIMILARITY_ON` turns on local-storage similarity matching, and `SUPERUSERS` can save misclassified images to the local records by sending "This is [type]" plus the image.
- Added model 2 to `NAILONG_MODEL`, based on a YOLOv11-trained model. Currently it only supports NaiLong recognition.
- Modified plugin dependencies to avoid some issues that affected the installation process. Please refer to the installation documentation for more details.
  - Corresponding configuration change: removed the `NAILONG_ONNX_TRY_TO_USE_GPU` configuration item and added the `NAILONG_ONNX_PROVIDERS` configuration item.
- Added support for checking all frames in a GIF and re-encapsulating the results into a new GIF; disabled by default. The `$checked_image` variable has been deprecated, and a new `$checked_result` variable has been added.
- The input size for model 1 can now be configured automatically based on the model type; a size specified in the configuration takes priority.
- Added support for processing images with other tags; some configuration items now accept custom per-tag values.
- Added a user blacklist.
- The default model has been changed to 1.
- Optimized model auto-update (possibly a reverse optimization).
- Renamed the configuration item `NAILONG_YOLOX_SIZE` to `NAILONG_MODEL1_YOLOX_SIZE`.
- Model 1 now automatically fetches the latest version, and the model type can be chosen via configuration.
- Model 1 can now control the confidence threshold for recognition via configuration.
- When loading the ONNX model, the system will attempt to use GPU by default. If it fails, a warning will be shown. If you don't want to see the warning, you can refer to the above to disable the corresponding configuration.
- Fixed a bug where the `NAILONG_NEED_ADMIN` configuration was not effective.
- Fixed a bug where the group administrator and superuser settings were ignored.
- Refactored some code and fixed potential bugs.
- Added the `$checked_image` variable.
- Models are now downloaded from the original repository.
- Refactored the plugin to support multiple platforms.
- Added two new models and optimized model accuracy; users can choose one of them for inference.
- Added features such as muting, group blacklist/whitelist, and an option to skip detection for administrators.
- Added an option for auto-updating the model.