diff --git a/README.md b/README.md
index 11475da8f..c1b3476ff 100644
--- a/README.md
+++ b/README.md
@@ -16,18 +16,17 @@
 
 ## What is Arcadia?
 
-**Arcadia** comes from [Greek mythology](https://www.greekmythology.com/Myths/Places/Arcadia/arcadia.html)(a tranquil and idyllic region, representing harmony, serenity, and natural beauty). We aim to help everyone find a more perfect integration between humans and AI.
-
-To achieve this goal, we provide this one-stop LLMOps solution. Furthermore, we can easily host **Arcadia** at any Kubernetes cluster as production ready by integrating [kubebb](https://github.com/kubebb)(Kubernetes building blocks).
+**Arcadia** is a one-stop enterprise-grade LLMOps platform that provides a unified interface for developers and operators to build, debug, deploy, and manage AI agents on top of an orchestration engine (**RAG (Retrieval-Augmented Generation)** and **LLM fine-tuning** are already supported).
 
 ## Features
-* Multi-tenant isolation (data, model services), built-in OIDC, RBAC, and auditing, supporting different companies and departments to develop through a unified platform
-* Kubernetes native AGI agent orchestration
+
+* Build, debug, and deploy AI agents on ops-console (the LLMOps GUI)
+* Chat with AGI agents on agent-portal (the end-user chat GUI)
+* Enterprise-grade infrastructure with [KubeBB](https://github.com/kubebb): multi-tenant isolation (data, model services), built-in OIDC, RBAC, and auditing, so different companies and departments can develop on a unified platform
+* Support for most popular LLMs (large language models), embedding models, reranking models, and more
+* Inference acceleration with [vLLM](https://github.com/vllm-project/vllm), distributed inference with [Ray](https://github.com/ray-project/ray), quantization, and more
+* Fine-tuning support with [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
 * Built on langchaingo (Golang) for better performance and maintainability
-* Support distributed inference using Ray
-* Support quality and performance evaluation of AGI agent under different configurations
-* A development and operational platform for AI agents, along with an AI agent portal for end-users
-* Developed based on micro frontends and low-code approach, allowing for quick scalability and integration
 
 ## Architecture
 
@@ -45,52 +44,59 @@ Visit our [online documents](http://kubeagi.k8s.com.cn/docs/intro)
 
 Read [user guide](http://kubeagi.k8s.com.cn/docs/UserGuide/intro)
 
+## Supported Models
+
+### Models that can be deployed by KubeAGI
+
 ### LLMs
 
-List of supported(tested) LLMs
-* baichuan2-7b
-* chatglm2-6b
-* qwen-7b-chat / qwen-14b-chat / qwen-72b-chat
-* llama2-7b
-* Mistral-7B-Instruct-v0.1
-* bge-large-zh  ***embedding***
-* m3e ***embedding***
-* [ZhiPuAI(智谱 AI)](https://github.com/kubeagi/arcadia/tree/main/pkg/llms/zhipuai)
-  - [example](https://github.com/kubeagi/arcadia/blob/main/examples/zhipuai/main.go)
-  - [embedding](https://github.com/kubeagi/arcadia/tree/main/pkg/embeddings/zhipuai)
-* [DashScope(灵积模型服务)](https://github.com/kubeagi/arcadia/tree/main/pkg/llms/dashscope)
-  - [example](https://github.com/kubeagi/arcadia/blob/main/examples/dashscope/main.go)
-  - [text-embedding-v1(通用文本向量 同步接口)](https://help.aliyun.com/zh/dashscope/developer-reference/text-embedding-api-details)
+* [chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b)
+* [chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b)
+* [qwen (7B, 14B, 72B)](https://huggingface.co/Qwen)
+* [qwen-1.5 (0.5B, 1.8B, 4B, 14B, 32B)](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524)
+* [baichuan2](https://huggingface.co/baichuan-inc)
+* [llama2](https://huggingface.co/meta-llama)
+* [mistral](https://huggingface.co/mistralai)
 
 ### Embeddings
 
-> Fully compatible with [langchain embeddings](https://github.com/tmc/langchaingo/tree/main/embeddings)
+* [bge-large-zh](https://huggingface.co/BAAI/bge-large-zh-v1.5)
+* [m3e](https://huggingface.co/moka-ai/m3e-base)
+
+### Reranking
+
+* [bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large)
+* [bce-reranking](https://github.com/netease-youdao/BCEmbedding)
 
-### VectorStores
+### Online (third-party) LLM services that can be integrated with KubeAGI
+
+* [OpenAI](https://openai.com/)
+* [Google Gemini](https://gemini.google.com/)
+* [ZhiPuAI (智谱AI)](https://github.com/kubeagi/arcadia/tree/main/pkg/llms/zhipuai)
+  * [example](https://github.com/kubeagi/arcadia/blob/main/examples/zhipuai/main.go)
+  * [embedding](https://github.com/kubeagi/arcadia/tree/main/pkg/embeddings/zhipuai)
+* [DashScope(灵积模型服务)](https://github.com/kubeagi/arcadia/tree/main/pkg/llms/dashscope)
+  * [example](https://github.com/kubeagi/arcadia/blob/main/examples/dashscope/main.go)
+  * [text-embedding-v1 (general-purpose text embedding, synchronous API)](https://help.aliyun.com/zh/dashscope/developer-reference/text-embedding-api-details)
+
+## Supported VectorStores
 
 > Fully compatible with [langchain vectorstores](https://github.com/tmc/langchaingo/tree/main/vectorstores)
 
-- ✅ [PG Vector](https://github.com/tmc/langchaingo/tree/main/vectorstores/pgvector), KubeAGI adds the PG vector support to [langchaingo](https://github.com/tmc/langchaingo) project.
-- ✅ [ChromaDB](https://docs.trychroma.com/)
+* ✅ [PG Vector](https://github.com/tmc/langchaingo/tree/main/vectorstores/pgvector): KubeAGI contributed PG Vector support to the [langchaingo](https://github.com/tmc/langchaingo) project.
+* ✅ [ChromaDB](https://docs.trychroma.com/)
 
 ## Pure Go Toolchains
 
 Thanks to [langchaingo](https://github.com/tmc/langchaingo), we can have comprehensive AI capabilities in Golang! But to meet our own unique needs, we have developed a number of additional toolchains:
 
-- [Optimized DocumentLoaders](https://github.com/kubeagi/arcadia/tree/main/pkg/documentloaders): optimized csv,etc...
-- [Extended LLMs](https://github.com/kubeagi/arcadia/tree/main/pkg/llms): zhipuai,dashscope,etc...
-- [Tools](https://github.com/kubeagi/arcadia/tree/main/pkg/tools): bingsearch,weather,etc...
-- [AppRuntime](https://github.com/kubeagi/arcadia/tree/main/pkg/appruntime): powerful node(LLM,Chain,KonwledgeBase,vectorstore,Agent,etc...) orchestration runtime for arcadia
+* [Optimized DocumentLoaders](https://github.com/kubeagi/arcadia/tree/main/pkg/documentloaders): optimized CSV loader, and more
+* [Extended LLMs](https://github.com/kubeagi/arcadia/tree/main/pkg/llms): zhipuai, dashscope, and more
+* [Tools](https://github.com/kubeagi/arcadia/tree/main/pkg/tools): bingsearch, weather, and more
+* [AppRuntime](https://github.com/kubeagi/arcadia/tree/main/pkg/appruntime): a powerful node (LLM, Chain, KnowledgeBase, VectorStore, Agent, etc.) orchestration runtime for Arcadia
 
 We have provided some examples of how to use them. See more details [here](https://github.com/kubeagi/arcadia/tree/main/examples).
 
-## CLI
-
-We provide a Command Line Tool `arctl` to interact with `arcadia`. See [here](http://kubeagi.k8s.com.cn/docs/Tools/arctl-tool) for more details.
-
-- ✅ datasource management
-- ✅ RAG evaluation
-
 ## Contribute to Arcadia
 
 If you want to contribute to Arcadia, refer to [contribute guide](http://kubeagi.k8s.com.cn/docs/Contribute/prepare-and-start).
diff --git a/deploy/charts/arcadia/Chart.yaml b/deploy/charts/arcadia/Chart.yaml
index 4f25507cf..3c17857a9 100644
--- a/deploy/charts/arcadia/Chart.yaml
+++ b/deploy/charts/arcadia/Chart.yaml
@@ -3,7 +3,7 @@ name: arcadia
 description: A Helm chart(Also a KubeBB Component) for KubeAGI Arcadia
 type: application
 version: 0.3.30
-appVersion: "0.2.1"
+appVersion: "0.2.2"
 
 keywords:
   - LLMOps
diff --git a/deploy/charts/arcadia/values.yaml b/deploy/charts/arcadia/values.yaml
index 66c7b8f04..cfb5b4613 100644
--- a/deploy/charts/arcadia/values.yaml
+++ b/deploy/charts/arcadia/values.yaml
@@ -37,7 +37,7 @@ config:
 controller:
   # 1: error 3:info 5:debug
   loglevel: 3
-  image: kubeagi/arcadia:v0.2.1-20240401-b80e4e4
+  image: kubeagi/arcadia:v0.2.2
   imagePullPolicy: IfNotPresent
   resources:
     limits:
@@ -51,7 +51,7 @@ controller:
 # related project: https://github.com/kubeagi/arcadia/tree/main/apiserver
 apiserver:
   loglevel: 3
-  image: kubeagi/arcadia:v0.2.1-20240401-b80e4e4
+  image: kubeagi/arcadia:v0.2.2
   enableplayground: false
   port: 8081
   ingress:
@@ -70,7 +70,7 @@ apiserver:
 opsconsole:
   enabled: true
   kubebbEnabled: true
-  image: kubeagi/ops-console:v0.2.1-20240401-2e63d80
+  image: kubeagi/ops-console:v0.2.2
   ingress:
     path: kubeagi-portal-public
     host: portal.<replaced-ingress-nginx-ip>.nip.io
@@ -81,7 +81,7 @@ gpts:
   # all gpt resources are public in this namespace
   public_namespace: gpts
   agentportal:
-    image: kubeagi/agent-portal:v0.1.0-20240401-bc9e42d
+    image: kubeagi/agent-portal:v0.1.0-20240411-e26a310
     ingress:
       path: ""
       host: gpts.<replaced-ingress-nginx-ip>.nip.io
@@ -91,7 +91,7 @@ fastchat:
   enabled: true
   image:
     repository: kubeagi/arcadia-fastchat
-    tag: v0.2.36
+    tag: v0.2.36-patch
   ingress:
     enabled: true
     host: fastchat-api.<replaced-ingress-nginx-ip>.nip.io
@@ -131,7 +131,7 @@ minio:
 # Related project: https://github.com/kubeagi/arcadia/tree/main/data-processing
 dataprocess:
   enabled: true
-  image: kubeagi/data-processing:v0.2.1
+  image: kubeagi/data-processing:v0.2.2
   port: 28888
   config:
     llm: