Releases: kubeagi/arcadia

kuberay-operator-1.0.0

10 Jan 08:28
ecccf39

A Helm chart for Kubernetes

arcadia-0.2.9

10 Jan 08:28
ecccf39

A Helm chart(KubeBB Component) for KubeAGI Arcadia

arcadia-0.2.8

10 Jan 05:40
0dd9a1f

A Helm chart(KubeBB Component) for KubeAGI Arcadia

arcadia-0.2.7

09 Jan 07:54
459dcde

A Helm chart(KubeBB Component) for KubeAGI Arcadia

arcadia-0.2.6

09 Jan 06:28
307e26d

A Helm chart(KubeBB Component) for KubeAGI Arcadia

arcadia-0.2.4

05 Jan 08:27
1d0156c

A Helm chart(KubeBB Component) for KubeAGI Arcadia

arcadia-0.2.3

05 Jan 06:44
cf9e6d4

A Helm chart(KubeBB Component) for KubeAGI Arcadia

arcadia-0.2.2

04 Jan 03:06
fe314a0

A Helm chart(KubeBB Component) for KubeAGI Arcadia

arcadia-0.2.1

02 Jan 07:32
a2fd073

A Helm chart(KubeBB Component) for KubeAGI Arcadia

v0.1.0

29 Dec 07:40
d3065d2

arcadia-v0.1.0

Welcome to this new release, our first release toward one-stop LLMOps!

Images built for this release:

  • kubeagi/arcadia:v0.1.0
  • kubeagi/data-processing:v0.1.0

Breaking Changes:

None

Feature summary 🚀 🚀 🚀

  1. Dataset management
  • Manage data by integrating with object storage (S3); view Excel files and add labels to different data types
  • Versioned dataset management with the default datasource ObjectStorageService
  • Comprehensive data processing capabilities: data cleaning and text splitting (e.g., text segmentation, QA splitting using an LLM)
  • RDMA as an optional storage service that can speed up model/data downloads by roughly 10x
  2. AI Knowledgebase
  • Automatic QA embedding generation and indexing
  • ChromaDB as the default vector store
  3. AI Model and Inference Service
  • Manage the lifecycle of models and inference services
  • Host LLM and embedding models in Kubernetes via our Worker protocol: qwen, baichuan, vicuna, chatglm, bge-large-zh-v1.5, etc.
  • Integrate with powerful third-party providers such as ZhipuAI and OpenAI
  • Model loading acceleration with RDMA network protocols
  • Support for CPU and GPU model serving
  4. LLM Applications
  • A powerful and flexible application runtime
  • GPTs: an initial implementation of LLM application orchestration. Manage and orchestrate Prompt, LLM, and Retriever Chain nodes, with example applications provided (based on Streamlit)
  • LLMChain and RetrievalQAChain for common LLM and RAG applications
  • Create and debug typical GPT-like applications easily in the web console
  • Support for blocking and SSE chat modes
  5. An all-in-one deployment Helm chart
  6. Documentation (online doc link)
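The text-splitting step mentioned under dataset management can be pictured as chunking a document into overlapping windows before embedding. A minimal sketch, assuming fixed-size character chunks with overlap; the chunk size and overlap values are illustrative defaults, not Arcadia's actual configuration:

```python
# Sketch of fixed-size text splitting with overlap, as used in data-processing
# pipelines before embedding. Parameters are illustrative, not Arcadia's defaults.

def split_text(text, chunk_size=200, overlap=50):
    """Split `text` into chunks of at most `chunk_size` characters,
    repeating `overlap` characters between consecutive chunks."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks, start = [], 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

# Example: a 500-character document yields 4 overlapping chunks.
print(len(split_text("a" * 500, chunk_size=200, overlap=50)))  # → 4
```

The overlap preserves context across chunk boundaries so that a retriever can still match phrases that straddle two chunks.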
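The SSE chat mode listed above streams tokens as Server-Sent Events rather than returning one blocking response. A minimal sketch of parsing such a stream on the client side; the event payloads and the `[DONE]` sentinel are assumptions for illustration, not taken from Arcadia's actual API:

```python
# Sketch of consuming a Server-Sent Events (SSE) chat stream.
# The payload format and "[DONE]" marker are hypothetical examples.

def parse_sse_events(lines):
    """Yield the `data:` payload of each SSE event from an iterable of lines.
    A blank line terminates one event; multiple data lines are joined."""
    data_parts = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            data_parts.append(line[len("data:"):].lstrip())
        elif line == "" and data_parts:
            yield "\n".join(data_parts)
            data_parts = []
    if data_parts:  # flush a trailing event with no final blank line
        yield "\n".join(data_parts)

# Example: tokens streamed by a hypothetical chat endpoint.
stream = ["data: Hello", "", "data: world", "", "data: [DONE]", ""]
tokens = [e for e in parse_sse_events(stream) if e != "[DONE]"]
print(tokens)  # → ['Hello', 'world']
```

Blocking mode would instead return the full completion in a single response body, which is simpler but shows no output until generation finishes.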

Changelog

New Features
