[Help]: Updated docker container without backup #324
-
Controller Version
5.9.31

Describe Your Issue or Question
Today I performed an update through the Synology Container Manager. Unfortunately I didn't pay close attention and had no backup before performing this update. There were 32 switches attached with many port configurations. I found out that I can pull the running config from the 48-port switches by connecting to the serial port, but the 10-port switches have no serial port, and SSH is disabled. The Docker container was not set up with dedicated volumes which I can point the new container to. Can anyone help me out, or point me in the right direction?

Expected Behavior
Data retention after updating, but I should've created an export beforehand anyway.

Steps to Reproduce
Configure all the switches, update Omada Controller, and lose everything you've worked on for the last months.

How You're Launching the Container
Container Logs
Additional Context
No response
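For context, a minimal sketch of launching this image with dedicated volumes so the data survives recreating the container (the volume names omada-data and omada-logs are illustrative; the mount paths match the destinations shown in the reply below, and the image tag just mirrors the one used there):

$ docker volume create omada-data
$ docker volume create omada-logs
$ docker run -d --name omada-controller \
    -v omada-data:/opt/tplink/EAPController/data \
    -v omada-logs:/opt/tplink/EAPController/logs \
    mbentley/omada-controller:5.12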
Replies: 1 comment
-
Sorry that I never responded to this. I am not sure exactly how the container was deployed, but none of the methods I suggest omit volumes. I am not that familiar with the Synology Container Manager and whether there is some sort of catalog of deployable containers, but if there is a catalog, it sounds like it is not configured correctly by default. Without volumes, the container should actually be configured to write data to unnamed, Docker-managed volumes, which would show up from docker volume ls.

So if I start a container without any volumes like:

$ docker run -d --name omada-controller mbentley/omada-controller:5.12

It'll create two volumes:

$ docker volume ls
DRIVER    VOLUME NAME
local     0aae19bc249e043a699a46c9541267c21d8300114617364ef2a57da0955e6329
local     1d35d7d1b2c77db3851110d445456eb6fe0f25f5ca408f13c607387a9823c3fe

Only if you have the old container definition still around can you really see where they were mounted without inspecting the contents of the volumes:

$ docker inspect omada-controller --format '{{json .Mounts}}' | jq .
[
  {
    "Type": "volume",
    "Name": "0aae19bc249e043a699a46c9541267c21d8300114617364ef2a57da0955e6329",
    "Source": "/var/lib/docker/volumes/0aae19bc249e043a699a46c9541267c21d8300114617364ef2a57da0955e6329/_data",
    "Destination": "/opt/tplink/EAPController/data",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
  },
  {
    "Type": "volume",
    "Name": "1d35d7d1b2c77db3851110d445456eb6fe0f25f5ca408f13c607387a9823c3fe",
    "Source": "/var/lib/docker/volumes/1d35d7d1b2c77db3851110d445456eb6fe0f25f5ca408f13c607387a9823c3fe/_data",
    "Destination": "/opt/tplink/EAPController/logs",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
  }
]
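If those unnamed volumes are still present after the upgrade, a rough recovery sketch (the long volume names below are the ones from the example output above; substitute whatever docker volume ls shows on your host) is to peek inside each volume with a throwaway container to find the one holding the controller data, then attach both volumes to a fresh container at the same destinations:

# list the contents of a candidate volume to see which one holds the controller data
$ docker run --rm -v 0aae19bc249e043a699a46c9541267c21d8300114617364ef2a57da0955e6329:/recover alpine ls -la /recover

# re-attach the data and log volumes to a new container at the original paths
$ docker run -d --name omada-controller \
    -v 0aae19bc249e043a699a46c9541267c21d8300114617364ef2a57da0955e6329:/opt/tplink/EAPController/data \
    -v 1d35d7d1b2c77db3851110d445456eb6fe0f25f5ca408f13c607387a9823c3fe:/opt/tplink/EAPController/logs \
    mbentley/omada-controller:5.12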