This repository has been archived by the owner on Oct 5, 2023. It is now read-only.

Integration of docker update into CSGO CI/CD pipeline #17

Closed
sasanmcp opened this issue May 6, 2020 · 3 comments
Assignees
Labels
enhancement (New feature or request)

Comments


sasanmcp commented May 6, 2020

Guys,
right now when I run the Docker image, it takes quite a while for entry.sh to update the container with new binaries before the server is actually up and running.
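
For context, roughly what I'm doing (a sketch only; the exact flags may differ from the image's README):

```sh
# Pull and start the image; flags here are illustrative, not the exact ones documented.
docker pull cm2network/csgo
docker run -d --net=host --name=csgo-dedicated cm2network/csgo

# On every fresh start the entry script runs a full steamcmd app_update before
# srcds comes up, which is where most of the waiting time goes.
docker logs -f csgo-dedicated
```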

I am thinking it might be a better idea to add a step to the actual CSGO CI/CD pipelines so that whenever a new binary is released, this Docker image is also rebuilt along with it and tagged "latest" (roughly along the lines of the sketch at the end of this comment).

To be transparent, I just pull the "cm2network/csgo" image rather than "cm2network/csgo:latest", but I don't think that makes any difference in my case.
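
To make the idea concrete, the hook I have in mind would be something like the sketch below (the repository path, image name and trigger are placeholders, not the real setup):

```sh
#!/bin/sh
# Hypothetical job fired by the CS:GO release pipeline (or a cron fallback):
# rebuild the image right after a game update and republish it as :latest.
set -e

# Placeholder checkout of the Dockerfile repository.
git clone --depth 1 https://github.com/CM2Walki/CSGO.git csgo-image
cd csgo-image

# Rebuild so the published image already reflects the new release, then push.
# "myregistry/csgo" is a placeholder repository name.
docker build -t myregistry/csgo:latest .
docker push myregistry/csgo:latest
```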

CM2Walki self-assigned this May 6, 2020
CM2Walki added the enhancement (New feature or request) label May 6, 2020
Owner

CM2Walki commented May 6, 2020

Hey @sasanmcp,

The images used to hold the entirety of the CS:GO dedicated server until a couple of months ago (about 17-20 GB), which made the image impractical to pull: you would spend around 15 minutes extracting it and then still have to update it to the latest version anyway.

The current version of the image is only steamcmd plus the entry script, so building it periodically to keep it up to date is not going to change much; the only difference would be the initial steamcmd self-update, which is negligible.

Even if you hook it into the CS:GO update cycle via the Steam Web API or SteamDB, the image will still end up in the same state after it is rebuilt (only steamcmd will be updated).

So I don't really see a reason why it should be done.
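
For reference, the kind of check I mean would look roughly like this; it's only a sketch and assumes the public ISteamApps/UpToDateCheck endpoint, CS:GO's app id 730, and curl/jq on the build host:

```sh
#!/bin/sh
# Ask the Steam Web API whether a given CS:GO build is still current.
# CURRENT_VERSION would normally come from the previous build (e.g. the server's steam.inf);
# passing 0 always reports "out of date".
CURRENT_VERSION="${1:-0}"

RESP=$(curl -fsS "https://api.steampowered.com/ISteamApps/UpToDateCheck/v1/?appid=730&version=${CURRENT_VERSION}")

if [ "$(printf '%s' "$RESP" | jq -r '.response.up_to_date')" != "true" ]; then
  echo "New CS:GO build available: $(printf '%s' "$RESP" | jq -r '.response.required_version')"
  # ...trigger the image rebuild here...
fi
```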

Author

sasanmcp commented May 7, 2020 via email

Owner

CM2Walki commented May 7, 2020

I can think of one approach: use a private registry on your local network with compression disabled (so you don't have to spend 15 minutes extracting the image again). Commit your fully updated Docker images there (via docker commit or docker build, then docker push), and just run from that registry. This approach is obviously quite bandwidth-heavy, but it should cut your deployment time.
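
Roughly like this (names are placeholders, and the registry's compression setting is left out of the sketch):

```sh
# 1. Run a private registry on the local network (placeholder setup).
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# 2. Once a running container has finished updating itself, snapshot it as an image.
docker commit csgo-dedicated localhost:5000/csgo:latest

# 3. Push the fully updated image to the private registry.
docker push localhost:5000/csgo:latest

# 4. On the next deployment, start directly from the pre-updated image.
docker run -d --net=host --name=csgo-dedicated localhost:5000/csgo:latest
```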
