Releases: strakam/generals-bots
v2.0.0
2.0.0 (2024-10-26)
🥳 We are happy to announce big new updates!
🚀 Features
- Agent Deployment - you can now run your agents on online servers by running a single script. You also get replay links from generals.io, so you can analyze your games later.
- Improved map generation - generals are now spawned sufficiently far apart, and there always exists at least one path between them that contains no `city` or `mountain`.
- Added a truncation parameter - training episodes can now be stopped when they run too long.
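The truncation parameter follows the usual Gymnasium convention, where `step` reports both a `terminated` flag (the game ended naturally) and a `truncated` flag (the episode was cut off). Below is a minimal sketch of how a training loop handles that signal; `StubEnv` and its parameters are hypothetical stand-ins, not the actual generals-bots API:

```python
# Sketch of handling a Gymnasium-style truncation signal.
# StubEnv is a hypothetical placeholder, not the real generals-bots environment.
class StubEnv:
    def __init__(self, truncation=50):
        self.truncation = truncation  # max steps before the episode is cut off
        self.steps = 0

    def reset(self, seed=None):
        self.steps = 0
        return {"armies": 1}, {}  # observation, info

    def step(self, action):
        self.steps += 1
        terminated = False  # a general was captured (never happens in this stub)
        truncated = self.steps >= self.truncation  # episode ran too long
        return {"armies": self.steps}, 0.0, terminated, truncated, {}

env = StubEnv(truncation=50)
obs, info = env.reset(seed=0)
done = False
steps = 0
while not done:
    obs, reward, terminated, truncated, info = env.step(action=None)
    done = terminated or truncated  # stop on either signal
    steps += 1

print(steps)  # -> 50: the episode stops once it runs too long
```

Separating truncation from termination lets value-based methods bootstrap correctly at a cutoff instead of treating it as a real game end.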
🐛 Bug Fixes
- Environment determinism - environments now generate exactly the same games when given the same seed. This makes experiments reproducible.
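The reproducibility guarantee follows the standard seeded-reset pattern: all randomness flows through an RNG initialized from the seed passed to `reset`. A hedged sketch with a toy stand-in environment (`ToyEnv` and its tile encoding are illustrative, not the real generals-bots API):

```python
import random

class ToyEnv:
    """Hypothetical stand-in env: the map is generated from the seed given to reset."""

    def reset(self, seed=None):
        rng = random.Random(seed)  # route all randomness through one seeded RNG
        # 4x4 grid of plain (.), mountain (#), or city (c) tiles
        grid = [[rng.choice(".#c") for _ in range(4)] for _ in range(4)]
        return grid, {}

def rollout(seed):
    env = ToyEnv()
    grid, _info = env.reset(seed=seed)
    return grid

# Identical seeds must produce identical games, making experiments reproducible.
assert rollout(seed=42) == rollout(seed=42)
```

The key design point is that the environment never touches global random state, so two runs with the same seed cannot diverge.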
✨ Enhancements
- We now also have a wiki
- Observation spaces are now more aligned with the real generals.io
v1.0.0
1.0.0 (2024-10-14)
🥳 Generals-Bots is out!
We are excited to announce that the initial development phase is now completed!
The goal of this project is to be the last attempt at creating a bot development platform for the game generals.io. To achieve this goal, several design choices had to be made:
- We chose `python` for everything code related. It is home to the most advanced machine-learning tools out there (e.g. `numpy`, `pytorch`), so it is only natural to provide a codebase that is as compatible with them as possible.
- The game integration follows the `Gymnasium` and `PettingZoo` standards. These standards are simple, pythonic, and the de facto standards for current reinforcement learning. This enables development based on machine learning, a feat not possible with previous attempts to build such a platform.
- The project is and will remain open-source. We want to bring the game to the RL community and the RL community to the game! We believe that open-sourcing this project will create fertile ground for pushing the limits of what current bots and players are capable of.
👷 Main features
- 🚀 blazing-fast simulator: run thousands of steps per second with `numpy`-powered efficiency
- 🤝 seamless integration: fully compatible with the RL standards 🤸 Gymnasium and 🦁 PettingZoo
- 🔧 effortless customization: easily tailor environments to your specific needs
- 🔬 analysis tools: leverage features like interactive replays for deeper insights
🤖 Future development
- ⚙️ Include new game modes
- 📹 Export replays as videos or GIFs
- 🕹️ Allow developers to play against their bots
- ⚡ Introduce new agents with varying strategies as benchmarks
... we will be happy if you participate!
❤️ Big thanks to
- @jdujava for continuous feedback, bug-spotting, code improvements, and the implementation of the `ExpanderAgent`
- @Puckoland for extensive refactoring and bringing more "code culture" to the project
- @kvankova for adding CI/CD features and helping with establishing project development practices