
Fix output dir, add quiet mode. #3

Closed
wants to merge 3 commits

Conversation

dschneller

See commit comments for the reasoning behind these changes.

Previously the script would determine its own location instead of using the current working directory, contrary to what the comment above that line of code stated.

Trying to create a directory at the script's own location is not a good idea anyway and will usually fail if the script is placed somewhere like /usr/local/bin.

* Adds a -q switch to suppress all output on regular successful runs. This makes the script much more suitable for unattended runs via cron, where any other output would usually trigger a notification mail.

* Adds a -n switch to optionally skip the reverse IP address lookup.
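The two switches described above could be wired up roughly like this (a sketch only — the variable and function names are my own illustration, not necessarily what the patch uses):

```shell
#!/bin/sh
# Sketch (names are assumptions): -q suppresses output on successful runs,
# -n skips the reverse IP address lookup.
QUIET=0
SKIP_REVERSE_LOOKUP=0
while getopts "qn" opt; do
    case $opt in
        q) QUIET=1 ;;                 # no output on regular successful runs
        n) SKIP_REVERSE_LOOKUP=1 ;;   # skip reverse IP address lookup
    esac
done
# print only when not in quiet mode
msg() { [ "$QUIET" -eq 1 ] || echo "$@"; }
msg "Starting table backups..."
```

Errors can still go to stderr, so a failed cron run would still produce a notification mail.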
New options: -r (rotating backups, keeping up to n previous files) and -c (providing a mysql configuration file with the credentials to be used). This allows storing the credentials in a protected file instead of having to pass them as a script parameter (which might be logged somewhere) or requiring interactive entry.
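The rotation behind -r could look roughly like this self-contained sketch (dummy files stand in for real dumps; the file naming is an assumption):

```shell
#!/bin/sh
# Sketch: keep only the $ROTATE newest backup files, delete the rest.
ROTATE=3
OUTPUT_DIR=$(mktemp -d)
for i in 1 2 3 4 5; do
    touch "$OUTPUT_DIR/zabbix_$i.sql.gz"   # dummy backups for illustration
done
# list newest first, skip the first $ROTATE entries, remove the remainder
ls -1t "$OUTPUT_DIR"/zabbix_*.sql.gz | tail -n +$((ROTATE + 1)) |
while read -r old; do
    rm -f "$old"
done
REMAINING=$(ls -1 "$OUTPUT_DIR"/zabbix_*.sql.gz | wc -l)
rm -rf "$OUTPUT_DIR"
```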

Using the --defaults-extra-file option for the mysql clients requires working around some quoting issues (hence the eval). Moreover, that option needs to come first.
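The quoting workaround described above can be illustrated like this (the path and variable names are assumptions for the sketch, not taken from the patch):

```shell
#!/bin/sh
# Sketch: credentials live in a protected file instead of on the command
# line, handed to the mysql clients via --defaults-extra-file.
# Example contents of /etc/zabbix/mysql-backup.cnf (assumption):
#   [client]
#   user=zabbix
#   password=secret
MYSQL_CONN='--defaults-extra-file="/etc/zabbix/mysql-backup.cnf"'
# --defaults-extra-file must be the FIRST option; eval re-evaluates the
# command string so the embedded quotes around the path are honored.
DUMP_CMD="mysqldump $MYSQL_CONN --single-transaction zabbix"
# eval "$DUMP_CMD" > zabbix.sql   # not executed in this sketch
```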
@maxhq
Owner

maxhq commented Mar 19, 2015

Thank you very much for your improvements!
I did not have time to do an in-depth review yet, but as soon as that's done, I'll merge them into my repo!

    echo -n "${i_percent}%"
else
    if [ $(($i_percent % 2)) -eq 0 ]; then echo -n "."; fi

TABLE_DUMP_CMD="mysqldump ${MYSQL_CONN} --routines --opt --single-transaction --skip-lock-tables $dump_opt $DBNAME --tables ${table}"


I'm getting this error when not using the conf file.

# ./zabbix-mysql-dump -n -h 127.0.0.1 -u zabbix -pzabbix -d zabbix -o /root/
Configuration:
 - host:     127.0.0.1 (127.0.0.1)
 - database: zabbix
 - user:     zabbix
Fetching list of existing tables...
Starting table backups...
mysqldump: Couldn't find table: "zabbix"
Failed dumping table acknowledges. Aborting.

I found that $DBNAME is already included in ${MYSQL_CONN}.

The resulting TABLE_DUMP_CMD looks like this:

mysqldump -h 127.0.0.1 -u zabbix -pzabbix zabbix --routines --opt --single-transaction --skip-lock-tables --no-data zabbix --tables acknowledges

zabbix appears twice, so the command tries to find a table named zabbix and fails with an error.
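If $DBNAME is indeed already embedded in ${MYSQL_CONN}, the fix would simply be not to repeat it when building the dump command. A minimal sketch of the corrected construction (variable values taken from the report above; this is my reading, not the merged fix):

```shell
#!/bin/sh
# Sketch: MYSQL_CONN already ends with the database name, so the dump
# command must not append $DBNAME a second time.
MYSQL_CONN='-h 127.0.0.1 -u zabbix -pzabbix zabbix'
dump_opt='--no-data'
table='acknowledges'
TABLE_DUMP_CMD="mysqldump ${MYSQL_CONN} --routines --opt --single-transaction --skip-lock-tables ${dump_opt} --tables ${table}"
```

With this construction the database name appears only once as a standalone word (the other zabbix occurrences are the user name and password).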

@endersonmaia

To make merging easier for the maintainer, I suggest splitting this into separate PRs.

Individual PRs for:

  • quiet mode;
  • credentials file;
  • backup rotation;
  • skip reverse lookup;

@maxhq

maxhq commented Jul 21, 2015

@endersonmaia Thank you for your comments! Sorry I left this request open for so long. Splitting the patch up would indeed help me, as the fault you pointed out has to be fixed.
The config file handling introduced in 9d3cfcd is a nice idea, but I think it should be adjusted. Currently it simply excludes a fixed set of options from being read from the command line; I think command-line options should always win (just like mysql handles it).
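The precedence maxhq describes (built-in defaults, then config file, then command line, as mysql itself resolves options) can be sketched like this — variable and option names here are illustrative assumptions:

```shell
#!/bin/sh
# Sketch (names assumed): defaults < config file < command line,
# mirroring how mysql itself resolves options.
DBHOST="localhost"                 # built-in default
CONFIG_FILE="${CONFIG_FILE:-}"     # would be set via -c in the real script
# the config file may override the default...
[ -n "$CONFIG_FILE" ] && [ -r "$CONFIG_FILE" ] && . "$CONFIG_FILE"
# ...but any command-line option overrides the config file
while getopts "h:" opt; do
    case $opt in
        h) DBHOST="$OPTARG" ;;     # command line always wins
    esac
done
```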

@maxhq

maxhq commented Jan 22, 2016

I incorporated all changes but had to rewrite quite a lot of the code.
Thank you @dschneller for your contributions and @endersonmaia + @sm4sh1k for your comments!

The new release is available here: https://github.com/maxhq/zabbix-backup/releases/tag/v0.8.0

maxhq closed this Jan 22, 2016