0.10 beta - influx_tsm conversion from bz1 #5406
@discoduck2x Has the conversion tool output any information yet? It should give you some updates as it progresses through the shards.
@toddboom, thanks for the reply. I guess I'll let it simmer for another 5-10 hours and see what happens... this should go out with a warning, though, to anyone trying to convert 200+ GB of data.
Approximately how big are the new folders? Depending on the data, I'd expect the final result to be in the 10-20GB range. Also, we did most of our testing on SSD, so it's possible that it's I/O bound on the disk array, but I'd expect you to still get decent performance. Keep me posted as it progresses.
200-300 MB per folder, will let you know more tomorrow. Thx
I did a little digging, and I think I might have figured out why your conversion is crawling along. The conversion tool creates 20 goroutines (one per shard for your 20 shards), but only one goroutine will be running at any given time. This would explain your "low CPU usage". If you enabled backups, or rather if you didn't disable them, you should be able to kill the process, move the backed-up directories back into place, and start again. I'm going to play around with the parallelism in the conversion tool, as well as add some extra logging for the case where a TSM file hits the max size (although it doesn't sound like that's the case you're seeing here).
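A minimal sketch of the "move the directories back and start again" recovery described above, with one speculative tweak: if the build is on pre-1.5 Go (which defaults GOMAXPROCS to 1, so all of the tool's goroutines share a single OS thread), raising the standard GOMAXPROCS environment variable may let the parallel conversion actually use multiple cores. The flags and paths below are assumptions; verify against influx_tsm -h for your build.

```sh
# Stop influxd so the shard files aren't in use during conversion.
sudo service influxdb stop

# Restore the original bz1 shards from the aborted run's backup
# (backup location is a placeholder).
cp -a /path/to/influx_backup/. /var/lib/influxdb/data/

# Let the Go runtime schedule goroutines across all cores, then
# re-run the conversion with per-shard parallelism enabled.
GOMAXPROCS=$(nproc) influx_tsm -parallel -backup /path/to/influx_backup /var/lib/influxdb/data
```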
I cancelled this morning; it had done 2 shards by then, and one took 9h. Reverted to the backup, and I'm gonna add iostat to the host to see how queue lengths etc. look, but it feels like something else.
before: -rw-r--r-- 1 root root 12G Jan 21 07:37 510
My test env, which has been running the 0.10 beta, started to behave strangely today, so I tried to revert this production host: removed the 0.10 beta and reinstalled the stable 0.9.6.1, but I cannot get it to load my database.bak stuff. I am copying them back into place, and when it's started up and I go into the CLI, "show databases" only shows _internal. If I do "use mydb" then I CAN do "show series", but if I do "show measurements" it says "database not found"... please help!
@discoduck2x Did you remove the converted shard data before restoring the backups? The backed-up directories would also need to be moved back to their original locations. @toddboom Can you think of any other steps that might need to be taken when downgrading and restoring the backed-up data?
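A rough sketch of the downgrade path being discussed, assuming default package locations and that the conversion's backups are intact; every path here is a placeholder for your actual layout.

```sh
# Stop the 0.10 beta before touching any files.
sudo service influxdb stop

# Move the converted tsm1 shards aside; 0.9.6.1 only knows how to
# open b1/bz1 shard files.
mv /var/lib/influxdb/data/mydb /var/lib/influxdb/data/mydb.tsm1-converted

# Restore the backed-up bz1 shards to their original location.
cp -a /path/to/influx_backup/mydb /var/lib/influxdb/data/mydb

# influxd normally runs as the influxdb user; fix ownership after
# copying files around as root.
chown -R influxdb:influxdb /var/lib/influxdb/data

sudo service influxdb start
```

One thing worth checking given the "show databases" symptom above: in 0.9.x the list of databases lives in the meta store (the meta/ directory), not in the shard files themselves, so if the reinstall wiped or replaced meta/, restoring the data directory alone won't bring the database definitions back.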
@joelegasse Yes, I tried again to "upgrade", this time using the nightly build, which seems a bit faster; all shards finished in about 9 hours. However, after I start influx, the log gives me some info that it's reading this and that, but I still can't see the database in the CLI, nor access it via Grafana. Here's the log output as I start it: 2016/01/22 23:34:22 InfluxDB starting, version 0.10.0-nightly-72c6a51, branch master, commit 72c6a51, built 2016-01-11T05:00:46+0000 ... and here's the CLI:
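A quick way to sanity-check what the restarted process can actually see, based purely on the on-disk layout (the data path is a placeholder): b1/bz1 shards are single files named by shard ID, like the 12G "510" file shown earlier, while converted tsm1 shards are directories containing .tsm files.

```sh
# Shards still on the old engines: regular files whose names are
# numeric shard IDs, at <data>/<database>/<retention policy>/<id>.
find /var/lib/influxdb/data -maxdepth 3 -type f -regex '.*/[0-9]+' -ls

# Shards already converted to tsm1: .tsm files inside shard directories.
find /var/lib/influxdb/data -name '*.tsm' -ls
```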
I'm gonna wait till Sunday, then just kick it all out and restart with empty database(s) on 0.9.6.1 stable.
Gave up... went all the way back to 0.9.4.2.
~200 GB of database across 20 shards; it's been running for 5 hours now without high CPU usage. Any way to see progress? And, more important, what happens if I Ctrl-C? Corruption?
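Beyond the per-shard updates the tool itself logs, a rough external way to watch progress and to check whether the run is disk-bound rather than CPU-bound; paths are placeholders, and the shard hierarchy is the same database/retention-policy/shard layout noted above.

```sh
# Size of the tsm1 output written so far (the trailing slash limits
# the glob to shard directories, i.e. converted shards).
du -sh /var/lib/influxdb/data/*/*/*/

# Original shard sizes in the backup, for comparison.
du -sh /path/to/influx_backup

# Extended device stats every 5 seconds: queue lengths and %util
# show whether the conversion is I/O-bound.
iostat -x 5
```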