
tiup cluster scale-in two TiKV; when the two TiKV reach Tombstone status, tiup cluster restart brings the two TiKV back into the cluster again #1685

Closed
Tammyxia opened this issue Dec 20, 2021 · 3 comments
Labels
type/bug Categorizes issue as related to a bug.


@Tammyxia

Bug Report

Please answer these questions before submitting your issue. Thanks!

  1. What did you do?
  • Scale in two TiKV nodes; the regions on them decrease as expected.
  • When the two TiKV nodes reach Tombstone status, restart the cluster: tiup cluster restart
  • Check the cluster status and the TiKV Grafana dashboards.
  2. What did you expect to see?
  • The two scaled-in TiKV nodes are not added back to the TiDB cluster.
  3. What did you see instead?
  • The status of the two scaled-in TiKV nodes becomes UP again.
  • The two scaled-in TiKV nodes acquire regions again; the PD log shows the two stores are still there.
  4. What version of TiUP are you using (tiup --version)?
@Tammyxia Tammyxia added the type/bug Categorizes issue as related to a bug. label Dec 20, 2021
@srstack srstack self-assigned this Dec 20, 2021
@srstack (Collaborator) commented Dec 22, 2021

PD changed the return struct of the store API. PD will fix this.

@srstack srstack closed this as completed Dec 22, 2021
@rleungx (Member) commented Dec 22, 2021

> PD changed the return struct of the store API. PD will fix this.

Could you share more details on this issue so that we can understand why changing the PD API causes this problem?

@srstack (Collaborator) commented Dec 22, 2021

> PD changed the return struct of the store API. PD will fix this.
>
> Could you share more details on this issue so that we can understand why changing the PD API causes this problem?

tikv/pd#4485

Because TiUP cannot determine the status of the scaled-in TiKV stores from the changed PD return value, the restarted cluster treats them as normal nodes and starts them again.
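The failure mode described above can be illustrated with a minimal sketch (this is not TiUP's actual code). A deployment tool that decides which stores to skip on restart might parse PD's `/pd/api/v1/stores` response and look at each store's `state_name` field. If PD changes the shape or filtering of that response, a check like this silently stops finding Tombstone stores, and the tool starts them as if they were live:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// storesResponse is a trimmed-down, hypothetical mirror of the payload
// returned by PD's stores API; only the fields used below are modeled.
type storesResponse struct {
	Count  int `json:"count"`
	Stores []struct {
		Store struct {
			ID        uint64 `json:"id"`
			Address   string `json:"address"`
			StateName string `json:"state_name"`
		} `json:"store"`
	} `json:"stores"`
}

// tombstoneIDs returns the IDs of stores whose state_name is "Tombstone".
// If PD stops including tombstone stores in the response, or renames the
// field, this function returns an empty slice and the caller wrongly
// concludes there is nothing to skip.
func tombstoneIDs(body []byte) ([]uint64, error) {
	var resp storesResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, err
	}
	var ids []uint64
	for _, s := range resp.Stores {
		if s.Store.StateName == "Tombstone" {
			ids = append(ids, s.Store.ID)
		}
	}
	return ids, nil
}

func main() {
	// Example payload: one live store and one tombstone store.
	payload := []byte(`{"count":2,"stores":[
	  {"store":{"id":1,"address":"10.0.1.1:20160","state_name":"Up"}},
	  {"store":{"id":5,"address":"10.0.1.5:20160","state_name":"Tombstone"}}]}`)
	ids, err := tombstoneIDs(payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(ids) // [5]
}
```

This is why the fix belongs on the PD side (tikv/pd#4485): consumers such as TiUP key their scale-in bookkeeping off the structure of this response, so a change to it breaks their tombstone detection.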
