
improve lua vm executing with pool #2864

Merged
merged 1 commit into karmada-io:master from ikaven1024:pr-luavm on Nov 28, 2022

Conversation

ikaven1024
Member

Signed-off-by: yingjinhui <yingjinhui@didiglobal.com>

What type of PR is this?
/kind cleanup

What this PR does / why we need it:
At present, we create a new Lua VM to run the script on every call. This PR creates the VM up front: before running a script, a VM is taken from a pool, which reduces the cost of VM creation.

I tested it with a demo that calls the script every 10ms:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	// luavm is the pool-backed VM package from this PR; import path assumed.
	"github.com/karmada-io/karmada/pkg/resourceinterpreter/configurableinterpreter/luavm"
)

const script = `
function GetReplicas(obj)
	return 0, {}
end`

func main() {
	// New(false, 10) follows the demo's arguments; 10 is the pool size.
	vm := luavm.New(false, 10)
	for range time.Tick(time.Millisecond * 10) {
		_, _, err := vm.GetReplicas(&unstructured.Unstructured{}, script)
		if err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
	}
}

and implemented GetReplicas in two ways:

  • without pool: runs in the luavm1 pod
  • with pool: runs in the luavm2 pod

[image: CPU and memory usage of the luavm1 (without pool) and luavm2 (with pool) pods]

Although luavm2 uses a little more memory (48M vs. 36M), it uses much less CPU (0.03 vs. 0.1).
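
For context, here is a minimal sketch of the pooling idea, assuming gopher-lua's *lua.LState held in a buffered channel; the VM, New, Get, and Put names are illustrative and not necessarily the PR's actual API:

package luavm

import (
	lua "github.com/yuin/gopher-lua"
)

// VM keeps a bounded pool of reusable Lua states.
// Illustrative sketch only, not the exact implementation in this PR.
type VM struct {
	pool chan *lua.LState
}

// New creates a VM whose pool retains at most poolSize states.
func New(poolSize int) *VM {
	return &VM{pool: make(chan *lua.LState, poolSize)}
}

// Get returns a pooled state, or creates a fresh one when the pool is empty.
func (vm *VM) Get() *lua.LState {
	select {
	case l := <-vm.pool:
		return l
	default:
		return lua.NewState()
	}
}

// Put returns a state to the pool; when the pool is already full, the state
// is closed instead, so poolSize bounds the retained memory.
func (vm *VM) Put(l *lua.LState) {
	select {
	case vm.pool <- l:
	default:
		l.Close()
	}
}

Callers wrap each script execution in Get/Put so states are reused across calls instead of being rebuilt every time; a real pool would likely also reset a state before reuse.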

Which issue(s) this PR fixes:
Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

NONE

@karmada-bot karmada-bot added the kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. label Nov 24, 2022
@karmada-bot karmada-bot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Nov 24, 2022
@XiShanYongYe-Chang
Member

/assign

@XiShanYongYe-Chang
Member

Is the increase in memory consumption related to the pool size of the lua VM?

Member

@XiShanYongYe-Chang left a comment

Thanks~ Can we use this lua VM in the karmadactl interpret command?
/lgtm
/cc @RainbowMango @jameszhangyukun

@karmada-bot karmada-bot added the lgtm Indicates that a PR is ready to be merged. label Nov 26, 2022
@karmada-bot
Collaborator

@XiShanYongYe-Chang: GitHub didn't allow me to request PR reviews from the following users: jameszhangyukun.

Note that only karmada-io members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

Thanks~ Can we use this lua VM in the karmadactl interpret command?
/lgtm
/cc @RainbowMango @jameszhangyukun

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@RainbowMango
Member

Looks great!

Can we use this lua VM in the karmadactl interpret command?

Does #2824 depend on it?

@ikaven1024
Member Author

Is the increase in memory consumption related to the pool size of the lua VM?

Yes. The pool size has to balance memory against CPU cost, but I don't have enough evidence yet to say which value is best.

// TODO: set an appropriate pool size.
luaVM: luavm.New(false, 10),
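
To put numbers on that trade-off, a hypothetical benchmark (written against gopher-lua directly, not this PR's luavm package) could compare creating a fresh Lua state per call with reusing a long-lived one, which is the cost a pooled state amortizes:

package luavm_test

import (
	"testing"

	lua "github.com/yuin/gopher-lua"
)

const benchScript = `
function GetReplicas(obj)
	return 0, {}
end`

// BenchmarkFreshState creates and closes a new Lua state on every call,
// mimicking the behavior before this PR.
func BenchmarkFreshState(b *testing.B) {
	for i := 0; i < b.N; i++ {
		l := lua.NewState()
		if err := l.DoString(benchScript); err != nil {
			b.Fatal(err)
		}
		l.Close()
	}
}

// BenchmarkReusedState runs the same script on a single long-lived state,
// mimicking what a pooled state amortizes across calls.
func BenchmarkReusedState(b *testing.B) {
	l := lua.NewState()
	defer l.Close()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if err := l.DoString(benchScript); err != nil {
			b.Fatal(err)
		}
	}
}

Running go test -bench . -benchmem on something like this would show how much of the per-call cost comes from state creation, which is the part a larger pool trades memory for.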

@ikaven1024
Member Author

Looks great!

Can we use this lua VM in the karmadactl interpret command?

Does #2824 depend on it?

In the interpret command we only call the VM once, so there is no need for a VM pool. The two PRs are not related.

@XiShanYongYe-Chang
Member

/assign @RainbowMango

}

// NewInstance creates a new lua VM
func (vm *VM) NewInstance() (*lua.LState, error) {
Member

NewInstance is not going to create a new lua VM.

Member Author

updated
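
For readers following the thread: the point above is that NewInstance builds a single pooled *lua.LState, not a whole VM/pool. Continuing the hypothetical pool sketch from earlier in this thread, such a constructor might look roughly like this (the options and the set of opened libraries are assumptions, not this PR's actual code):

// NewInstance creates one Lua state for the pool, with only selected
// standard libraries opened. Sketch only.
func (vm *VM) NewInstance() (*lua.LState, error) {
	l := lua.NewState(lua.Options{SkipOpenLibs: true})
	for _, lib := range []struct {
		name string
		fn   lua.LGFunction
	}{
		{lua.BaseLibName, lua.OpenBase},
		{lua.TabLibName, lua.OpenTable},
		{lua.StringLibName, lua.OpenString},
		{lua.MathLibName, lua.OpenMath},
	} {
		if err := l.CallByParam(lua.P{
			Fn:      l.NewFunction(lib.fn),
			NRet:    0,
			Protect: true,
		}, lua.LString(lib.name)); err != nil {
			l.Close()
			return nil, err
		}
	}
	return l, nil
}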

Signed-off-by: yingjinhui <yingjinhui@didiglobal.com>
@karmada-bot karmada-bot removed the lgtm Indicates that a PR is ready to be merged. label Nov 28, 2022
Member

@RainbowMango left a comment

/lgtm
/approve

@karmada-bot karmada-bot added the lgtm Indicates that a PR is ready to be merged. label Nov 28, 2022
@karmada-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RainbowMango

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 28, 2022
@karmada-bot karmada-bot merged commit d7b3a1c into karmada-io:master Nov 28, 2022
@ikaven1024 ikaven1024 deleted the pr-luavm branch November 28, 2022 10:11