runtime: LockOSThread: unexpected gopreempt #36122
/cc @aclements
Locking a goroutine to a thread does not mean that the thread will not be preempted. It means that when the goroutine is scheduled to run, it will only run on the thread to which it has been locked. And it means that that thread will only run the goroutine that has been locked to it.
@ianlancetaylor Thanks for your response.
Both threads using 50% of the CPU is what I would expect from that code. You write as though you expect something different. What do you expect?
If I delete …
Why? Again, gopreempt has nothing to do with …
Maybe the question is "why does the periodic preempt policy apply if the M cannot run any other G?" And @ChenYahui2019 why do you care if there is a gopreempt in this case?
In performance-sensitive scenarios such as DPDK, we want a specific G to monopolize a CPU that has been isolated, with the OS thread bound to it using …
@ChenYahui2019 Unfortunately that's not how Go works. One issue is that the concurrent garbage collector requires the ability to briefly stop all threads while moving to the next stage of collection. So the garbage collector can stop any thread. Another, probably more important, issue is that the Go scheduler only permits up to …
So, if I understand correctly, LockOSThread only locks a goroutine to run exclusively on the current OS thread. This is useful for, say, GUI libraries or games on certain platforms where drawing and rendering must be done on the main thread of the program. However, this does not change anything about scheduling? The documentation of LockOSThread could be expanded to state that clearly.
@ianlancetaylor If …
@beoran Yes. @ChenYahui2019 If I understand you correctly, then what you describe is how the scheduler works. If …
@ianlancetaylor Yep, what you say is exactly what I mean.
What version of Go are you using (go version)?

Does this issue reproduce with the latest release?
NA

What operating system and processor architecture are you using (go env)?
go env Output

What did you do?
I am trying to use runtime.LockOSThread() to bind a G to an M; unexpectedly, I discovered gopreempt actions on this G. Here's the code:
What did you expect to see?
No preemption; the G keeps running without interruption.
What did you see instead?
Below are my debugging steps:
$ pidstat -p 12692 -t 1
Linux 3.10.0-862.el7.x86_64 (openstack-22) 12/13/2019 x86_64 (24 CPU)
03:42:47 PM UID TGID TID %usr %system %guest %CPU CPU Command
03:42:48 PM 0 12692 - 100.00 0.00 0.00 100.00 8 lockosthread
03:42:48 PM 0 - 12692 0.00 0.00 0.00 0.00 8 |__lockosthread
03:42:48 PM 0 - 12693 0.00 0.00 0.00 0.00 11 |__lockosthread
03:42:48 PM 0 - 12694 100.00 0.00 0.00 100.00 21 |__lockosthread
03:42:48 PM 0 - 12695 0.00 0.00 0.00 0.00 23 |__lockosthread
03:42:48 PM 0 - 12696 0.00 0.00 0.00 0.00 11 |__lockosthread
03:42:48 PM 0 - 12697 0.00 0.00 0.00 0.00 23 |__lockosthread
03:42:48 PM UID TGID TID %usr %system %guest %CPU CPU Command
03:42:49 PM 0 12692 - 100.00 0.00 0.00 100.00 8 lockosthread
03:42:49 PM 0 - 12692 0.00 0.00 0.00 0.00 8 |__lockosthread
03:42:49 PM 0 - 12693 0.00 0.00 0.00 0.00 11 |__lockosthread
03:42:49 PM 0 - 12694 100.00 0.00 0.00 100.00 21 |__lockosthread
03:42:49 PM 0 - 12695 0.00 0.00 0.00 0.00 23 |__lockosthread
03:42:49 PM 0 - 12696 0.00 0.00 0.00 0.00 11 |__lockosthread
03:42:49 PM 0 - 12697 0.00 0.00 0.00 0.00 23 |__lockosthread
^C
Average: UID TGID TID %usr %system %guest %CPU CPU Command
Average: 0 12692 - 100.00 0.00 0.00 100.00 - lockosthread
Average: 0 - 12692 0.00 0.00 0.00 0.00 - |__lockosthread
Average: 0 - 12693 0.00 0.00 0.00 0.00 - |__lockosthread
Average: 0 - 12694 100.00 0.00 0.00 100.00 - |__lockosthread
Average: 0 - 12695 0.00 0.00 0.00 0.00 - |__lockosthread
Average: 0 - 12696 0.00 0.00 0.00 0.00 - |__lockosthread
Average: 0 - 12697 0.00 0.00 0.00 0.00 - |__lockosthread
$ strace -p 12694 -ttT
strace: Process 12694 attached
15:43:17.197854 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000021>
15:43:17.229295 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000017>
15:43:17.260622 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000018>
15:43:17.291778 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000017>
15:43:17.323010 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000014>
15:43:17.354231 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000018>
15:43:17.385496 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000021>
15:43:17.416749 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000021>
15:43:17.448026 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000021>
15:43:17.479394 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000021>
15:43:17.510628 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000021>
15:43:17.541912 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000020>
15:43:17.573335 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000021>
15:43:17.604672 futex(0xc000078148, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000021>
// as shown above, there is a futex call roughly every 31 ms
// as shown below, goroutine preemption happened while calling futex
(dlv) bt
0 0x000000000042d88f in runtime.gopreempt_m
at /usr/lib/golang/src/runtime/proc.go:2649
1 0x0000000000456ba5 in main.a
at ./lockosthread.go:7
2 0x0000000000456c9e in main.main.func1
at ./lockosthread.go:37
3 0x000000000044d191 in runtime.goexit
at /usr/lib/golang/src/runtime/asm_amd64.s:1357
(dlv) goroutine
Thread 12694 at /usr/lib/golang/src/runtime/proc.go:2649
Goroutine 5:
Runtime: /usr/lib/golang/src/runtime/proc.go:2649 runtime.gopreempt_m (0x42d88f)
User: ./lockosthread.go:7 main.a (0x456ba5)
Go: ./lockosthread.go:32 main.main (0x456c35)
Start: ./lockosthread.go:32 main.main.func1 (0x456c60)