First take #1 (Open)
wants to merge 3 commits into main
Conversation

@Razz4780 (Collaborator) commented Jul 25, 2023

This does not work when the Go binary is forced to be statically linked (build command: go build -ldflags "-linkmode 'external' -extldflags '-static'"). Attempting to load the layer this way crashes the program at runtime. However, this happens only when injecting the mirrord layer; I tested other shared libraries (libc and libcurl) and they all worked fine, so there might be some issue with the layer itself.
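For reference, the full setup looks roughly like this (the output name and layer path are illustrative, not taken from the thread):

go build -ldflags "-linkmode 'external' -extldflags '-static'" -o app .
LD_PRELOAD=./libmirrord_layer.so ./app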

When the layer path is provided via LD_PRELOAD, the program crashes at runtime with this stack trace:

fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x220 pc=0x7fcafa950fed]

runtime stack:
runtime.throw({0x56c38a?, 0x7fcafa9466cc?})
        /usr/local/go/src/runtime/panic.go:1047 +0x5d fp=0x7ffcba1c58b0 sp=0x7ffcba1c5880 pc=0x4327dd
runtime.sigpanic()
        /usr/local/go/src/runtime/signal_unix.go:821 +0x3e9 fp=0x7ffcba1c5910 sp=0x7ffcba1c58b0 pc=0x446ec9

goroutine 1 [syscall, locked to thread]:
runtime.cgocall(0x499e00, 0xc0000e0000)
        /usr/local/go/src/runtime/cgocall.go:157 +0x5c fp=0xc0000d5378 sp=0xc0000d5340 pc=0x404c1c
github.com/ebitengine/purego.RegisterFunc.func1({0xc0000c6480?, 0x2?, 0x2?})
        /home/razz4780/go/pkg/mod/github.com/ebitengine/purego@v0.4.0/func.go:212 +0xa31 fp=0xc0000d56e8 sp=0xc0000d5378 pc=0x4988b1
reflect.callReflect(0xc0000c6240, 0xc0000d5c50, 0xc0000d5b28, 0xc0000d5b30)
        /usr/local/go/src/reflect/value.go:772 +0x56d fp=0xc0000d5ad8 sp=0xc0000d56e8 pc=0x486b4d
reflect.callReflect(0xc0000c6240, 0xc0000d5c50, 0xc0000d5b28, 0xc0000d5b30)
        <autogenerated>:1 +0x4b fp=0xc0000d5b08 sp=0xc0000d5ad8 pc=0x48bd6b
reflect.makeFuncStub()
        /usr/local/go/src/reflect/asm_amd64.s:47 +0x7a fp=0xc0000d5c50 sp=0xc0000d5b08 pc=0x48adda
github.com/ebitengine/purego.Dlopen({0xc00001a00b?, 0x7fcb310ae2e8?}, 0x1?)
        /home/razz4780/go/pkg/mod/github.com/ebitengine/purego@v0.4.0/dlfcn.go:37 +0x25 fp=0xc0000d5c88 sp=0xc0000d5c50 pc=0x4979e5
github.com/metalbear-co/mirrord-goshim.init.0()
        /home/razz4780/go/pkg/mod/github.com/metalbear-co/mirrord-goshim@v0.0.0-20230725124701-9241722ce32a/init.go:25 +0x5d fp=0xc0000d5d20 sp=0xc0000d5c88 pc=0x49919d
runtime.doInit(0x62b9e0)
        /usr/local/go/src/runtime/proc.go:6506 +0x126 fp=0xc0000d5e50 sp=0xc0000d5d20 pc=0x4425c6
runtime.doInit(0x62ade0)
        /usr/local/go/src/runtime/proc.go:6483 +0x71 fp=0xc0000d5f80 sp=0xc0000d5e50 pc=0x442511
runtime.main()
        /usr/local/go/src/runtime/proc.go:233 +0x1c6 fp=0xc0000d5fe0 sp=0xc0000d5f80 pc=0x435086
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000d5fe8 sp=0xc0000d5fe0 pc=0x460241

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000088fb0 sp=0xc000088f90 pc=0x4354f6
runtime.goparkunlock(...)
        /usr/local/go/src/runtime/proc.go:387
runtime.forcegchelper()
        /usr/local/go/src/runtime/proc.go:305 +0xb0 fp=0xc000088fe0 sp=0xc000088fb0 pc=0x435330
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000088fe8 sp=0xc000088fe0 pc=0x460241
created by runtime.init.6
        /usr/local/go/src/runtime/proc.go:293 +0x25

goroutine 3 [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000089780 sp=0xc000089760 pc=0x4354f6
runtime.goparkunlock(...)
        /usr/local/go/src/runtime/proc.go:387
runtime.bgsweep(0x0?)
        /usr/local/go/src/runtime/mgcsweep.go:278 +0x8e fp=0xc0000897c8 sp=0xc000089780 pc=0x42218e
runtime.gcenable.func1()
        /usr/local/go/src/runtime/mgc.go:178 +0x26 fp=0xc0000897e0 sp=0xc0000897c8 pc=0x417446
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000897e8 sp=0xc0000897e0 pc=0x460241
created by runtime.gcenable
        /usr/local/go/src/runtime/mgc.go:178 +0x6b

goroutine 4 [GC scavenge wait]:
runtime.gopark(0xc00001e070?, 0x584ae8?, 0x1?, 0x0?, 0x0?)
        /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000089f70 sp=0xc000089f50 pc=0x4354f6
runtime.goparkunlock(...)
        /usr/local/go/src/runtime/proc.go:387
runtime.(*scavengerState).park(0x63b060)
        /usr/local/go/src/runtime/mgcscavenge.go:400 +0x53 fp=0xc000089fa0 sp=0xc000089f70 pc=0x4200b3
runtime.bgscavenge(0x0?)
        /usr/local/go/src/runtime/mgcscavenge.go:628 +0x45 fp=0xc000089fc8 sp=0xc000089fa0 pc=0x420685
runtime.gcenable.func2()
        /usr/local/go/src/runtime/mgc.go:179 +0x26 fp=0xc000089fe0 sp=0xc000089fc8 pc=0x4173e6
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000089fe8 sp=0xc000089fe0 pc=0x460241
created by runtime.gcenable
        /usr/local/go/src/runtime/mgc.go:179 +0xaa

goroutine 5 [finalizer wait]:
runtime.gopark(0x1a0?, 0x63b4a0?, 0x60?, 0x78?, 0xc000088770?)
        /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000088628 sp=0xc000088608 pc=0x4354f6
runtime.runfinq()
        /usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc0000887e0 sp=0xc000088628 pc=0x416487
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000887e8 sp=0xc0000887e0 pc=0x460241
created by runtime.createfing
        /usr/local/go/src/runtime/mfinal.go:163 +0x45

@Razz4780 marked this pull request as ready for review on July 25, 2023, 15:03
@aviramha (Member)

I think the solution to the crash would be to lock all threads. Go creates threads and goroutines seamlessly under the hood, and since this is implemented in purego there might be other threads/goroutines running at the same time while we load the layer and install the hooks :)
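For illustration, the closest standard-library primitive is pinning the initializing goroutine to its OS thread while the layer is loaded; a minimal sketch (the package name and library path are assumptions, and this does not stop other already-running threads):

package goshim

import (
	"runtime"

	"github.com/ebitengine/purego"
)

func init() {
	// Keep the initializing goroutine on one OS thread while the layer is
	// loaded and the hooks are installed; other threads keep running.
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	if _, err := purego.Dlopen("libmirrord_layer.so", purego.RTLD_NOW|purego.RTLD_GLOBAL); err != nil {
		panic(err)
	}
}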

@aviramha (Member)

https://dave.cheney.net/2014/09/28/using-build-to-switch-between-debug-and-release
Can we use this to make the package take effect only in debug builds, or behind a feature flag we set?
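For illustration, a build-tag guard along the lines of that post could compile the shim's init only when a tag is passed; the file, package, and tag names here are hypothetical:

// goshim_mirrord.go, compiled only with: go build -tags mirrord
//go:build mirrord

package goshim

import "github.com/ebitengine/purego"

func init() {
	// The layer is loaded only in builds that opt in via the tag.
	if _, err := purego.Dlopen("libmirrord_layer.so", purego.RTLD_NOW|purego.RTLD_GLOBAL); err != nil {
		panic(err)
	}
}

// goshim_stub.go, so the default build compiles to an empty package with no init
//go:build !mirrord

package goshim

The older // +build syntax from the post works the same way; //go:build is the current form.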

@Razz4780 (Collaborator, Author)

> I think the solution to the crash would be to lock all threads […]

I don't think so. I tried to implement the same logic in a simple C program, and it fails the same way when compiled statically :(

#include <dlfcn.h>
#include <stdio.h>

int main() {
    dlerror();

    char * error = NULL;
    void * handle = dlopen("./libmirrord_layer.so", RTLD_NOW | RTLD_GLOBAL);
    error = dlerror();
    if (error) {
        fprintf(stderr, "ERROR %s:%d: %s\n", __FILE__, __LINE__, error);
        return 1;
    }

    void (*exit)(int) = (void (*)(int)) dlsym(handle, "exit");
    error = dlerror();
    if (error) {
        fprintf(stderr, "ERROR %s:%d: %s\n", __FILE__, __LINE__, error);
        return 1;
    }

    exit(0);

    return 1;
}
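For reference, a build along these lines reproduces it (the compiler, flags, and file names here are assumptions, not taken from the thread):

gcc -static -o repro repro.c -ldl
./repro    # segfaults once the layer's initialization code runs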

@aviramha (Member)

Loved the test :)
I guess then we rely on something that initializes via libc. The solution might be to link ourselves statically using musl.
BTW what's the error you're getting with the C example?

@Razz4780 (Collaborator, Author)

> I guess then we rely on something that initializes via libc. The solution might be to link ourselves statically using musl. BTW what's the error you're getting with the C example?

A segfault when layer initialization tries to access thread local storage.

Do you mean linking the layer statically? Wouldn't this end badly when the user binary itself links to libc dynamically? We would have two versions of libc working in the same address space.

@aviramha (Member)

> A segfault when layer initialization tries to access thread local storage.
>
> Do you mean linking the layer statically? Wouldn't this end badly when the user binary itself links to libc dynamically? […]

Maybe we can use this code to solve the problem: https://github.com/pfalcon/foreign-dlopen

@Razz4780 (Collaborator, Author)

> Maybe we can use this code to solve the problem: https://github.com/pfalcon/foreign-dlopen

The flow there is a bit complex; it requires compiling a helper binary against the target system. I'm not sure yet how to fit it into a Go package.

There's also one thing about this project that concerns me:

> 4. Run the sample: './foreign_dlopen_demo'. While it is static, it will dynamically load libc.so.6 and call printf() from it.

And:

$ ldd foreign_dlopen_demo
        linux-vdso.so.1 (0x00007ffc7c6f5000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc6f1400000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fc6f171d000)

@aviramha (Member)

tbh I was thinking of integrating it into our source code :) but let's hold this for now.
