
Shared Memory support #147

Closed
ZLangJIT opened this issue Sep 23, 2024 · 28 comments
Labels
question Further information is requested

Comments

ZLangJIT commented Sep 23, 2024

Are there any plans to add shared memory support in rvvm?

e.g. rvvm -shm /path/to/named/shared/memory

e.g., for each platform, could we open the named shared memory and then map it via IOMMU so it is exposed as a shared memory region in the guest?

@ZLangJIT ZLangJIT changed the title Shared Memory Shared Memory support Sep 23, 2024

LekKit commented Sep 23, 2024

Well, if using the librvvm API this is already doable: just create a memory mapping segment via rvvm_mmio_dev_t. But no cmdline feature is available yet.

What use case would this have? Consider that without a paired device tree description, the guest knows nothing about the shared memory region (unless you probe it manually), and it must be some device that a driver exists for.

@LekKit LekKit added the question Further information is requested label Sep 24, 2024

ZLangJIT commented Sep 30, 2024

> Well, if using the librvvm API this is already doable: just create a memory mapping segment via rvvm_mmio_dev_t. But no cmdline feature is available yet.
>
> What use case would this have? Consider that without a paired device tree description, the guest knows nothing about the shared memory region (unless you probe it manually), and it must be some device that a driver exists for.

This would be related to a manager similar to VirtualBox / Docker, but for rvvm instead.


LekKit commented Sep 30, 2024

But how exactly is that shared memory going to be used by the guest?


ZLangJIT commented Sep 30, 2024

> But how exactly is that shared memory going to be used by the guest?

It would be used by kernel modules/drivers (e.g., guest-to-host graphics/audio, probably).


LekKit commented Sep 30, 2024

But the drivers need to know what device that is! It is not as simple as adding a shared memory segment in the middle of nowhere. You need to implement actual device emulation.

If you want to pass through host devices, we already have VFIO, but it requires root on the host and special configuration.


ZLangJIT commented Sep 30, 2024

> But the drivers need to know what device that is! It is not as simple as adding a shared memory segment in the middle of nowhere. You need to implement actual device emulation.
>
> If you want to pass through host devices, we already have VFIO, but it requires root on the host and special configuration.

true

e.g., let's say a driver and host are set up in such a way that the host instructs rvvm to create a shared memory device named "/dev/foo", which the guest driver expects to read from if it exists

such a setup would require coordination between the host (the shell executing rvvm) and the guest (something that supplies the device, or opens the supplied device if the kernel can supply it automatically)

> But the drivers need to know what device that is! It is not as simple as adding a shared memory segment in the middle of nowhere. You need to implement actual device emulation.

for the above, let's assume rvvm --shared /dev/shm/foo, and let's assume we build rvvm with support for a shared memory device that accepts --shared paths to shared memory, and let's assume it exposes it as some device the kernel can natively support (or as a dedicated kernel-module device driver specifically for such an exposed device)


ZLangJIT commented Sep 30, 2024

hmm, it might also be possible to utilize shared memory for malloc/realloc/free such that we directly use host memory instead of pre-allocated memory, though I'm not sure how this might interact with total/free memory reporting, virtual address spaces, kernel boot-up, memory tools such as (tracing/conservative) garbage collectors, valgrind, and so on


LekKit commented Sep 30, 2024

No, it doesn't work that way; user emulation should be used then.

For deallocating unused guest memory, virtio-balloon free page reporting should be implemented, using the vma_clean() API internally.


ZLangJIT commented Sep 30, 2024

hmm alright

so, according to https://lwn.net/Articles/808807/

would it be safe to assume that this would transform -m xyzM such that it specifies the amount of memory to reserve, and then virtio-balloon and MMIO would commit/free/remap memory on demand as needed by the bootloader/kernel?

additionally, can we assume this works on Windows/Unix hosts if they support virtio-balloon?

ZLangJIT commented:

in regards to host dynamic memory: probably related to #76


LekKit commented Sep 30, 2024

virtio-balloon is a guest device, and has two separate modes of operation:

  • Actual "ballooning" (as implied by the original device name), where the host forces the guest to "reserve" its physical pages and then reclaims those pages on the host. It's a weird, old mode which is unlikely to ever be implemented in RVVM because it's just badly designed.
  • Free page reporting: a newer mode, where the guest gets to report pages which are actually unused, and the VM can then zero them without any harm to the guest.

The needed host-specific functionality is already implemented in src/vma_ops.c, specifically the vma_clean() function. It works on any OS, including Windows. It is also already used to release unused JIT heap space. So it's a matter of writing a virtio-balloon device emulation driver.

It is a mostly redundant feature if we consider KSM, but it will surely help non-Linux hosts.


ZLangJIT commented Sep 30, 2024

hmm, so to be clear, this would allow, for example, something like

RVVM
  - KERNEL
    - MALLOC
      - request memory from the host and remap as contiguous via MMIO
    - FREE
      - release memory back to the host

e.g. the -m 100M option would maybe become redundant, since we would be limited by the host's free memory instead?


LekKit commented Oct 1, 2024

> e.g. the -m 100M option would maybe become redundant, since we would be limited by the host's free memory instead?

Well, it would still be a good idea to cap the guest at some memory limit, otherwise it can crash or lag the host by allocating too much memory.

Additionally, the -m 8G option currently doesn't mean that 8 GiB of RAM will be allocated on the host immediately on start. It is lazily allocated; afterwards it is not deallocated back, but it can be merged with other guests on a Linux host via KSM.


ZLangJIT commented Oct 1, 2024

yeah, the VM (both qemu and rvvm) will RESERVE the memory but not COMMIT it upfront (e.g. if we pass -m 8G it will reserve 8 GB, but it will not commit 8 GB up front as if we had malloc'ed 8 GB)


ZLangJIT commented Oct 1, 2024

> Well, it would still be a good idea to cap the guest at some memory limit, otherwise it can crash or lag the host by allocating too much memory.

which is what -m would do, right? e.g. -m 1G would cap the memory at 1 GB (but we can still crash if we don't have 1 GB of free host memory and we try to allocate past that)

e.g. if we pass -m 4T and then attempt to allocate 1 TB worth of 100 MB chunks, the host would eventually OOM or crash


ZLangJIT commented Oct 1, 2024

> virtio-balloon is a guest device, and has two separate modes of operation:
>
> * Actual "ballooning" (as implied by the original device name), where the host forces the guest to "reserve" its physical pages and then reclaims those pages on the host. It's a weird, old mode which is unlikely to ever be implemented in RVVM because it's just badly designed.
> * Free page reporting: a newer mode, where the guest gets to report pages which are **actually** unused, and the VM can then zero them without any harm to the guest.
>
> The needed host-specific functionality is already implemented in src/vma_ops.c, specifically the vma_clean() function. It works on any OS, including Windows. It is also already used to release unused JIT heap space. So it's a matter of writing a virtio-balloon device emulation driver.
>
> It is a mostly redundant feature if we consider KSM, but it will surely help non-Linux hosts.

which is what makes it capable of returning unused pages back to the OS

e.g., without the balloon, its current memory usage would be its peak usage for the lifetime of the process, due to the VM being unable to tell which parts of the memory it has been asked for are unused (e.g. freed)

e.g. free(malloc(10000))

  • when malloc is done, the VM knows only that it has to commit however much more memory is needed to satisfy the kernel's request, should it have no memory available to reuse
  • when free is done, the VM does not know if this freed memory will be reused or not, so it just keeps it around, and the kernel will reuse it as reusable memory instead of requesting more
  • when a balloon free is done, the VM is notified that this memory will no longer be reused and can thus return it back to the host, which itself may reuse it for further allocations


ZLangJIT commented Oct 1, 2024

hmm alright

> so, according to https://lwn.net/Articles/808807/
>
> would it be safe to assume that this would transform -m xyzM such that it specifies the amount of memory to reserve, and then virtio-balloon and MMIO would commit/free/remap memory on demand as needed by the bootloader/kernel?
>
> additionally, can we assume this works on Windows/Unix hosts if they support virtio-balloon?

we should probably move this to #76

ihateradiohead commented:

[image]


ZLangJIT commented Oct 7, 2024

I'm thinking something like this; I'll work on it some more next week.

Linux SysV shm API note: https://dev.to/0xog_pg/using-shared-memory-in-linux-1p62

Note: Termux provides a shm wrapper for ashmem.

#if ANDROID
           "    -shm path name   Map a shared memory region from <path> to guest </dev/rvvm_shm/name>\n"
#endif
           "    -v, -verbose     Enable verbose logging\n"
        } else if (cmp_arg(arg_name, "vfio_pci")) {
            if (!pci_vfio_init_auto(machine, arg_val)) return false;
        }
#if ANDROID
        else if (cmp_arg(arg_name, "-shm")) {
            const char* path = arg_val;
            i += arg_size;
            if (i == argc) {
                rvvm_error("-shm expects <path> <name>");
                return false;
            }
            arg_size = get_arg(argv + i, &arg_name, &arg_val);
            const char* shm_name = arg_val;
            if (!shm_init(machine, path, shm_name)) {
                rvvm_error("Failed to attach shm path \"%s\" with guest name /dev/rvvm_shm/%s", path, shm_name);
                return false;
            }
        }
#endif
/*
shm.h - Memory-Mapped Shared Memory
Copyright (C) 2024  ZLangJIT <github.com/ZLangJIT>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.
*/

#ifndef RVVM_SHM_H
#define RVVM_SHM_H

#include "rvvmlib.h"

//! SHM context description
typedef struct {
    void*     buffer; //!< Buffer in host memory
    uint64_t  size;  //!< Buffer size in host memory
} shm_ctx_t;

/*
 * SHM API
 */

//! Gets the shared memory size
static inline size_t shm_size(const shm_ctx_t* shm)
{
    return shm->size;
}

//! \brief   Attach shared memory context to the machine.
PUBLIC rvvm_mmio_dev_t* shm_init(rvvm_machine_t* machine, const char * path, const char * dev_name);

#endif
/*
shm.c - Memory-Mapped Shared Memory
Copyright (C) 2024  ZLangJIT <github.com/ZLangJIT>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.
*/

#include "shm.h"

#include "utils.h"

#ifdef USE_FDT
#include "fdtlib.h"
#endif

#if WIN32
#include "shm_windows.h"
#elif ANDROID
#include "shm_android.h"
#else
// man shm_open
#include <sys/mman.h>
#endif

static void shm_remove(rvvm_mmio_dev_t* device)
{
    // The machine owns the device struct itself; free only our context
    free(device->data);
}

static rvvm_mmio_type_t shm_dev_type = {
    .name = "rvvm_shm",
    .remove = shm_remove,
};

PUBLIC rvvm_mmio_dev_t* shm_init(rvvm_machine_t* machine, const char * path, const char * dev_name)
{
    UNUSED(path);
    UNUSED(dev_name);
#if WIN32
    // TODO: open and map the named shared memory (Windows)
#elif ANDROID
    // TODO: open and map the named shared memory (ashmem via Termux wrapper)
#else
    // TODO: open and map the named shared memory (shm_open + mmap)
#endif
    shm_ctx_t* shm_ctx = safe_new_obj(shm_ctx_t);
    shm_ctx->buffer = NULL;
    shm_ctx->size = 0;
    // Map the buffer into guest physical memory
    rvvm_mmio_dev_t shm_region = {
        .mapping = shm_ctx->buffer,
        .size = shm_size(shm_ctx),
        .data = shm_ctx,
        .type = &shm_dev_type,
    };
    rvvm_mmio_dev_t* mmio = rvvm_attach_mmio(machine, &shm_region);
    if (mmio == NULL) return mmio;
// #ifdef USE_FDT
//     struct fdt_node* fb_fdt = fdt_node_create_reg("framebuffer", addr);
//     fdt_node_add_prop_reg(fb_fdt, "reg", addr, fb_region.size);
//     fdt_node_add_prop_str(fb_fdt, "compatible", "simple-framebuffer");
//     switch (fb->format) {
//         case RGB_FMT_R5G6B5:
//             fdt_node_add_prop_str(fb_fdt, "format", "r5g6b5");
//             break;
//         case RGB_FMT_R8G8B8:
//             fdt_node_add_prop_str(fb_fdt, "format", "r8g8b8");
//             break;
//         case RGB_FMT_A8R8G8B8:
//             fdt_node_add_prop_str(fb_fdt, "format", "a8r8g8b8");
//             break;
//         case RGB_FMT_A8B8G8R8:
//             fdt_node_add_prop_str(fb_fdt, "format", "a8b8g8r8");
//             break;
//         default:
//             rvvm_warn("Unknown RGB format in framebuffer_init()!");
//             break;
//     }
//     fdt_node_add_prop_u32(fb_fdt, "width",  fb->width);
//     fdt_node_add_prop_u32(fb_fdt, "height", fb->height);
//     fdt_node_add_prop_u32(fb_fdt, "stride", framebuffer_stride(fb));

//     fdt_node_add_child(rvvm_get_fdt_soc(machine), fb_fdt);
// #endif
    return mmio;
}


LekKit commented Oct 7, 2024

Is it intended for SysV SHM IPC or for plain memory mapped files?

ZLangJIT commented:

> Is it intended for SysV SHM IPC or for plain memory mapped files?

I would like both, if possible.


ZLangJIT commented Oct 15, 2024

OK, I have managed to map the files, but I cannot seem to find them in the rvvm guest:

INFO: Attached MMIO device at 0x40004000, type "nvme"
INFO: mapping path 'images/android-logo-mask.png' via mmap
INFO: mapped path 'images/android-logo-mask.png' at address 0x77ab0f6000 with size 12104
INFO: Attached MMIO device at 0x00000000, type "rvvm_shm_data_file"

...

sh-5.2# find / | grep --color rvvm
sh-5.2# find / | grep --color shm
/dev/shm
/proc/sys/kernel/shm_next_id
/proc/sys/kernel/shm_rmid_forced
/proc/sys/kernel/shmall
/proc/sys/kernel/shmmax
/proc/sys/kernel/shmmni
/proc/sys/vm/hugetlb_shm_group
/proc/sysvipc/shm
/sys/kernel/slab/shmem_inode_cache
/sys/kernel/slab/shmem_inode_cache/total_objects
/sys/kernel/slab/shmem_inode_cache/cpu_slabs
/sys/kernel/slab/shmem_inode_cache/objects
/sys/kernel/slab/shmem_inode_cache/objects_partial
/sys/kernel/slab/shmem_inode_cache/cpu_partial
/sys/kernel/slab/shmem_inode_cache/validate
/sys/kernel/slab/shmem_inode_cache/min_partial
/sys/kernel/slab/shmem_inode_cache/poison
/sys/kernel/slab/shmem_inode_cache/red_zone
/sys/kernel/slab/shmem_inode_cache/slabs
/sys/kernel/slab/shmem_inode_cache/destroy_by_rcu
/sys/kernel/slab/shmem_inode_cache/sanity_checks
/sys/kernel/slab/shmem_inode_cache/align
/sys/kernel/slab/shmem_inode_cache/aliases
/sys/kernel/slab/shmem_inode_cache/store_user
/sys/kernel/slab/shmem_inode_cache/trace
/sys/kernel/slab/shmem_inode_cache/reclaim_account
/sys/kernel/slab/shmem_inode_cache/order
/sys/kernel/slab/shmem_inode_cache/object_size
/sys/kernel/slab/shmem_inode_cache/shrink
/sys/kernel/slab/shmem_inode_cache/hwcache_align
/sys/kernel/slab/shmem_inode_cache/objs_per_slab
/sys/kernel/slab/shmem_inode_cache/partial
/sys/kernel/slab/shmem_inode_cache/slabs_cpu_partial
/sys/kernel/slab/shmem_inode_cache/ctor
/sys/kernel/slab/shmem_inode_cache/slab_size
/sys/firmware/devicetree/base/soc/shm@755ef57000
/sys/firmware/devicetree/base/soc/shm@755ef57000/size
/sys/firmware/devicetree/base/soc/shm@755ef57000/name
/usr/lib/libxcb-shm.so
/usr/lib/libxcb-shm.so.0
/usr/lib/libxcb-shm.so.0.0.0
/usr/lib/libxshmfence.so
/usr/lib/libxshmfence.so.1
/usr/lib/libxshmfence.so.1.0.0
/usr/lib/python3.12/lib-dynload/_posixshmem.cpython-312-riscv64-linux-gnu.so
/usr/share/bash-completion/completions/sshmitm
/usr/share/xcb/shm.xml
sh-5.2#
        else if (cmp_arg(arg_name, "shm_id")) {
        	if (shm_data_path == NULL && shm_exe_path == NULL) {
        		rvvm_error("Please specify a -shm_data or -shm_exe argument before this -shm_id argument");
        		return false;
        	}
        	shm_id = arg_val;
            if (shm_exe_path == NULL ? !shm_init_data(machine, shm_data_path, shm_id) : !shm_init_exe(machine, shm_exe_path, shm_id)) {
                rvvm_error("Failed to attach shm path \"%s\" with guest name /dev/rvvm_shm/%s", shm_exe_path == NULL ? shm_data_path : shm_exe_path, shm_id);
                return false;
            }
            shm_data_path = NULL;
            shm_exe_path = NULL;
            shm_id = NULL;
        }
        else if (cmp_arg(arg_name, "shm_data")) {
        	if (shm_data_path != NULL) {
        		rvvm_error("A separator -shm_id was not encountered between the last -shm_data argument and this shm_data argument");
        		return false;
        	}
        	else if (shm_exe_path != NULL) {
        		rvvm_error("A separator -shm_id was not encountered between the last -shm_exe argument and this shm_data argument");
        		return false;
        	}
            shm_data_path = arg_val;
            shm_exe_path = NULL;
        }
        else if (cmp_arg(arg_name, "shm_exe")) {
        	if (shm_data_path != NULL) {
        		rvvm_error("A separator -shm_id was not encountered between the last -shm_data argument and this shm_exe argument");
        		return false;
        	}
        	else if (shm_exe_path != NULL) {
        		rvvm_error("A separator -shm_id was not encountered between the last -shm_exe argument and this shm_exe argument");
        		return false;
        	}
            shm_data_path = NULL;
            shm_exe_path = arg_val;
        }
/*
shm.h - Memory-Mapped Shared Memory
Copyright (C) 2024  ZLangJIT <github.com/ZLangJIT>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.
*/

#ifndef RVVM_SHM_H
#define RVVM_SHM_H

#include "rvvmlib.h"

//! SHM context description
typedef struct {
    void*       buffer; //!< Buffer in host memory
    uint64_t    size;   //!< Buffer size in host memory
    const char* source; //!< Host path the buffer was mapped from
} shm_ctx_t;

/*
 * SHM API
 */

//! \brief   Attach shared memory context to the machine.
PUBLIC rvvm_mmio_dev_t* shm_init_named(rvvm_machine_t* machine, const char * global_name, const char * dev_name);
PUBLIC rvvm_mmio_dev_t* shm_init_data(rvvm_machine_t* machine, const char * path, const char * dev_name);
PUBLIC rvvm_mmio_dev_t* shm_init_exe(rvvm_machine_t* machine, const char * path, const char * dev_name);

#endif
/*
shm.c - Memory-Mapped Shared Memory
Copyright (C) 2024  ZLangJIT <github.com/ZLangJIT>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.
*/

#include "shm.h"

#include "utils.h"

#ifdef USE_FDT
#include "fdtlib.h"
#endif

#if _WIN32

#include <fileapi.h> // CreateFile
#include <memoryapi.h> // MapViewOfFile, UnmapViewOfFile

#else

#include <fcntl.h> // open
#include <unistd.h> // close
#include <sys/stat.h> // stat
#include <sys/mman.h> // mmap

#endif

PUBLIC rvvm_mmio_dev_t* shm_init_(rvvm_machine_t* machine, rvvm_mmio_dev_t* shm_region, const char * dev_name);

static void shm_remove_file(rvvm_mmio_dev_t* device)
{
    shm_ctx_t* shm_ctx = (shm_ctx_t*)device->data;
#if _WIN32
    rvvm_info("unmapping path '%s' via UnmapViewOfFile", shm_ctx->source);
    UnmapViewOfFile(shm_ctx->buffer);
#else
    rvvm_info("unmapping path '%s' via munmap", shm_ctx->source);
	munmap(shm_ctx->buffer, shm_ctx->size);
#endif
    free(shm_ctx);
}

static rvvm_mmio_type_t shm_data_dev_type = {
    .name = "rvvm_shm_data_file",
    .remove = shm_remove_file,
};

static rvvm_mmio_type_t shm_exe_dev_type = {
    .name = "rvvm_shm_exe_file",
    .remove = shm_remove_file,
};

PUBLIC rvvm_mmio_dev_t* shm_init_named(rvvm_machine_t* machine, const char * global_name, const char * dev_name)
{
	UNUSED(machine);
	UNUSED(global_name);
	UNUSED(dev_name);
	return NULL;
}

PUBLIC rvvm_mmio_dev_t* shm_init_data(rvvm_machine_t* machine, const char * path, const char * dev_name)
{
	shm_ctx_t* shm_ctx = safe_new_obj(shm_ctx_t);
	shm_ctx->source = path;
#if _WIN32
    rvvm_info("mapping path '%s' via MapViewOfFile", path);
	HANDLE file_handle = CreateFileA(path,
		GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL
	);
	if (file_handle == INVALID_HANDLE_VALUE) {
		free(shm_ctx);
		return NULL;
	}
	shm_ctx->size = GetFileSize(file_handle, NULL);
	HANDLE file_mapping = CreateFileMappingA(file_handle, NULL, PAGE_READONLY, 0, 0, NULL);
	CloseHandle(file_handle);
	if (file_mapping == NULL) {
		free(shm_ctx);
		return NULL;
	}
	shm_ctx->buffer = MapViewOfFile(file_mapping, FILE_MAP_READ, 0, 0, shm_ctx->size);
	CloseHandle(file_mapping);
#else
    rvvm_info("mapping path '%s' via mmap", path);
	int file_handle = open(path, O_RDONLY);
	if (file_handle == -1) {
		free(shm_ctx);
		rvvm_error("Failed to map file %s (open() failed)", path);
		return NULL;
	}
	struct stat file_info;
	if (fstat(file_handle, &file_info) == -1) {
		close(file_handle);
		free(shm_ctx);
		rvvm_error("Failed to map file %s (fstat() failed)", path);
		return NULL;
	}
	shm_ctx->size = file_info.st_size;
	shm_ctx->buffer = mmap(NULL, shm_ctx->size, PROT_READ, MAP_PRIVATE, file_handle, 0);
	close(file_handle);
	if (shm_ctx->buffer == MAP_FAILED) {
		free(shm_ctx);
		rvvm_error("Failed to map file %s (MAP_FAILED)", path);
		return NULL;
	}
#endif
	// Map the buffer into physical memory
	rvvm_info("mapped path '%s' at address %p with size %zu", path, shm_ctx->buffer, shm_ctx->size);
	rvvm_mmio_dev_t shm_region = {
		.mapping = shm_ctx->buffer,
		.size = shm_ctx->size,
		.data = shm_ctx,
		.type = &shm_data_dev_type,
	};
	UNUSED(dev_name);
	rvvm_mmio_dev_t* mmio = rvvm_attach_mmio(machine, &shm_region);
	if (mmio == NULL) return mmio;
	#ifdef USE_FDT
	struct fdt_node* fb_fdt = fdt_node_create("rvvm_shm_data_file");
	fdt_node_add_prop_str(fb_fdt, "source", shm_ctx->source);
	fdt_node_add_prop(fb_fdt, "buffer", shm_region.mapping, shm_region.size);
	fdt_node_add_prop_u64(fb_fdt, "size", shm_region.size);
	fdt_node_add_child(rvvm_get_fdt_soc(machine), fb_fdt); // was missing: without this the node never appears in the guest
	#endif
	return mmio;
}

PUBLIC rvvm_mmio_dev_t* shm_init_exe(rvvm_machine_t* machine, const char * path, const char * dev_name)
{
	shm_ctx_t* shm_ctx = safe_new_obj(shm_ctx_t);
    shm_ctx->source = path;
#if _WIN32
    rvvm_info("mapping path '%s' via MapViewOfFile", path);
	HANDLE file_handle = CreateFileA(path,
		GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL
	);
	if (file_handle == INVALID_HANDLE_VALUE) {
		free(shm_ctx);
		return NULL;
	}
	shm_ctx->size = GetFileSize(file_handle, NULL);
	HANDLE file_mapping = CreateFileMappingA(file_handle, NULL, PAGE_EXECUTE_READ, 0, 0, NULL);
	CloseHandle(file_handle);
	if (file_mapping == NULL) {
		free(shm_ctx);
		return NULL;
	}
	shm_ctx->buffer = MapViewOfFile(file_mapping, FILE_MAP_READ | FILE_MAP_EXECUTE, 0, 0, shm_ctx->size);
	CloseHandle(file_mapping);
#else
    rvvm_info("mapping path '%s' via mmap", path);
	int file_handle = open(path, O_RDONLY);
	if (file_handle == -1) {
		free(shm_ctx);
		rvvm_error("Failed to map file %s (open() failed)", path);
		return NULL;
	}
	struct stat file_info;
	if (fstat(file_handle, &file_info) == -1) {
		close(file_handle);
		free(shm_ctx);
		rvvm_error("Failed to map file %s (fstat() failed)", path);
		return NULL;
	}
	shm_ctx->size = file_info.st_size;
	shm_ctx->buffer = mmap(NULL, shm_ctx->size, PROT_READ | PROT_EXEC, MAP_PRIVATE, file_handle, 0);
	close(file_handle);
	if (shm_ctx->buffer == MAP_FAILED) {
		free(shm_ctx);
		rvvm_error("Failed to map file %s (MAP_FAILED)", path);
		return NULL;
	}
#endif
	// Map the buffer into physical memory
	rvvm_info("mapped path '%s' at address %p with size %zu", path, shm_ctx->buffer, shm_ctx->size);
	rvvm_mmio_dev_t shm_region = {
		.mapping = shm_ctx->buffer,
		.size = shm_ctx->size,
		.data = shm_ctx,
		.type = &shm_exe_dev_type,
	};

	UNUSED(dev_name);
	rvvm_mmio_dev_t* mmio = rvvm_attach_mmio(machine, &shm_region);
	if (mmio == NULL) return mmio;
	#ifdef USE_FDT
	struct fdt_node* fb_fdt = fdt_node_create("rvvm_shm_exe_file");
	fdt_node_add_prop_str(fb_fdt, "source", shm_ctx->source);
	fdt_node_add_prop(fb_fdt, "buffer", shm_region.mapping, shm_region.size);
	fdt_node_add_prop_u64(fb_fdt, "size", shm_region.size);
	fdt_node_add_child(rvvm_get_fdt_soc(machine), fb_fdt);
	#endif
	return mmio;
}


ZLangJIT commented Oct 16, 2024

	rvvm_info("mapped path '%s' at address %p with size %zu", path, shm_ctx->buffer, shm_ctx->size);
	rvvm_mmio_dev_t shm_region = {
		.mapping = shm_ctx->buffer,
		.size = shm_ctx->size,
		.data = shm_ctx,
		.type = &shm_data_dev_type,
	};
	UNUSED(dev_name);
	rvvm_mmio_dev_t* mmio = rvvm_attach_mmio(machine, &shm_region);
	if (mmio == NULL) return mmio;
	#ifdef USE_FDT
	rvvm_info("Exposing via Device Tree /sys/firmware/devicetree/base/soc/rvvm_shm_data_file@%p", shm_ctx->buffer);
    struct fdt_node* memory = fdt_node_create_reg("rvvm_shm_data_file", (uint64_t)shm_region.mapping);
    fdt_node_add_prop_str(memory, "device_type", "memory");
	fdt_node_add_prop_str(memory, "source", shm_ctx->source);
    fdt_node_add_prop(memory, "buffer", shm_region.mapping, (uint32_t)shm_region.size);
	fdt_node_add_prop_u64(memory, "size", shm_region.size);
    fdt_node_add_child(rvvm_get_fdt_soc(machine), memory);
	#endif
	return mmio;
# file ./sys/firmware/devicetree/base/soc/rvvm_shm_data_file@7a808ba000/buffer
./sys/firmware/devicetree/base/soc/rvvm_shm_data_file@7a808ba000/buffer: PNG image data, 512 x 128, 8-bit/color RGBA, non-interlaced

ok, we can now expose it via the device tree

not exactly shared memory, but it's usable

ZLangJIT commented:

hmm, it appears to stall if we attempt to expose any file larger than 1040944 bytes, or to expose a 1040944-byte file twice (-shm_data f.i -shm_id foo -shm_data f.i -shm_id foo)

however, we can expose small files many times:

cat images/android-logo-mask.png > f.i2 ; ./boot_rvvm_disk.sh "" "-shm_data f.i2 -shm_id foo -shm_data f.i2 -shm_id foo -shm_data f.i2 -shm_id foo -shm_data f.i2 -shm_id foo -shm_data f.i2 -shm_id foo -shm_data f.i2 -shm_id foo -shm_data f.i2 -shm_id foo -shm_data f.i2 -shm_id foo -shm_data f.i2 -shm_id foo "

though at the moment shm_id has no effect, as each file is exposed under an rvvm_<type>@<addr> id

ZLangJIT commented:

additionally, we consume extra memory proportional to how many files we expose and to the total size of all exposed files

ZLangJIT commented:

hmm: INFO: Generated DTB at 0xb1e02890, size 2086764

a bootable one (where it doesn't stall) is: INFO: Generated DTB at 0xb1f00b50, size 1045676

ZLangJIT commented:

hmm, https://patchwork.kernel.org/project/qemu-devel/patch/20190322073555.20889-1-ppandit@redhat.com/ seems to mention that the device tree has a fixed max size of ~1 MB to 2 MB


LekKit commented Oct 21, 2024

What are you trying to solve with this?
