mmap failed for llama2-7b on windows #165

Open
tarunmcom opened this issue Jul 28, 2023 · 4 comments

@tarunmcom

Successfully exported llama2-7b to a .bin file using export_meta_llama_bin.py, but I'm getting an error while running the file: it shows the message "mmap failed".

@richinseattle
Contributor

I will look tomorrow if someone else doesn't get to it.

@tarunmcom
Author

Thanks for replying. Some more info:

  1. The issue is with the converted llama2 bin file; other files, e.g. stories42M.bin, work fine on Windows.
  2. The issue only appears when running on Windows; the same llama2 bin file (model.bin) works on Ubuntu.
  3. I tried to debug in Visual Studio, and even the line `file_size = ftell(file); // get the file size, in bytes` is returning -1.

@richinseattle
Contributor

richinseattle commented Jul 30, 2023

One issue is that the `long` type on Win64 is still 32-bit. Really, the codebase should switch to using stdint, since there is another patch to fix integer overflows in a different area of the code. I will wait until that other size patch gets committed before making one that works for Windows.
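
For reference, here is a quick way to see the underlying data-model difference (this snippet is my illustration, not part of llama2.c):

```c
/* Illustrative check: print the width of `long` on the current platform.
 * On MSVC/Win64 (LLP64) long is 4 bytes with LONG_MAX = 2147483647, so
 * ftell(), which returns long, cannot report an offset of 2 GiB or more;
 * on 64-bit Linux (LP64) long is 8 bytes, which is why the same model.bin
 * works on Ubuntu. */
#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("sizeof(long) = %zu bytes, LONG_MAX = %ld\n", sizeof(long), LONG_MAX);
    return 0;
}
```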

This specific change can be made minimally invasive for cross-platform support with the following in win.h, plus changing `file_size` to the `ssize_t` type in run.c:

#define ssize_t __int64
#define ftell _ftelli64
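
For context, a rough sketch of how the size computation in run.c would then behave with those defines in effect; this is my illustration under the names quoted above, not the actual patch in #179, and the `checkpoint_size` helper is hypothetical:

```c
#include <stdio.h>
#include <stdlib.h>

#ifdef _WIN32
// from win.h, as proposed above
#define ssize_t __int64      // MSVC has no ssize_t; __int64 is 64 bits wide
#define ftell _ftelli64      // 64-bit ftell, so large offsets no longer come back as -1
#else
#include <sys/types.h>       // ssize_t on POSIX
#endif

// Illustrative helper mirroring the `file_size = ftell(file)` pattern quoted earlier.
ssize_t checkpoint_size(const char *path) {
    FILE *file = fopen(path, "rb");
    if (!file) { fprintf(stderr, "Couldn't open %s\n", path); exit(EXIT_FAILURE); }
    fseek(file, 0, SEEK_END);        // offset 0 still fits in a long, so plain fseek is fine here
    ssize_t size = ftell(file);      // expands to _ftelli64 on Windows: a 64-bit result
    fclose(file);
    return size;                     // well over 2 GiB for the llama2-7b checkpoint
}
```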

@richinseattle
Contributor

Fixed in #179
