Use gpt4all-lora-quantized.bin instead of gpt4all-lora-quantized-new.bin - see #702 (comment)
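With that fix applied, the invocation from the log would presumably become the following (a sketch only; it assumes the same model paths as the original report, with just the llama model filename swapped):

```shell
# Same command as in the report, but pointing -ml at the non-"-new" model file:
./talk-llama -mw ./models/ggml-small.en.bin \
             -ml ../llama.cpp/models/gpt4all-7B/gpt4all-lora-quantized.bin \
             -p "Georgi" -t 8
```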
Wow, it's working! Thank you, bootkernel!
./talk-llama -mw ./models/ggml-small.en.bin -ml ../llama.cpp/models/gpt4all-7B/gpt4all-lora-quantized-new.bin -p "Georgi" -t 8
whisper_init_from_file_no_state: loading model from './models/ggml-small.en.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51864
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 768
whisper_model_load: n_audio_head = 12
whisper_model_load: n_audio_layer = 12
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 768
whisper_model_load: n_text_head = 12
whisper_model_load: n_text_layer = 12
whisper_model_load: n_mels = 80
whisper_model_load: f16 = 1
whisper_model_load: type = 3
whisper_model_load: mem required = 608.00 MB (+ 16.00 MB per decoder)
whisper_model_load: adding 1607 extra tokens
whisper_model_load: model ctx = 464.56 MB
whisper_model_load: model size = 464.44 MB
whisper_init_state: kv self size = 15.75 MB
whisper_init_state: kv cross size = 52.73 MB
llama_model_load: loading model from '../llama.cpp/models/gpt4all-7B/gpt4all-lora-quantized-new.bin' - please wait ...
llama_model_load: invalid model file '../llama.cpp/models/gpt4all-7B/gpt4all-lora-quantized-new.bin' (bad magic)
llama_init_from_file: failed to load model
main: processing, 8 threads, lang = en, task = transcribe, timestamps = 0 ...
init: found 1 capture devices:
init: - Capture device #0: 'MacBook Pro Microphone'
init: attempt to open default capture device ...
init: obtained spec for input device (SDL Id = 2):
init: - sample rate: 16000
init: - format: 33056 (required: 33056)
init: - channels: 1 (required: 1)
init: - samples per frame: 1024
Segmentation fault: 11
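For context on the "bad magic" error above: loaders in this family read the first few bytes of the model file and compare them against an expected format tag, and "bad magic" means that comparison failed, i.e. the file is in a different (here, newer) format than the loader understands. A minimal sketch of that check from the shell, using dummy files and the ASCII tag "ggml" purely as an illustration (the actual constant and versioning depend on the llama.cpp file-format revision):

```shell
# Write two dummy files: one whose first four bytes are the illustrative
# tag "ggml", and one with different leading bytes.
printf 'ggml-rest-of-header' > good.bin
printf 'XXXX-rest-of-header' > bad.bin

# Print the first four bytes of a file as hex, using POSIX od.
magic() { od -An -tx1 -N4 "$1" | tr -d ' \n'; echo; }

magic good.bin   # 67676d6c  ("ggml")
magic bad.bin    # 58585858  (anything else -> the loader reports bad magic)

rm -f good.bin bad.bin
```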