
I'm trying to build an Android chatbot app (like ChatGPT) with Flutter/Dart. The AI model (a .gguf file) will be bundled INSIDE the APK itself (yes, the app will be huge), so the chatbot can run fully locally/offline.

I found that Dart can't read and use a .gguf model directly; it needs FFI bindings to llama.cpp (to be honest, I still don't fully get how that works). So I searched pub.dev for a package covering this particular need and found https://pub.dev/packages/llama_cpp

I managed to set up all the required shared libraries from llama.cpp (libggml_shared.so, libllama.so, libomp.so) inside the Flutter project, at myProject\android\app\src\main\jniLibs\arm64-v8a.

But when I build and run it on a physical Android device, I get this error:

E/flutter (30525): [ERROR:flutter/runtime/dart_isolate.cc(1402)] Unhandled exception:
E/flutter (30525): Invalid argument(s): Couldn't resolve native function 'llama_backend_init' in 'package:llama_cpp/src/lib_llama_cpp.dart' : No asset with id 'package:llama_cpp/src/lib_llama_cpp.dart' found. No available native assets. Attempted to fallback to process lookup. undefined symbol: llama_backend_init.
E/flutter (30525):
E/flutter (30525): #0      Native._ffi_resolver.#ffiClosure0 (dart:ffi-patch/ffi_patch.dart)
E/flutter (30525): #1      Native._ffi_resolver_function (dart:ffi-patch/ffi_patch.dart:1939:20)
E/flutter (30525): #2      llama_backend_init (package:llama_cpp/src/lib_llama_cpp.dart)
E/flutter (30525): #3      loadModel (package:llama_cpp/src/common.dart:116:13)
E/flutter (30525): #4      new NativeLLama (package:llama_cpp/src/native_llama.dart:46:28)
E/flutter (30525): #5      LlamaCpp._llamaIsolate (package:llama_cpp/llama_cpp.dart:175:19)
E/flutter (30525): #6      _delayEntrypointInvocation.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:317:17)
E/flutter (30525): #7      _RawReceivePort._handleMessage (dart:isolate-patch/isolate_patch.dart:193:12)
Syncing files to device SM A525F...

I checked whether libllama.so actually contains llama_backend_init with this command:

nm -D libllama.so | grep llama_backend_init

and the result is:

000000000017380c T llama_backend_init

which means, I assume, that llama_backend_init does exist inside libllama.so.
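Even with the symbol exported, I understand the library itself can still fail to load at runtime if one of its own dependencies is missing or loaded too late. A check like this (run against the same libllama.so) would list what it depends on:

```shell
# Hypothetical follow-up check: list the NEEDED dependencies of the library.
# If e.g. libggml_shared.so appears here but isn't resolvable on the device,
# loading libllama.so fails even though llama_backend_init is exported.
list_needed() {
  readelf -d "$1" | grep NEEDED
}
list_needed libllama.so
```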

I've searched the internet but found no clue. Maybe folks here know what is wrong with this.
