My local Ollama setup uses qwen2.5:7b as the LLM and bge-m3 as the embedding model. When I run an insert, it fails with the following error.
Key error: IndexError: index 0 is out of bounds for axis 0 with size 0
-------------original output---------------
INFO:httpx:HTTP Request: POST http://localhost:11434/api/embeddings "HTTP/1.1 200 OK"
Generating embeddings: 100%|██████████| 2/2 [00:22
    rag.insert(f.read())
  File "D:\download\ff\LightRAG-main\lightrag\lightrag.py", line 238, in insert
    return loop.run_until_complete(self.ainsert(string_or_strings))
  File "E:\Python310\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "D:\download\ff\LightRAG-main\lightrag\lightrag.py", line 286, in ainsert
    await self.chunks_vdb.upsert(inserting_chunks)
  File "D:\download\ff\LightRAG-main\lightrag\storage.py", line 112, in upsert
    results = self._client.upsert(datas=list_data)
  File "D:\download\ff\LightRAG-main\venv\lib\site-packages\nano_vectordb\dbs.py", line 100, in upsert
    self.__storage["matrix"][i] = update_d[f_VECTOR].astype(Float)
IndexError: index 0 is out of bounds for axis 0 with size 0
Process finished with exit code 1
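From the last frame, the IndexError happens when nano_vectordb tries to write a vector into row 0 of its stored matrix while that matrix has zero rows. A minimal NumPy sketch of the same failure mode (the array name and the 768 dimension here are purely illustrative, not nano_vectordb internals):

import numpy as np

# An array with zero rows along axis 0, like a vector-store matrix
# that ended up with no slots allocated for the incoming chunks.
matrix = np.zeros((0, 768), dtype=np.float32)
vector = np.random.rand(768).astype(np.float32)

# Writing into row 0 of a zero-row array raises exactly the reported error:
# IndexError: index 0 is out of bounds for axis 0 with size 0
matrix[0] = vector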
-------------------------configuration in the code-----------------
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=ollama_model_complete,
    llm_model_name="qwen2.5:7b",
    llm_model_max_async=4,
    llm_model_max_token_size=32768,
    llm_model_kwargs={"host": "http://localhost:11434", "options": {"num_ctx": 32768}},
    embedding_func=EmbeddingFunc(
        embedding_dim=768,
        max_token_size=8192,
        func=lambda texts: ollama_embedding(
            texts, embed_model="nomic-embed-text", host="http://localhost:11434"
        ),
    ),
)
with open("./book.txt", "r", encoding="utf-8") as f:
    rag.insert(f.read())  -------------------- this is the line that errors
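One thing I have not verified yet is whether the embedding dimension my local model actually returns matches the embedding_dim=768 passed to EmbeddingFunc. A rough check I could run (just a diagnostic sketch; I am assuming the same /api/embeddings endpoint that the 200 OK in the log above went to, and the model name and prompt are placeholders):

import requests

# Diagnostic sketch only: ask the local Ollama /api/embeddings endpoint
# (the one the httpx log above shows a 200 OK for) for a single embedding
# and compare its length against the configured embedding_dim.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "dimension check"},
)
resp.raise_for_status()
embedding = resp.json()["embedding"]

# If this prints anything other than 768, the vectors being inserted will
# not match the matrix shape the vector store was created with.
print(len(embedding))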
What is causing this? Is something misconfigured, or is there a problem in the code?