WIP: Metadata extraction using LLM API service #29

Draft
osma wants to merge 11 commits into main
Conversation


osma (Contributor) commented Jul 31, 2024

This draft PR contains an initial rough implementation of the LLM-based metadata extraction described in #21. Because some functionality is still missing and there are uncertainties in the implementation, I'm leaving it as a Draft PR.

The initial prototyping was done in this Jupyter notebook, which has essentially the same functionality; in this PR, the code from the notebook has been retrofitted into the Meteor codebase.

How it works

This code adds a new LLMExtractor class that performs the main work of metadata extraction by calling an LLM API service such as llama.cpp running locally. Here is an outline of the changes (a rough sketch of the extractor's API call follows the list):

  • src/settings.py, src/util.py and .env.example have been extended to handle the LLM configuration settings LLM_API_URL, LLM_API_KEY and LLM_MODEL
  • a new LLMExtractor class
  • the Meteor class stores the LLM configuration and calls LLMExtractor if the backend=LLMExtractor parameter is given in the API method call
  • the MeteorDocument class has a new method extract_text_as_json that returns the text and pdfinfo metadata in the JSON format that the LLMs expect
  • a new select field for choosing the backend (Finder or LLMExtractor) has been added to the HTML template index.html; the default value is Finder
  • a new Origin value LLM has been added, and all information coming from the LLM is tagged with that origin. The existing values didn't seem to fit, because the LLM (at least currently) won't tell where in the document the information came from.
  • unit tests for the LLMExtractor functionality. The LLM API service is mocked using unittest.mock so that the tests don't have to set up a real LLM service and wait for its responses.
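
To make the design above more concrete, here is a rough, hypothetical sketch of what the LLMExtractor's call to the API service could look like. This is not the actual code in this PR; it assumes an OpenAI-compatible chat completions endpoint (which the llama.cpp server provides) and the configuration settings listed above:

import json
import requests

# illustrative prompt; the real prompt is defined by the fine-tuned models
SYSTEM_PROMPT = "Extract bibliographic metadata from the document and answer in JSON."

class LLMExtractor:
    def __init__(self, api_url: str, api_key: str = "", model: str = ""):
        # values taken from the LLM_API_URL, LLM_API_KEY and LLM_MODEL settings
        self.api_url = api_url
        self.api_key = api_key
        self.model = model

    def extract(self, document_json: str) -> dict:
        # document_json is the text + pdfinfo JSON from MeteorDocument.extract_text_as_json
        headers = {"Content-Type": "application/json"}
        if self.api_key:
            headers["Authorization"] = f"Bearer {self.api_key}"
        payload = {
            "model": self.model,
            "temperature": 0.0,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": document_json},
            ],
        }
        response = requests.post(f"{self.api_url}/v1/chat/completions",
                                 json=payload, headers=headers, timeout=120)
        response.raise_for_status()
        content = response.json()["choices"][0]["message"]["content"]
        return json.loads(content)  # the fine-tuned models reply with plain JSON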

How it looks

There is a new select element for choosing the backend:

[Screenshot: the new backend select element in the Meteor web UI]

How to test it

  1. Install llama.cpp on your computer (on Linux, using CPU only: git clone the repository and run make to compile it)
  2. Download a fine-tuned model such as NatLibFi/Qwen2-0.5B-Instruct-FinGreyLit-GGUF in GGUF format, i.e. Qwen2-0.5B-Instruct-FinGreyLit-Q4_K_M.gguf
  3. Start the llama.cpp server using the GGUF model: ./llama-server -m Qwen2-0.5B-Instruct-FinGreyLit-Q4_K_M.gguf and leave it running
  4. Set the environment variable LLM_API_URL to point to the llama.cpp server API endpoint: export LLM_API_URL=http://localhost:8080 (or edit the .env file)
  5. Start up Meteor from this branch and run it normally. In the UI, select "LLMExtractor" as the extraction method. When performing API calls, set the parameter backend=LLMExtractor.

Example using a Norwegian document written in English:

$ time curl -d fileUrl=https://www.ssb.no/forside/_attachment/453137 -d backend=LLMExtractor http://127.0.0.1:5000/json
{"year":{"origin":{"type":"LLM"},"value":"2021"},"language":{"origin":{"type":"LLM"},"value":"eng"},"title":{"origin":{"type":"LLM"},"value":"Family composition and transitions into long-term care services among the elderly"},"publisher":{"origin":{"type":"LLM"},"value":"Statistics Norway"},"publicationType":null,"authors":[{"origin":{"type":"LLM"},"firstname":"Astri","lastname":"Syse"},{"origin":{"type":"LLM"},"firstname":"Alyona","lastname":"Artamonova"},{"origin":{"type":"LLM"},"firstname":"Michael","lastname":"Thomas"},{"origin":{"type":"LLM"},"firstname":"Marijke","lastname":"Veenstra"}],"isbn":null,"issn":null}
real	0m19.678s
user	0m0.000s
sys	0m0.015s
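
For reference, the same request can be made from Python with the requests library (a minimal sketch; the endpoint and parameters are exactly those of the curl command above):

import requests

resp = requests.post(
    "http://127.0.0.1:5000/json",
    data={
        "fileUrl": "https://www.ssb.no/forside/_attachment/453137",
        "backend": "LLMExtractor",
    },
)
print(resp.json())  # the same JSON metadata as in the curl output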

Here is the same metadata as shown in the Meteor web UI:

[Screenshot: the extracted metadata shown in the Meteor web UI]

As far as I can tell, this metadata is correct, except that the LLM for some reason didn't pick up the ISSN on page 3. But this was using the relatively stupid Qwen2-0.5B based small model, not the larger Mistral-7B based model that gives much better quality responses.

Here is the same document again, but this time the LLM is the larger Mistral-7B based model, quantized to Q6_K GGUF format and running on a V100 GPU using llama.cpp, with all 33 layers offloaded to the GPU (requiring around 12.5 GB of VRAM).

$ time curl -d fileUrl=https://www.ssb.no/forside/_attachment/453137 -d backend=LLMExtractor http://127.0.0.1:5000/json
{"year":{"origin":{"type":"LLM"},"value":"2021"},"language":{"origin":{"type":"LLM"},"value":"eng"},"title":{"origin":{"type":"LLM"},"value":"Family composition and transitions into long-term care services among the elderly"},"publisher":{"origin":{"type":"LLM"},"value":"Statistics Norway"},"publicationType":null,"authors":[{"origin":{"type":"LLM"},"firstname":"Astri","lastname":"Syse"},{"origin":{"type":"LLM"},"firstname":"Alyona","lastname":"Artamonova"},{"origin":{"type":"LLM"},"firstname":"Michael","lastname":"Thomas"},{"origin":{"type":"LLM"},"firstname":"Marijke","lastname":"Veenstra"}],"isbn":null,"issn":{"origin":{"type":"LLM"},"value":"1892-753X"}}
real    0m3.358s
user    0m0.002s
sys     0m0.003s

Note that the request now completed in 3.4 seconds (including downloading the PDF) and this time the ISSN was successfully extracted as well.

Missing functionality

  • The LLMs already support returning more metadata fields than Meteor does, such as DOI, p-ISBN, p-ISSN and the COAR resource type, but Meteor doesn't handle these, so the information is lost. Meteor should be extended to return the new fields as well (a rough sketch of what that could look like follows below).
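
As a purely illustrative sketch of such an extension (the field names below are assumptions, not Meteor's actual schema), the extended result model could look roughly like this:

from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtendedMetadata:
    # fields Meteor already returns (authors, publicationType etc. omitted for brevity)
    title: Optional[str] = None
    year: Optional[str] = None
    language: Optional[str] = None
    publisher: Optional[str] = None
    isbn: Optional[str] = None
    issn: Optional[str] = None
    # additional fields the fine-tuned LLMs can already produce
    doi: Optional[str] = None
    p_isbn: Optional[str] = None
    p_issn: Optional[str] = None
    coar_resource_type: Optional[str] = None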

Code/implementation issues

  • The extraction of text and metadata in MeteorDocument.extract_text_as_json is implemented separately from the text extraction that the class already performs, so there is some duplicated code and extra work. It should probably be integrated better with the existing code. One issue is that Meteor by default looks at the first 5 and last 5 pages, while the LLM extractors have been developed using text from the first 8 and last 2 pages. Also, text extraction for the LLMs uses the parameter sorted=True, which Meteor doesn't use. Such differences in detail make it hard to reuse the existing code.
  • The LLMs have been trained to return ISO 639-3 three-letter language codes, while Meteor uses BCP47 codes that are often only two letters. I didn't try to do any mapping, but I think that in the future the LLMs could perhaps switch to BCP47, since that's what is generally used on the Internet; in the meantime a simple lookup could bridge the gap (see the sketch after this list).
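
If a mapping is wanted on the Meteor side in the meantime, a trivial lookup would do; this is only a sketch covering a handful of languages relevant here, not a complete solution:

ISO639_3_TO_BCP47 = {
    "fin": "fi",
    "swe": "sv",
    "eng": "en",
    "nor": "no",
    "nob": "nb",
    "nno": "nn",
}

def to_bcp47(code: str) -> str:
    # fall back to the original code: BCP47 allows three-letter subtags
    # for languages that have no two-letter code
    return ISO639_3_TO_BCP47.get(code.lower(), code)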

Other potential problems

  • There are a few code style issues in Meteor that I find a bit confusing. For example, _underscore_methods (or __double_underscore_methods) and underscore variables, which signal that something is internal to a class, aren't used consistently. Also, some classes have a habit of mutating objects and accessing them from inside another class when they could simply pass them around instead; for example, Finder.extract_metadata could be given a MeteorDocument as a parameter and return a Metadata object. I'd perhaps like to make a few small changes in these areas, but that's outside the scope of this PR; here I just tried to follow the established style.
  • I noticed that the API method definitions for the / and /json methods in src/routes/extract.py don't properly declare the parameters supported by each method; instead, the parameters are read at runtime from the request/form data. As a consequence, the Swagger-UI documentation doesn't show the parameters and instead claims that these two methods take no parameters, which also means they cannot be tested using Swagger-UI. I think I could fix this in a separate PR; a rough sketch of one way to declare the parameters follows below.
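
To illustrate the idea only (whether a Flask-RESTX style setup fits Meteor's actual stack is an assumption here, and names like api and ExtractJson are made up), declaring the parameters could look roughly like this:

from flask_restx import Namespace, Resource, reqparse

api = Namespace("extract")

# declare the form parameters so that Swagger-UI can document and exercise them
parser = reqparse.RequestParser()
parser.add_argument("fileUrl", type=str, location="form",
                    help="URL of the PDF document to analyse")
parser.add_argument("backend", type=str, location="form",
                    choices=("Finder", "LLMExtractor"), default="Finder",
                    help="Extraction backend to use")

@api.route("/json")
class ExtractJson(Resource):
    @api.expect(parser)
    def post(self):
        args = parser.parse_args()
        # ... run the selected backend and return the extracted metadata ...
        return {"backend": args["backend"]}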

Fixes #21


osma commented Aug 16, 2024

I've added a few more features to the original draft implementation (better configuration handling and the ability to select the backend in the web UI and in API methods) and updated the OP accordingly.
