The Nuclia Understanding API

The Nuclia Understanding API (or NUA) lets you call Nuclia's processing services and retrieve the results. It does not involve any Knowledge Box, and nothing gets stored in the Nuclia cloud infrastructure.

Authentication with a NUA key

  • CLI: nuclia auth nua REGION NUA_KEY

  • SDK:

    from nuclia import sdk
    sdk.NucliaAuth().nua(token=NUA_KEY)

To check which NUA keys you have access to, run:

  • CLI:

    nuclia auth nuas
  • SDK:

    from nuclia import sdk
    sdk.NucliaAuth().nuas()

To set the default NUA key, use:

nuclia auth default_nua NUA_CLIENT_ID

Services

Predict

predict can return the embeddings of an input text:

  • CLI:

    nuclia nua predict sentence --text="A SENTENCE"
  • SDK:

    from nuclia import sdk
    predict = sdk.NucliaPredict()
    predict.sentence(text="A SENTENCE")
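
The embedding returned by sentence can be used for similarity comparisons. Here is a minimal sketch computing cosine similarity between two sentences; it assumes the response exposes the vector as a data attribute, so check the actual response model of your SDK version:

    from nuclia import sdk

    predict = sdk.NucliaPredict()

    def cosine(a, b):
        # Plain cosine similarity between two equal-length vectors
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = sum(x * x for x in a) ** 0.5
        norm_b = sum(x * x for x in b) ** 0.5
        return dot / (norm_a * norm_b)

    # Assumption: the embedding vector is available as `.data` on the result
    emb1 = predict.sentence(text="The cat sat on the mat").data
    emb2 = predict.sentence(text="A feline was resting on the rug").data
    print(cosine(emb1, emb2))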

It can identify tokens in a text:

  • CLI:

    nuclia nua predict tokens --text="Who is Henriet? Does she speak English or Dutch?"

    tokens=[Token(text='Henriet', ner='PERSON', start=7, end=14), Token(text='English', ner='LANGUAGE', start=31, end=38), Token(text='Dutch', ner='LANGUAGE', start=42, end=47)] time=0.009547710418701172

  • SDK:

    from nuclia import sdk
    predict = sdk.NucliaPredict()
    predict.tokens(text="Who is Henriet? Does she speak English or Dutch?")
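
The SDK returns the same information as the CLI output above; the sketch below iterates over the detected entities, assuming the response exposes a tokens list as in the printed output:

    from nuclia import sdk

    predict = sdk.NucliaPredict()
    result = predict.tokens(text="Who is Henriet? Does she speak English or Dutch?")
    # Each token carries its text, entity type (ner) and character offsets
    for token in result.tokens:
        print(f"{token.text}: {token.ner} [{token.start}:{token.end}]")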

It can generate text from a prompt:

  • CLI:

    nuclia nua predict generate --text="How to tell a good story?"
  • SDK:

    from nuclia import sdk
    predict = sdk.NucliaPredict()
    predict.generate(text="How to tell a good story?")
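
Since the prompt is free text, it can be assembled programmatically. A minimal sketch building a prompt from variables before calling generate (the shape of the returned object depends on your SDK version, so it is simply printed here):

    from nuclia import sdk

    predict = sdk.NucliaPredict()

    topic = "a lighthouse keeper who loses her lamp"
    constraints = "three short paragraphs, suitable for children"
    prompt = f"Write a story about {topic}. Constraints: {constraints}."

    print(predict.generate(text=prompt))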

It can summarize a list of texts:

  • CLI:

    nuclia nua predict summarize --texts='["TEXT1", "TEXT2"]'
  • SDK:

    from nuclia import sdk
    predict = sdk.NucliaPredict()
    predict.summarize(texts=["TEXT1", "TEXT2"])
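
summarize takes a plain list of strings, so a longer document can be split into chunks first. A minimal sketch splitting a text on blank lines and summarizing the resulting paragraphs:

    from nuclia import sdk

    predict = sdk.NucliaPredict()

    document = (
        "First paragraph of a long report...\n\n"
        "Second paragraph with more details...\n\n"
        "Third paragraph with the conclusions..."
    )

    # One chunk per paragraph; empty chunks are dropped
    chunks = [part.strip() for part in document.split("\n\n") if part.strip()]
    print(predict.summarize(texts=chunks))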

It can generate a response to a question given a context:

  • CLI:

    nuclia nua predict rag --question="QUESTION" --context='["TEXT1", "TEXT2"]'
  • SDK:

    from nuclia import sdk
    predict = sdk.NucliaPredict()
    predict.rag(question="QUESTION", context=["TEXT1", "TEXT2"])
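
rag answers only from the context you provide, so results depend on passing relevant chunks. The sketch below uses a naive keyword-overlap score to pick the most relevant chunks before calling rag; the scoring is purely illustrative and not part of the API:

    from nuclia import sdk

    predict = sdk.NucliaPredict()

    question = "When was the company founded?"
    chunks = [
        "The company was founded in 2019 in Barcelona.",
        "Its main product is a search API.",
        "The team grew to 40 people in 2023.",
    ]

    # Illustrative relevance score: how many question words appear in the chunk
    def score(chunk: str) -> int:
        q_words = {w.strip("?.,!").lower() for w in question.split()}
        return sum(1 for w in chunk.split() if w.strip("?.,!").lower() in q_words)

    top_chunks = sorted(chunks, key=score, reverse=True)[:2]
    print(predict.rag(question=question, context=top_chunks))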

It can rephrase a user question into a form better suited to a search engine (optionally using a context):

  • CLI:

    nuclia nua predict rephrase --question="french revolution causes"
    > What were the causes of the French Revolution?
    nuclia nua predict rephrase --question="next step" --user_context='["pan con tomate recipe", "first step: blend the tomatoes"]'
    > What is the next step in the pan con tomate recipe after blending the tomatoes?
  • SDK:

    from nuclia import sdk
    predict = sdk.NucliaPredict()
    predict.rephrase(question="french revolution causes")
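
The CLI example above also passes a user context. Assuming the SDK method accepts a matching user_context parameter (check your SDK version for the exact signature), the equivalent call would be:

    from nuclia import sdk

    predict = sdk.NucliaPredict()
    # Assumption: user_context mirrors the CLI --user_context flag
    predict.rephrase(
        question="next step",
        user_context=["pan con tomate recipe", "first step: blend the tomatoes"],
    )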

You can provide a custom prompt to the rephrase method:

from nuclia import sdk
predict = sdk.NucliaPredict()
res = predict.rephrase(
    question="ONU creation date",
    prompt="Rephrase the question but preserve acronyms if any. Question: {question}",
)

Agent

agent lets you generate LLM agents from an initial prompt:

  • CLI:

    nuclia nua agent generate_prompt --text="Toronto" --agent_definition="city guide"

    (with the CLI, you will obtain the prompt text itself, not an agent directly)

  • SDK:

    from nuclia import sdk
    nuclia_agent = sdk.NucliaAgent()
    agent = nuclia_agent.generate_agent("Toronto", "city guide")
    print(agent.ask("Tell me about the parks"))

    (with the SDK, you obtain an agent directly; you can call ask on it to generate answers)

Process

process lets you send a file to Nuclia's processing service:

  • CLI:

    nuclia nua process file --path="path/to/file.txt"

    And you can check the status with:

    nuclia nua process status
  • SDK:

    from nuclia import sdk
    process = sdk.NucliaProcess()
    process.file(path="path/to/file.txt")
    print(process.status())
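
process works file by file, so a whole folder can be pushed in a loop and the queue checked afterwards. A minimal sketch using pathlib (the exact status output depends on your SDK version, so it is simply printed):

    from pathlib import Path

    from nuclia import sdk

    process = sdk.NucliaProcess()

    # Push every PDF in a folder to the processing queue
    for pdf in Path("path/to/folder").glob("*.pdf"):
        process.file(path=str(pdf))

    # Inspect the processing queue afterwards
    print(process.status())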