Introduction
NucliaDB is the database platform Nuclia uses to store and index data.
Core features:
- Easily compare the vectors from different models (see the sketch after this list).
- Store text, files and vectors, labels and annotations.
- Access and modify your resources efficiently.
- Perform semantic, keyword, fulltext and graph searches.
- Export your data in a format compatible with most NLP pipelines (HuggingFace datasets, PyTorch, etc.).
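For example, the upload call shown in the Quick start below accepts one vector per vectorset, so the same text can be indexed under several models side by side. A minimal sketch (the second model name is only an illustration):

from nucliadb_sdk import create_knowledge_box
from sentence_transformers import SentenceTransformer

comparison_kb = create_knowledge_box("model_comparison_kb")
minilm = SentenceTransformer("all-MiniLM-L6-v2")
mpnet = SentenceTransformer("all-mpnet-base-v2")  # illustrative second model

text = "I'm Sierra, a very happy dog"
comparison_kb.upload(
    text=text,
    # One entry per vectorset: the same text, embedded by two models
    vectors={
        "all-MiniLM-L6-v2": minilm.encode([text])[0],
        "all-mpnet-base-v2": mpnet.encode([text])[0],
    },
)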
Quick start
1. Install NucliaDB and run it locally
pip install nucliadb
nucliadb
Or with Docker:
docker pull nuclia/nucliadb:latest
docker run -it -p 8080:8080 -v nucliadb-standalone:/data nuclia/nucliadb:latest
2. Create your first Knowledge Box
A Knowledge Box is a data container in NucliaDB.
To help you interact with NucliaDB, install the Python SDK first:
pip install nucliadb_sdk
Then with just a few lines of code, you can start filling NucliaDB with data:
from nucliadb_sdk import create_knowledge_box
my_kb = create_knowledge_box("my_new_kb")
3. Upload data
To help you upload data, you can also use the sentence_transformers Python package:
pip install sentence_transformers
You can use it to compute vectors and insert them alongside your data:
from nucliadb_sdk import Entity, File
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

resource_id = my_kb.upload(
    key="mykey1",
    binary=File(data=b"asd", filename="data"),
    text="I'm Sierra, a very happy dog",
    labels=["emotion/positive"],
    # (4, 9) is the character span of "Sierra" in the text above
    entities=[Entity(type="NAME", value="Sierra", positions=[(4, 9)])],
    vectors={"all-MiniLM-L6-v2": encoder.encode(["I'm Sierra, a very happy dog"])[0]},
)

# Retrieve the resource you just created by its key
my_kb["mykey1"]
Then insert more data to improve your search index:
sentences = [
    "She's having a terrible day", "what a delightful day",
    "Dog in catalan is gos", "he is heartbroken",
    "He said that the race is quite tough", "love is tough",
]
labels = [
    "emotion/negative", "emotion/positive",
    "emotion/neutral", "emotion/negative",
    "emotion/neutral", "emotion/negative",
]
for sentence, label in zip(sentences, labels):
    my_kb.upload(
        text=sentence,
        labels=[label],
        vectors={"all-MiniLM-L6-v2": encoder.encode([sentence])[0]},
    )
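Since sentence-transformers can encode a whole list in one call, you can also embed everything in a single batch before uploading, which is typically faster than encoding one sentence at a time. An equivalent sketch of the loop above:

# Encode all sentences in one forward pass, then upload each with its vector
embeddings = encoder.encode(sentences)
for sentence, label, embedding in zip(sentences, labels, embeddings):
    my_kb.upload(
        text=sentence,
        labels=[label],
        vectors={"all-MiniLM-L6-v2": embedding},
    )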
4. Search
Finally, you can perform a search on your data:
# Reuse the encoder defined above to embed the query
query_vectors = encoder.encode(["To be in love"])[0]
# min_score drops matches whose similarity to the query falls below 0.25
results = my_kb.search(vector=query_vectors, vectorset="all-MiniLM-L6-v2", min_score=0.25)
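Vector search is only one of the query modes listed in the introduction; the same call can also be driven by plain text or by label filters. A hedged sketch (the parameter names below follow the nucliadb_sdk examples, so double-check them against your installed SDK version):

# Keyword/fulltext search over the same Knowledge Box (assumed `text` parameter)
results = my_kb.search(text="dog")
# Filter by a label applied at upload time (assumed `filter` parameter)
results = my_kb.search(filter=["emotion/positive"])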
Connecting your database to Nuclia Cloud
Connecting your database to Nuclia Cloud lets you keep ownership of your data while using Nuclia's Understanding API™. Sign up for a free account to get an API key.
Nuclia's Understanding API™ provides data extraction, enrichment, and inference. With it, Nuclia does the heavy lifting for you while your data stays your own.
To enable it, set the NUA_API_KEY environment variable when you run NucliaDB:
docker run -it -e NUA_API_KEY=<YOUR-NUA-API-KEY> \
-p 8080:8080 -v nucliadb-standalone:/data nuclia/nucliadb:latest
Then, upload a file into your knowledge box:
curl "http://localhost:8080/api/v1/kb/<KB_UUID>/upload" \
-X POST \
-H "X-NUCLIADB-ROLES: WRITER" \
-H "X-FILENAME: `echo -n "myfile" | base64`"
-T /path/to/file
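The same upload can be done from Python; this sketch simply mirrors the curl command above using the requests library (the KB UUID and file path are placeholders for your own values):

import base64
import requests

KB_UUID = "<KB_UUID>"  # placeholder, as in the curl example

with open("/path/to/file", "rb") as f:
    resp = requests.post(
        f"http://localhost:8080/api/v1/kb/{KB_UUID}/upload",
        headers={
            "X-NUCLIADB-ROLES": "WRITER",
            # The filename travels base64-encoded in the X-FILENAME header
            "X-FILENAME": base64.b64encode(b"myfile").decode(),
        },
        data=f,
    )
resp.raise_for_status()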
After the data has been processed, you will be able to search against it:
curl "http://localhost:8080/api/v1/kb/<KB_UUID>/search?query=your+own+query" \
  -H "X-NUCLIADB-ROLES: READER"