
Installation

Requirements:

  • macOS 12+ or Linux (glibc 2.31+)
  • No runtime dependencies required

Purchase a license at fialr.com/licensing to receive a download link and license key.

After purchasing, download the binary for your platform:

Platform                 File                       BLAKE3
macOS (Apple Silicon)    fialr-1.0.0-macos-arm64    a1b2c3d4...
macOS (Intel)            fialr-1.0.0-macos-x64      e5f6a7b8...
Linux (x64)              fialr-1.0.0-linux-x64      c9d0e1f2...

Full checksums and detached signatures are published with each release.

Verify the download by computing its BLAKE3 hash:
b3sum fialr-1.0.0-macos-arm64

Compare the output against the BLAKE3 hash listed above.
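If you saved the published checksum file locally (shown here under the hypothetical name B3SUMS; use whatever filename the release actually provides), b3sum can do the comparison for you:

b3sum -c B3SUMS

b3sum prints OK next to each file whose hash matches.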

Each binary ships with a detached Ed25519 signature (.sig file). Verify with the fialr public key:

minisign -Vm fialr-1.0.0-macos-arm64 -P RWSGmBqXfRPhKPRMN4E+JGuhMGZ6NlEiXPNnA8tY6VDz

The public key is also published at fialr.com/signing-key.
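On success, minisign prints:

Signature and comment signature verified

followed by the release's trusted comment. Any other result means the download does not match its signature and should not be run.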

Make the binary executable and move it to a directory on your PATH:
chmod +x fialr-1.0.0-macos-arm64
sudo mv fialr-1.0.0-macos-arm64 /usr/local/bin/fialr
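If you prefer not to use sudo, any directory on your PATH works. For example, a per-user bin directory:

mkdir -p ~/.local/bin
mv fialr-1.0.0-macos-arm64 ~/.local/bin/fialr
export PATH="$HOME/.local/bin:$PATH"    # add this line to your shell profile to make it permanent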

On first run, fialr requires a one-time activation with your license key:

fialr activate YOUR-LICENSE-KEY

This contacts api.fialr.com once to validate the key and bind it to the machine. After activation, fialr operates fully offline. No further network calls are made.
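If the machine reaches the internet only through an HTTP proxy, you may be able to route the one-time activation call through it by setting the standard proxy variable. This assumes fialr honors HTTPS_PROXY, which this page does not confirm:

HTTPS_PROXY=http://proxy.example.com:8080 fialr activate YOUR-LICENSE-KEY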

Each license allows activation on up to 3 machines.

Confirm the installation:

fialr --version
fialr 1.0.0 (build a1b2c3d4)

For AI-assisted enrichment, semantic search, and vector embeddings, install Ollama and pull the required models:

# Install Ollama
brew install ollama # macOS
# See ollama.com for Linux
# LLM for enrichment inference (~2 GB)
ollama pull llama3.2
# Embedding model for semantic search and similarity (~275 MB)
ollama pull nomic-embed-text

Both models require approximately 2.3 GB of disk space total. Ollama downloads them once and caches locally.
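To confirm both models were pulled and cached, list Ollama's local models:

ollama list

The output should include llama3.2 and nomic-embed-text along with their sizes.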

fialr runs without Ollama, but the following features require it: enrichment (AI metadata generation), vector embeddings, semantic search, and embedding-based near-duplicate detection. Without local models, these features are unavailable unless you configure a cloud provider.

If disk space is a constraint, you can skip the local models and use a cloud provider (Claude API) for enrichment instead:

fialr config ai --provider claude --key sk-ant-...

Cloud enrichment works for Tier 2–3 files. Tier 1 files require local AI by default (cloud access needs explicit two-step confirmation). Vector embeddings and semantic search currently require local Ollama — there is no cloud embedding provider.

To extract text from scanned PDFs and images, install Tesseract:

brew install tesseract # macOS
sudo apt install tesseract-ocr # Debian/Ubuntu

The ocrmypdf Python package (included in fialr[enrichment]) handles the OCR pipeline. Without Tesseract, scanned documents produce no extracted text and enrichment falls back to filename-only metadata. Native PDFs, Office documents, photos, and audio files are unaffected.
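Tesseract ships with English language data by default. For scans in other languages, install the matching language packs. For example, to add German:

brew install tesseract-lang           # macOS: installs all extra language packs
sudo apt install tesseract-ocr-deu    # Debian/Ubuntu: German data only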