Complete guide to installing, configuring, and running your first queries with Perpendicularity.
- Python 3.11+ (download)
- Git for cloning the repository
- Docker for containerized deployment
- NVIDIA GPU for local models (HuggingFace Transformers)
- API Keys for cloud models:
- Google AI API key for Gemini models
- Anthropic API key for Claude models
- OpenAI API key (if using OpenAI models)
- AWS EC2 instance (recommended: g5.xlarge with 24GB GPU for Ollama)
- Ubuntu 22.04 or Amazon Linux 2023
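Before installing, it can save time to confirm the Python requirement is met. A one-liner sketch (assumes `python3` is the interpreter on your PATH):

```shell
# Check the interpreter on PATH against the 3.11+ requirement
python3 -c 'import sys; print("OK" if sys.version_info >= (3, 11) else "too old:", sys.version.split()[0])'
```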
# Install uv if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone repository
git clone https://github.com/t-neumann/perpendicularity.git
cd perpendicularity
# Install with API extras (includes FastAPI, uvicorn)
uv sync --extra api
# Or install with local model support (includes transformers, torch)
uv sync --extra local-models --extra api

# Clone repository
git clone https://github.com/t-neumann/perpendicularity.git
cd perpendicularity
# Install in development mode
pip install -e .
# Or install with extras
pip install -e ".[api]" # For API server
pip install -e ".[local-models]" # For HuggingFace models
pip install -e ".[api,local-models]" # For both

# Clone repository
git clone https://github.com/t-neumann/perpendicularity.git
cd perpendicularity
# Build Docker image
docker buildx build --platform linux/amd64 -t perpendicularity:0.1.0 .
# Run (see Deployment section for full options)
docker run -p 8000:8000 perpendicularity:0.1.0

If you plan to use Gemini or Claude, set up API keys:
# Option A: Environment variables (recommended)
export GOOGLE_API_KEY="your-gemini-api-key"
export ANTHROPIC_API_KEY="your-claude-api-key"
# Option B: Add to shell profile for persistence
echo 'export GOOGLE_API_KEY="your-key"' >> ~/.bashrc
echo 'export ANTHROPIC_API_KEY="your-key"' >> ~/.bashrc
source ~/.bashrc

Get API Keys:
- Gemini: Google AI Studio
- Claude: Anthropic Console
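Before running cloud models, a quick shell check can confirm the keys are actually visible to the environment. A minimal sketch (the variable names match those used above; the check itself is just a convenience, not part of Perpendicularity):

```shell
# Report which required API keys are missing from the environment
missing=""
[ -n "${GOOGLE_API_KEY:-}" ] || missing="$missing GOOGLE_API_KEY"
[ -n "${ANTHROPIC_API_KEY:-}" ] || missing="$missing ANTHROPIC_API_KEY"
if [ -z "$missing" ]; then
  echo "all API keys set"
else
  echo "missing:$missing"
fi
```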
Edit config/agent_config.yaml to point to your MCP server instances:
# config/agent_config.yaml
mcp_servers:
  genomic_ops:
    url: "http://your-genomic-server:8000/mcp"
    transport: "streamable-http"
  txgemma:
    url: "http://your-txgemma-server:8000/mcp"
    transport: "streamable-http"

Setting up MCP Servers:
- GenomicOps-MCP: Setup Instructions
- TxGemma-MCP: Setup Instructions
Edit config/agent_config.yaml:
# For cloud models (requires API keys)
default_model: "gemini"
# For local models (requires Ollama or HuggingFace)
default_model: "ollama_qwen14b"

See Models Guide for detailed model comparison.
Test that everything is working:
# Test CLI is installed
perpendicularity --version
# Should show: 0.1.0
# Test with a simple question (uses default model)
perpendicularity ask "What is aspirin?"
# Test with specific model
perpendicularity ask "What is aspirin?" --model gemini

If this works, you're ready to go! ✅
perpendicularity ask "Which is safer: aspirin or ibuprofen?"

What happens:
- Agent connects to configured model (e.g., Gemini)
- Evaluates both drugs using TxGemma-MCP tools
- Searches literature for safety data
- Provides evidence-based recommendation
perpendicularity ask \
  "For human locus chr8:127735434-127742951, find genes, \
  evaluate therapeutic relevance, \
  and suggest candidate drugs" \
  --prompt genomics

What happens:
- Queries GenomicOps-MCP for genes in human region
- Returns annotated gene list with human coordinates
perpendicularity interactive

Features:
- Conversational interface
- Multi-turn dialogue
- Rich terminal formatting (automatic)
- History and context retention
Example session:
You: What is the SMILES for aspirin?
Agent: The SMILES for aspirin is: CC(=O)OC1=CC=CC=C1C(=O)O
You: Evaluate its toxicity
Agent: [Uses TxGemma-MCP to evaluate toxicity...]
You: Compare it to ibuprofen
Agent: [Fetches ibuprofen data and compares...]
Exit with Ctrl+C or type exit.
Perpendicularity automatically detects your environment:
When running in a terminal (TTY), you get:
- ✅ Colored output
- ✅ Formatted tables
- ✅ Syntax highlighting
- ✅ Progress indicators
- ✅ Step-by-step reasoning display
perpendicularity ask "What is aspirin?"
# Automatically uses rich formatting

When piped or in scripts, output is plain text:
perpendicularity ask "What is aspirin?" | tee output.txt
# Automatically switches to plain text
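Under the hood, this kind of auto-detection comes down to a TTY check on stdout. Conceptually (a generic shell idiom, not Perpendicularity's actual source):

```shell
# Pick an output mode based on whether stdout is a terminal
if [ -t 1 ]; then
  echo "terminal detected: rich formatting"
else
  echo "piped or redirected: plain text"
fi
```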
perpendicularity ask "What is aspirin?" > result.txt
# Plain text for file output

perpendicularity ask "What is aspirin?" --plain
# Always outputs plain text, even in terminal

# Use more reasoning steps
perpendicularity ask "complex question" --max-steps 10
# Use different agent
perpendicularity ask "question" --agent-type react
# Use different prompt strategy
perpendicularity ask "question" --prompt conservative
# Combine options
perpendicularity ask "question" \
  --model claude \
  --agent-type langgraph \
  --prompt genomics \
  --max-steps 7

For persistent settings, edit config/agent_config.yaml:
# Set defaults
default_model: "ollama_qwen14b"
agent:
  type: "langgraph"
  max_steps: 5
  verbose: true
# Add custom models
models:
  my_custom_model:
    type: "openai"
    name: "custom-model-name"
    base_url: "http://localhost:8080/v1"

See Configuration Reference for all options.
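A YAML indentation slip is a common reason a config silently misbehaves, so it can be worth parse-checking the file before starting the agent. A sketch assuming python3 with PyYAML installed, run here against a temporary copy that mirrors the example above:

```shell
# Write a minimal config and verify it parses with the expected structure
cat > /tmp/agent_config_check.yaml <<'EOF'
default_model: "ollama_qwen14b"
agent:
  type: "langgraph"
  max_steps: 5
EOF
python3 -c "
import yaml  # PyYAML, assumed installed
cfg = yaml.safe_load(open('/tmp/agent_config_check.yaml'))
print(cfg['default_model'], cfg['agent']['max_steps'])
"
```

Point the `open()` call at `config/agent_config.yaml` to check your real file.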
# Start API server
perpendicularity api
# Access at http://localhost:8000

# Development mode with auto-reload
perpendicularity api --reload --log-level debug
# Production with multiple workers
perpendicularity api --workers 4 --log-level warning
# Custom port
perpendicularity api --port 3000
# Custom config
perpendicularity api --config my_config.yaml

Web Interface Features:
- Real-time streaming of agent reasoning
- Model selection dropdown
- Agent type selection
- Markdown-rendered responses
- Syntax highlighting for code/SMILES
See API Guide and Frontend Guide.
Now that you're set up:
- Learn about Models: Models Guide - Choose the right model for your use case
- Understand Agents: Agents Guide - LangGraph vs ReAct
- Master the CLI: CLI Guide - Complete command reference
- Configure Everything: Configuration Reference - All options explained
- Deploy to Production: Deployment Guide - Docker, EC2, scaling
# 1. Start with exploratory prompt
perpendicularity ask "Find genes related to diabetes" --prompt exploratory
# 2. Evaluate specific targets
perpendicularity ask "What drugs target gene XYZ?" --prompt genomics
# 3. Safety assessment
perpendicularity ask "Evaluate toxicity of drug ABC" --prompt conservative

# 1. Test with local model (fast, free)
perpendicularity ask "test query" --model ollama_qwen14b
# 2. Refine with more steps
perpendicularity ask "test query" --model ollama_qwen14b --max-steps 10
# 3. Production run with cloud model
perpendicularity ask "test query" --model gemini

# Process multiple queries
cat queries.txt | while IFS= read -r query; do
  perpendicularity ask "$query" --plain >> results.txt
done
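If a results file was already captured without `--plain` and contains ANSI escape codes, they can be stripped after the fact with a standard sed expression (GNU sed shown; this idiom is not specific to Perpendicularity):

```shell
# Strip ANSI color/style escape sequences from piped text
printf '\033[1;31mWarning:\033[0m check dosage\n' \
  | sed 's/\x1b\[[0-9;]*m//g'
```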
# Or use xargs
cat queries.txt | xargs -I {} perpendicularity ask "{}" --plain

# Fast iteration with free local model
perpendicularity ask "test" --model ollama_qwen14b
# Once confident, use cloud model for quality
perpendicularity ask "test" --model gemini

# Safety-critical decisions
--prompt conservative
# Hypothesis generation
--prompt exploratory
# Genomic analysis
--prompt genomics
# General use
--prompt default

# Simple query: 3-5 steps sufficient
perpendicularity ask "What is aspirin?" --max-steps 3
# Complex analysis: 7-10 steps
perpendicularity ask "Compare 5 drugs for efficacy and safety" --max-steps 10

# In scripts, always use --plain
perpendicularity ask "$query" --plain > output.txt
# Prevents ANSI codes in output files

# Gemini: ~$0.10-0.50 per complex query
# Claude: ~$0.50-2.00 per complex query
# Ollama: $0.00 per query (hardware cost only)
# For development, prefer Ollama:
perpendicularity ask "test" --model ollama_qwen14b

- README.md - Project overview
- Architecture - How it works
- Examples - Code examples
- Model Context Protocol - MCP standard
- LangGraph Docs - Agent framework
- GenomicOps-MCP - Genomics tools
- TxGemma-MCP - Therapeutics tools
You're ready to start discovering therapeutic insights! 🧬💊✨
For questions or issues, see Troubleshooting or open an issue.