
Getting Started with Perpendicularity

Complete guide to installing, configuring, and running your first queries with Perpendicularity.


📋 Prerequisites

Required

  • Python 3.11+ (download)
  • Git for cloning the repository
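You can confirm both requirements from a shell before installing; this is generic tooling checks, not part of Perpendicularity:

```shell
# Confirm required tooling is present and new enough
python3 --version   # should report Python 3.11 or newer
git --version

# Enforce the minimum Python version programmatically
if python3 -c 'import sys; raise SystemExit(0 if sys.version_info >= (3, 11) else 1)'; then
  echo "Python version OK"
else
  echo "Python too old: need 3.11+"
fi
```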

Optional (for specific features)

  • Docker for containerized deployment
  • NVIDIA GPU for local models (HuggingFace Transformers)
  • API Keys for cloud models:
    • Google AI API key for Gemini models
    • Anthropic API key for Claude models
    • OpenAI API key (if using OpenAI models)

For EC2 Deployment

  • AWS EC2 instance (recommended: g5.xlarge with 24GB GPU for Ollama)
  • Ubuntu 22.04 or Amazon Linux 2023

🚀 Installation

Option 1: Using uv (Recommended - Fastest)

# Install uv if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone repository
git clone https://github.com/t-neumann/perpendicularity.git
cd perpendicularity

# Install with API extras (includes FastAPI, uvicorn)
uv sync --extra api

# Or install with local model support (includes transformers, torch)
uv sync --extra local-models --extra api

Option 2: Using pip

# Clone repository
git clone https://github.com/t-neumann/perpendicularity.git
cd perpendicularity

# Install in development mode
pip install -e .

# Or install with extras
pip install -e ".[api]"              # For API server
pip install -e ".[local-models]"     # For HuggingFace models
pip install -e ".[api,local-models]" # For both

Option 3: Docker (Production)

# Clone repository
git clone https://github.com/t-neumann/perpendicularity.git
cd perpendicularity

# Build Docker image
docker buildx build --platform linux/amd64 -t perpendicularity:0.1.0 .

# Run (see Deployment section for full options)
docker run -p 8000:8000 perpendicularity:0.1.0

⚙️ Configuration

Step 1: Set Up API Keys (for Cloud Models)

If you plan to use Gemini or Claude, set up API keys:

# Option A: Environment variables (recommended)
export GOOGLE_API_KEY="your-gemini-api-key"
export ANTHROPIC_API_KEY="your-claude-api-key"

# Option B: Add to shell profile for persistence
echo 'export GOOGLE_API_KEY="your-key"' >> ~/.bashrc
echo 'export ANTHROPIC_API_KEY="your-key"' >> ~/.bashrc
source ~/.bashrc
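A quick pre-flight check catches a missing key before a query fails mid-run. The `check_keys` helper below is our own sketch in portable sh, not a Perpendicularity command:

```shell
# Warn about any missing cloud API keys (illustrative helper, POSIX sh)
check_keys() {
  missing=0
  for var in GOOGLE_API_KEY ANTHROPIC_API_KEY; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "WARNING: $var is not set"
      missing=1
    fi
  done
  return "$missing"
}

if check_keys; then
  echo "all cloud API keys present"
fi
```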

Get API Keys:

  • Gemini: Google AI Studio (https://aistudio.google.com/)
  • Claude: Anthropic Console (https://console.anthropic.com/)

Step 2: Configure MCP Servers

Edit config/agent_config.yaml to point to your MCP server instances:

# config/agent_config.yaml

mcp_servers:
  genomic_ops:
    url: "http://your-genomic-server:8000/mcp"
    transport: "streamable-http"
  
  txgemma:
    url: "http://your-txgemma-server:8000/mcp"
    transport: "streamable-http"
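Once the URLs are in place, a basic reachability check can save debugging time. The hostnames below are the placeholders from the config above; any HTTP response counts as reachable here, since a streamable-http MCP endpoint may reject a plain GET:

```shell
# Check that each configured MCP endpoint answers at all
# (placeholder hostnames -- substitute your own)
for url in http://your-genomic-server:8000/mcp http://your-txgemma-server:8000/mcp; do
  if curl -s --max-time 5 -o /dev/null "$url"; then
    echo "reachable: $url"
  else
    echo "unreachable: $url"
  fi
done
```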

Setting up MCP Servers:

Step 3: Choose Your Default Model

Edit config/agent_config.yaml:

# For cloud models (requires API keys)
default_model: "gemini"

# For local models (requires Ollama or HuggingFace)
default_model: "ollama_qwen14b"

See Models Guide for detailed model comparison.


🧪 Verify Installation

Test that everything is working:

# Test CLI is installed
perpendicularity --version
# Should show: 0.1.0

# Test with a simple question (uses default model)
perpendicularity ask "What is aspirin?"

# Test with specific model
perpendicularity ask "What is aspirin?" --model gemini

If this works, you're ready to go! ✅
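For scripted setups, a guard like the following avoids confusing errors when the CLI is not on PATH. `command -v` is standard POSIX; the messages are our own:

```shell
# Guard for scripts: is the CLI actually installed?
if command -v perpendicularity >/dev/null 2>&1; then
  echo "perpendicularity found"
else
  echo "perpendicularity not on PATH: activate your venv or re-run the install"
fi
```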


💡 Your First Queries

Example 1: Simple Drug Question

perpendicularity ask "Which is safer: aspirin or ibuprofen?"

What happens:

  1. Agent connects to configured model (e.g., Gemini)
  2. Evaluates both drugs using TxGemma-MCP tools
  3. Searches literature for safety data
  4. Provides evidence-based recommendation

Example 2: Genomic Analysis

perpendicularity ask \
  "For human locus chr8:127735434-127742951, find genes, \
   evaluate therapeutic relevance, \
   and suggest candidate drugs" \
  --prompt genomics

What happens:

  1. Queries GenomicOps-MCP for genes in the human region
  2. Returns an annotated gene list with human coordinates
  3. Evaluates therapeutic relevance and suggests candidate drugs via TxGemma-MCP tools

Example 3: Interactive Mode

perpendicularity interactive

Features:

  • Conversational interface
  • Multi-turn dialogue
  • Rich terminal formatting (automatic)
  • History and context retention

Example session:

You: What is the SMILES for aspirin?
Agent: The SMILES for aspirin is: CC(=O)OC1=CC=CC=C1C(=O)O

You: Evaluate its toxicity
Agent: [Uses TxGemma-MCP to evaluate toxicity...]

You: Compare it to ibuprofen
Agent: [Fetches ibuprofen data and compares...]

Exit with Ctrl+C or type exit.


🎨 Output Modes

Perpendicularity automatically detects your environment:

Rich Mode (Interactive Terminal)

When running in a terminal (TTY), you get:

  • ✅ Colored output
  • ✅ Formatted tables
  • ✅ Syntax highlighting
  • ✅ Progress indicators
  • ✅ Step-by-step reasoning display

perpendicularity ask "What is aspirin?"
# Automatically uses rich formatting

Plain Mode (Scripts/Pipes)

When piped or in scripts, output is plain text:

perpendicularity ask "What is aspirin?" | tee output.txt
# Automatically switches to plain text

perpendicularity ask "What is aspirin?" > result.txt
# Plain text for file output

Force Plain Mode

perpendicularity ask "What is aspirin?" --plain
# Always outputs plain text, even in terminal
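The automatic detection described above is, in all likelihood, the standard TTY check; the snippet below illustrates the same logic in plain shell (an illustration, not Perpendicularity's code):

```shell
# The usual auto-detection test: is stdout attached to a terminal?
if [ -t 1 ]; then
  echo "stdout is a terminal -> rich output"
else
  echo "stdout is piped or redirected -> plain output"
fi
```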

🔧 Advanced Configuration

Customize Agent Behavior

# Use more reasoning steps
perpendicularity ask "complex question" --max-steps 10

# Use different agent
perpendicularity ask "question" --agent-type react

# Use different prompt strategy
perpendicularity ask "question" --prompt conservative

# Combine options
perpendicularity ask "question" \
  --model claude \
  --agent-type langgraph \
  --prompt genomics \
  --max-steps 7

Configuration File

For persistent settings, edit config/agent_config.yaml:

# Set defaults
default_model: "ollama_qwen14b"

agent:
  type: "langgraph"
  max_steps: 5
  verbose: true

# Add custom models
models:
  my_custom_model:
    type: "openai"
    name: "custom-model-name"
    base_url: "http://localhost:8080/v1"

See Configuration Reference for all options.


🌐 Starting the Web Interface

Quick Start

# Start API server
perpendicularity api

# Access at http://localhost:8000

With Options

# Development mode with auto-reload
perpendicularity api --reload --log-level debug

# Production with multiple workers
perpendicularity api --workers 4 --log-level warning

# Custom port
perpendicularity api --port 3000

# Custom config
perpendicularity api --config my_config.yaml

Web Interface Features:

  • Real-time streaming of agent reasoning
  • Model selection dropdown
  • Agent type selection
  • Markdown-rendered responses
  • Syntax highlighting for code/SMILES

See API Guide and Frontend Guide.


📚 Next Steps

Now that you're set up:

  1. Learn about Models: Models Guide - Choose the right model for your use case
  2. Understand Agents: Agents Guide - LangGraph vs ReAct
  3. Master the CLI: CLI Guide - Complete command reference
  4. Configure Everything: Configuration Reference - All options explained
  5. Deploy to Production: Deployment Guide - Docker, EC2, scaling

🎯 Common Workflows

Research Workflow

# 1. Start with exploratory prompt
perpendicularity ask "Find genes related to diabetes" --prompt exploratory

# 2. Evaluate specific targets
perpendicularity ask "What drugs target gene XYZ?" --prompt genomics

# 3. Safety assessment
perpendicularity ask "Evaluate toxicity of drug ABC" --prompt conservative

Development Workflow

# 1. Test with local model (fast, free)
perpendicularity ask "test query" --model ollama_qwen14b

# 2. Refine with more steps
perpendicularity ask "test query" --model ollama_qwen14b --max-steps 10

# 3. Production run with cloud model
perpendicularity ask "test query" --model gemini

Batch Processing

# Process multiple queries (read -r preserves backslashes; </dev/null keeps
# the CLI from consuming the remaining queries on stdin)
while IFS= read -r query; do
  perpendicularity ask "$query" --plain >> results.txt </dev/null
done < queries.txt

# Or use xargs
cat queries.txt | xargs -I {} perpendicularity ask "{}" --plain
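The loop runs queries one at a time. With `xargs -P` (supported by GNU and BSD xargs) they can run in parallel; `echo` stands in for the real CLI here so the pattern is runnable anywhere:

```shell
# Parallel batch sketch: up to 4 queries at once.
# Replace 'echo "asking: {}"' with: perpendicularity ask "{}" --plain
printf '%s\n' "What is aspirin?" "What is ibuprofen?" > queries.txt
xargs -P 4 -I {} echo "asking: {}" < queries.txt > results.txt
cat results.txt
```

Note that parallel output interleaves, so write one file per query if ordering matters.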

💡 Tips & Best Practices

1. Start with Local Models for Testing

# Fast iteration with free local model
perpendicularity ask "test" --model ollama_qwen14b

# Once confident, use cloud model for quality
perpendicularity ask "test" --model gemini

2. Use Appropriate Prompts

# Safety-critical decisions
--prompt conservative

# Hypothesis generation
--prompt exploratory

# Genomic analysis
--prompt genomics

# General use
--prompt default

3. Increase Steps for Complex Queries

# Simple query: 3-5 steps sufficient
perpendicularity ask "What is aspirin?" --max-steps 3

# Complex analysis: 7-10 steps
perpendicularity ask "Compare 5 drugs for efficacy and safety" --max-steps 10

4. Use Plain Mode for Automation

# In scripts, always use --plain
perpendicularity ask "$query" --plain > output.txt

# Prevents ANSI codes in output files

5. Monitor Costs with Cloud Models

# Gemini: ~$0.10-0.50 per complex query
# Claude: ~$0.50-2.00 per complex query
# Ollama: $0.00 per query (hardware cost only)

# For development, prefer Ollama:
perpendicularity ask "test" --model ollama_qwen14b

🎓 Learning Resources

Documentation

External Resources


You're ready to start discovering therapeutic insights! 🧬💊✨

For questions or issues, see Troubleshooting or open an issue.