59 changes: 0 additions & 59 deletions examples/agentic_learning_demo.py

This file was deleted.

1 change: 0 additions & 1 deletion examples/slackbot/pyproject.toml
Original file line number Diff line number Diff line change
@@ -42,7 +42,6 @@ dependencies = [
"raggy[tpuf] @ git+https://github.com/zzstoatzz/raggy.git",
"pretty-mod",
"claude-agent-sdk",
"agentic-learning",
]


10 changes: 6 additions & 4 deletions examples/slackbot/src/slackbot/_internal/templates.py
@@ -51,6 +51,7 @@
- **Assume Prefect 3.x:** Unless the user specifies otherwise, assume the user is using Prefect 3.x. You can mention this assumption IF RELEVANT (e.g., "In Prefect 3.x, you would...").
- **Code is King:** When providing code examples, ensure they are complete and correct. Use your `verify_import_statements` tool's output to guide you.
- **Honesty Over Invention:** If your tools don't find a clear answer, say so. It's better to admit a knowledge gap than to provide incorrect information.
- **Stay on Topic:** Only reference notes you've stored about the user if they are directly relevant to the current question.
- **Proportionality:** If asked a simple question, you don't need to do a bunch of work. Just answer the question once you find it. However, feel free to dig into broad questions.

## CRITICAL - Removed/Deprecated Features
@@ -62,17 +63,18 @@
If a user explicitly mentions using Prefect 2.x, that's fine, but recommend upgrading to 3.x or using workers in 2.x.

## Tool Usage Protocol
You have a suite of tools to gather information. Use them methodically.
You have a suite of tools to gather and store information. Use them methodically.

1. **For Technical/Conceptual Questions:** Use `research_prefect_topic`. It delegates to a specialized agent that will do comprehensive research for you.
2. **For Bugs or Error Reports:** Use `read_github_issues` to find existing discussions or solutions.
3. **For Community Discussions:** Use `search_github_discussions` to find existing GitHub discussions on topics.
4. **For Checking the Work of the Research Agent:** Use `explore_module_offerings` and `display_callable_signature` to verify specific syntax recommendations.
5. **For CLI Commands:** use `check_cli_command` with --help before suggesting any Prefect CLI command to verify it exists and has the correct syntax. This prevents suggesting non-existent commands.
4. **For Remembering User Details:** When a user shares information about their goals, environment, or preferences, use `store_facts_about_user` to save these details for future interactions.
5. **For Checking the Work of the Research Agent:** Use `explore_module_offerings` and `display_callable_signature` to verify specific syntax recommendations.
6. **For CLI Commands:** use `check_cli_command` with --help before suggesting any Prefect CLI command to verify it exists and has the correct syntax. This prevents suggesting non-existent commands.
- **IMPORTANT:** When checking commands that require optional dependencies (e.g., AWS, Docker, Kubernetes integrations), use the `uv run --with 'prefect[<extra>]'` syntax.
- Examples: `uv run --with 'prefect[aws]'`, `uv run --with 'prefect[docker]'`, `uv run --with 'prefect[kubernetes]'`
- This ensures the command runs with the necessary dependencies installed.
6. **For Creating GitHub Discussions (USE SPARINGLY):** Use `create_discussion_and_notify` only when:
7. **For Creating GitHub Discussions (USE SPARINGLY):** Use `create_discussion_and_notify` only when:
- The thread contains valuable insights, solutions, or patterns not documented elsewhere
- You've searched both issues and discussions and found no existing coverage of the topic
- The conversation would clearly benefit the broader Prefect community
14 changes: 6 additions & 8 deletions examples/slackbot/src/slackbot/api.py
@@ -5,7 +5,6 @@
from contextlib import asynccontextmanager
from typing import Any

from agentic_learning import learning
from fastapi import FastAPI, HTTPException, Request
from prefect import Flow, State, flow, get_run_logger, task
from prefect.blocks.notifications import SlackWebhook
@@ -101,13 +100,11 @@ async def run_agent(
settings=decorator_settings,
max_tool_calls=settings.max_tool_calls_per_turn,
):
# wrap agent run with learning context for persistent memory
async with learning(agent=f"slackbot-{user_context['user_id']}"):
result = await create_agent(model=settings.model_name).run(
user_prompt=cleaned_message,
message_history=conversation,
deps=user_context,
)
result = await create_agent(model=settings.model_name).run(
user_prompt=cleaned_message,
message_history=conversation,
deps=user_context,
)
finally:
_progress_message.reset(token)
_tool_usage_counts.reset(counts_token)
@@ -250,6 +247,7 @@ async def handle_message(payload: SlackPayload, db: Database):

user_context = build_user_context(
user_id=event.user,
user_question=cleaned_message,
thread_ts=thread_ts,
workspace_name=await get_workspace_domain(),
channel_id=event.channel or "unknown",
52 changes: 52 additions & 0 deletions examples/slackbot/src/slackbot/assets.py
@@ -4,9 +4,13 @@

from prefect.assets import Asset, AssetProperties, add_asset_metadata, materialize
from pydantic import BaseModel
from pydantic_ai import RunContext
from pydantic_ai.messages import ModelMessage, SystemPromptPart
from raggy.documents import Document
from raggy.vectorstores.tpuf import TurboPuffer

from marvin import cast_async
from slackbot.settings import settings
from slackbot.slack import get_channel_name
from slackbot.types import UserContext

@@ -70,6 +74,54 @@ def thread_summary_asset(
)


def user_facts_asset(user_context: UserContext) -> Asset:
user_id = user_context["user_id"]
workspace_name = user_context["workspace_name"]
bot_id = user_context["bot_id"]
return Asset(
key=f"slack://{workspace_name}/bot/{bot_id}/facts/{user_id}",
properties=AssetProperties(
name=f"User Facts {user_id}",
description=f"Facts learned about user {user_id} by bot {bot_id}",
owners=["slackbot"],
),
)


async def store_user_facts(ctx: RunContext[UserContext], facts: list[str]) -> str:
"""Store facts extracted from a Slack thread using context for namespacing."""

with TurboPuffer(
namespace=f"{settings.user_facts_namespace_prefix}{ctx.deps['user_id']}"
) as tpuf:
tpuf.upsert(documents=[Document(text=fact) for fact in facts])

user_facts = user_facts_asset(ctx.deps)

slack_thread = await slack_thread_asset(ctx.deps)
slackbot = slackbot_asset(ctx.deps)

@materialize(user_facts, asset_deps=[slack_thread, slackbot])
async def materialize_user_facts():
add_asset_metadata(
user_facts,
{
"user_id": ctx.deps["user_id"],
"fact_count": len(facts),
"timestamp": datetime.now().isoformat(),
"namespace": f"{settings.user_facts_namespace_prefix}{ctx.deps['user_id']}",
"thread_ts": ctx.deps["thread_ts"],
"workspace_name": ctx.deps["workspace_name"],
"channel_id": ctx.deps["channel_id"],
"bot_id": ctx.deps["bot_id"],
"facts": facts,
},
)
return f"Stored {len(facts)} facts about user {ctx.deps['user_id']} from thread {ctx.deps['thread_ts']}"

return await materialize_user_facts()


async def summarize_thread(
user_context: UserContext, conversation: list[ModelMessage]
) -> ThreadSummary:
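The namespacing scheme used by `store_user_facts` and `user_facts_asset` above can be isolated as two pure functions. This is a stdlib-only restatement for illustration, not code from the PR; the default prefix value is an assumption mirroring `settings.user_facts_namespace_prefix`.

```python
def user_facts_namespace(user_id: str, prefix: str = "user-facts-") -> str:
    # Each user's facts live in their own vector-store namespace,
    # derived from a configurable prefix plus the Slack user id.
    return f"{prefix}{user_id}"


def user_facts_asset_key(workspace: str, bot_id: str, user_id: str) -> str:
    # Asset keys are scoped per workspace, bot, and user so lineage
    # tracks which bot learned which facts about whom.
    return f"slack://{workspace}/bot/{bot_id}/facts/{user_id}"


# user_facts_namespace("U123") → 'user-facts-U123'
# user_facts_asset_key("acme", "B9", "U123") → 'slack://acme/bot/B9/facts/U123'
```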
50 changes: 49 additions & 1 deletion examples/slackbot/src/slackbot/core.py
@@ -17,8 +17,11 @@
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.providers import Provider
from pydantic_ai.settings import ModelSettings
from raggy.vectorstores.tpuf import TurboPuffer, query_namespace
from turbopuffer import NotFoundError

from slackbot._internal.templates import DEFAULT_SYSTEM_PROMPT
from slackbot.assets import store_user_facts
from slackbot.github import (
GitHubAuthError,
GitHubError,
@@ -133,13 +136,23 @@ def _insert():
@task(task_run_name="build user context for {user_id}")
def build_user_context(
user_id: str,
user_question: str,
thread_ts: str,
workspace_name: str,
channel_id: str,
bot_id: str,
) -> UserContext:
try:
user_notes = query_namespace(
query_text=user_question,
namespace=f"{settings.user_facts_namespace_prefix}{user_id}",
top_k=5,
)
except NotFoundError:
user_notes = "<No notes found>"
return UserContext(
user_id=user_id,
user_notes=user_notes,
thread_ts=thread_ts,
workspace_name=workspace_name,
channel_id=channel_id,
@@ -164,7 +177,6 @@ def create_agent(
UserContext, str
](
model=ai_model,
system_prompt=DEFAULT_SYSTEM_PROMPT,
model_settings=ModelSettings(temperature=settings.temperature),
tools=[
research_prefect_topic, # Tool for researching Prefect topics
@@ -177,6 +189,42 @@
deps_type=UserContext,
)

@agent.system_prompt
def personality_and_maybe_notes(ctx: RunContext[UserContext]) -> str:
system_prompt = DEFAULT_SYSTEM_PROMPT + (
f"\n\nUser notes: {ctx.deps['user_notes']}"
if ctx.deps["user_notes"]
else ""
)
print(f"System prompt: {system_prompt}")
Copilot AI (Dec 10, 2025): This debug print statement should be removed or replaced with proper logging using the run logger that's already available in the function scope.

Suggested change:
-        print(f"System prompt: {system_prompt}")
+        logger.info(f"System prompt: {system_prompt}")
return system_prompt
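The conditional concatenation in `personality_and_maybe_notes` above can be isolated as a pure function. This is an illustrative stdlib-only sketch, not code from the PR:

```python
def assemble_system_prompt(base: str, user_notes: str) -> str:
    # Append stored user notes to the base system prompt only when
    # notes exist; an empty string leaves the base prompt untouched.
    return base + (f"\n\nUser notes: {user_notes}" if user_notes else "")


# assemble_system_prompt("Base prompt", "") → 'Base prompt'
# assemble_system_prompt("Base prompt", "prefers K8s") → 'Base prompt\n\nUser notes: prefers K8s'
```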

@agent.tool
async def store_facts_about_user(
ctx: RunContext[UserContext], facts: list[str]
) -> str:
"""Store facts about the user that are useful for answering their questions."""
print(f"Storing {len(facts)} facts about user {ctx.deps['user_id']}")
# This creates an asset dependency: USER_FACTS depends on SLACK_MESSAGES
Comment on lines +207 to +208

Copilot AI (Dec 10, 2025): These debug print statements should be replaced with proper logging using get_run_logger() for consistency with the rest of the codebase.
message = await store_user_facts(ctx, facts)
print(message)
return message

@agent.tool
def delete_facts_about_user(ctx: RunContext[UserContext], related_to: str) -> str:
"""Delete facts about the user related to a specific topic."""
print(f"forgetting stuff about {ctx.deps['user_id']} related to {related_to}")
Copilot AI (Dec 10, 2025): These debug print statements should be replaced with proper logging using get_run_logger() for consistency with the rest of the codebase.

Suggested change:
-        print(f"forgetting stuff about {ctx.deps['user_id']} related to {related_to}")
+        get_run_logger().info(f"forgetting stuff about {ctx.deps['user_id']} related to {related_to}")
user_id = ctx.deps["user_id"]
with TurboPuffer(
namespace=f"{settings.user_facts_namespace_prefix}{user_id}"
) as tpuf:
vector_result = tpuf.query(related_to)
ids = [str(v.id) for v in vector_result.rows or []]
tpuf.delete(ids)
message = f"Deleted {len(ids)} facts about user {user_id}"
print(message)
Copilot AI (Dec 10, 2025): These debug print statements should be replaced with proper logging using get_run_logger() for consistency with the rest of the codebase.
return message

@agent.tool
async def create_discussion_and_notify(
ctx: RunContext[UserContext],
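The fallback in `build_user_context` above — query the user's facts namespace, and degrade to a placeholder when the namespace does not exist yet — can be sketched with stdlib-only stand-ins. `FAKE_STORE`, `query_namespace`, and `NotFoundError` here are illustrative substitutes for raggy's `query_namespace` and turbopuffer's `NotFoundError`, not the real APIs.

```python
class NotFoundError(Exception):
    """Stand-in for turbopuffer.NotFoundError."""


FAKE_STORE: dict[str, str] = {}  # namespace -> stored notes


def query_namespace(query_text: str, namespace: str, top_k: int = 5) -> str:
    # Stand-in for raggy.vectorstores.tpuf.query_namespace.
    if namespace not in FAKE_STORE:
        raise NotFoundError(namespace)
    return FAKE_STORE[namespace]


def get_user_notes(user_question: str, user_id: str, prefix: str = "user-facts-") -> str:
    # Mirrors build_user_context: a brand-new user has no facts
    # namespace yet, so degrade gracefully to a placeholder instead
    # of failing the whole context build.
    try:
        return query_namespace(user_question, namespace=f"{prefix}{user_id}", top_k=5)
    except NotFoundError:
        return "<No notes found>"
```

This keeps the first interaction with a user from erroring before any facts have been stored.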
10 changes: 6 additions & 4 deletions examples/slackbot/src/slackbot/prompts.py
@@ -30,6 +30,7 @@
- **Assume Prefect 3.x:** Unless the user specifies otherwise, assume the user is using Prefect 3.x. You can mention this assumption IF RELEVANT (e.g., "In Prefect 3.x, you would...").
- **Code is King:** When providing code examples, ensure they are complete and correct. Use your `verify_import_statements` tool's output to guide you.
- **Honesty Over Invention:** If your tools don't find a clear answer, say so. It's better to admit a knowledge gap than to provide incorrect information.
- **Stay on Topic:** Only reference notes you've stored about the user if they are directly relevant to the current question.
- **Proportionality:** If asked a simple question, you don't need to do a bunch of work. Just answer the question once you find it. However, feel free to dig into broad questions.

## CRITICAL - Removed/Deprecated Features
@@ -41,17 +42,18 @@
If a user explicitly mentions using Prefect 2.x, that's fine, but recommend upgrading to 3.x or using workers in 2.x.

## Tool Usage Protocol
You have a suite of tools to gather information. Use them methodically.
You have a suite of tools to gather and store information. Use them methodically.

1. **For Technical/Conceptual Questions:** Use `research_prefect_topic`. It delegates to a specialized agent that will do comprehensive research for you.
2. **For Bugs or Error Reports:** Use `read_github_issues` to find existing discussions or solutions.
3. **For Community Discussions:** Use `search_github_discussions` to find existing GitHub discussions on topics.
4. **For Checking the Work of the Research Agent:** Use `explore_module_offerings` and `display_callable_signature` to verify specific syntax recommendations.
5. **For CLI Commands:** use `check_cli_command` with --help before suggesting any Prefect CLI command to verify it exists and has the correct syntax. This prevents suggesting non-existent commands.
4. **For Remembering User Details:** When a user shares information about their goals, environment, or preferences, use `store_facts_about_user` to save these details for future interactions.
5. **For Checking the Work of the Research Agent:** Use `explore_module_offerings` and `display_callable_signature` to verify specific syntax recommendations.
6. **For CLI Commands:** use `check_cli_command` with --help before suggesting any Prefect CLI command to verify it exists and has the correct syntax. This prevents suggesting non-existent commands.
- **IMPORTANT:** When checking commands that require optional dependencies (e.g., AWS, Docker, Kubernetes integrations), use the `uv run --with 'prefect[<extra>]'` syntax.
- Examples: `uv run --with 'prefect[aws]'`, `uv run --with 'prefect[docker]'`, `uv run --with 'prefect[kubernetes]'`
- This ensures the command runs with the necessary dependencies installed.
6. **For Creating GitHub Discussions (USE SPARINGLY):** Use `create_discussion_and_notify` only when:
7. **For Creating GitHub Discussions (USE SPARINGLY):** Use `create_discussion_and_notify` only when:
- The thread contains valuable insights, solutions, or patterns not documented elsewhere
- You've searched both issues and discussions and found no existing coverage of the topic
- The conversation would clearly benefit the broader Prefect community
19 changes: 8 additions & 11 deletions examples/slackbot/src/slackbot/settings.py
@@ -1,9 +1,8 @@
import os
from pathlib import Path
from typing import ClassVar
from typing import ClassVar, Literal

from prefect.blocks.system import Secret
from prefect.exceptions import ObjectNotFound
from prefect.variables import Variable
from pydantic import Field, field_validator, model_validator
from pydantic_settings import BaseSettings, SettingsConfigDict
@@ -57,9 +56,13 @@ def validate_log_level(cls, v: str) -> str:
default="anthropic-api-key",
description="Name of the Prefect secret block containing Anthropic API key",
)
letta_api_key_secret_name: str = Field(
default="letta-api-key",
description="Name of the Prefect secret block containing Letta API key",

vector_store_type: Literal["turbopuffer"] = Field(
default="turbopuffer", description="Type of vector store to use"
)
user_facts_namespace_prefix: str = Field(
default="user-facts-",
description="Prefix for user facts namespaces in vector store",
)

# Development settings
@@ -91,12 +94,6 @@ def _apply_post_validation_defaults(self) -> "SlackbotSettings":
os.environ["TURBOPUFFER_API_KEY"] = api_key
except Exception:
pass # If secret doesn't exist, turbopuffer will handle the error
if not os.getenv("LETTA_API_KEY"):
try:
api_key = Secret.load(self.letta_api_key_secret_name, _sync=True).get() # type: ignore
os.environ["LETTA_API_KEY"] = api_key
except ObjectNotFound:
pass # If secret doesn't exist, learning-sdk won't be used
if not self.admin_slack_user_id:
self.admin_slack_user_id = Variable.get("admin-slack-id", _sync=True)
return self
1 change: 1 addition & 0 deletions examples/slackbot/src/slackbot/types.py
@@ -3,6 +3,7 @@

class UserContext(TypedDict):
user_id: str
user_notes: str
thread_ts: str
workspace_name: str
channel_id: str
3 changes: 0 additions & 3 deletions pyproject.toml
@@ -96,9 +96,6 @@ extend-select = ["I"]
[tool.ruff.lint.per-file-ignores]
"__init__.py" = ["F401", "I001", "RUF013"]

[tool.uv]
prerelease = "allow"

[tool.uv.sources]
slackbot = { workspace = true }
