A blendtutor lesson package with interactive coding exercises and AI-powered feedback.
Open `inst/lessons/pseudocode_planning.yaml` and fill in:

- `lesson_name` — the display title students see
- `description` — a short summary for lesson listings
- `exercise$prompt` — what the student should do
- `exercise$llm_evaluation_prompt` — how the LLM grades submissions (must include `{student_code}` so blendtutor can insert the student's code)
See the scaffolded file for the full schema with all optional fields.
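For orientation, a filled-in lesson might look roughly like the sketch below. The field names come from the list above, but the prompt text and nesting details are illustrative assumptions — the scaffolded file is the authoritative schema.

```yaml
# Hypothetical example — defer to the scaffolded file for the full schema
lesson_name: "Pseudocode Planning"
description: "Practice sketching a plan in pseudocode before writing real code."
exercise:
  prompt: |
    Write pseudocode that outlines the steps to compute the mean of a
    numeric vector.
  llm_evaluation_prompt: |
    The student was asked to write pseudocode for computing the mean of a
    numeric vector. Their submission:

    {student_code}

    Mark the submission correct if it describes summing the values and
    dividing by the count; otherwise mark it incorrect and explain why.
```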
```r
blendtutor::validate_lesson("inst/lessons/pseudocode_planning.yaml")
```

Fix any `[FAIL]` items before moving on.
Open `evals/eval_pseudocode_planning.R` and fill in the `# TODO` sections:

- `EXERCISE` — paste your exercise prompt
- `eval_data` — add input/target pairs (at least 2 correct and 3 incorrect submissions covering common failure modes)
- `system_prompt_for()` — write the evaluation criteria specific to your exercise
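One plausible shape for those sections is sketched below. The names `EXERCISE`, `eval_data`, and `system_prompt_for()` come from the scaffolded script, but the data structure and grading labels here are assumptions — mirror whatever structure the scaffolded script actually uses.

```r
# Hypothetical sketch — match the structure already present in the scaffold
EXERCISE <- "Write pseudocode that outlines the steps to compute the mean of a numeric vector."

# input = a student submission, target = the grade the LLM should assign
eval_data <- list(
  list(input = "sum the values, then divide by how many there are",
       target = "correct"),
  list(input = "1. set total to 0\n2. add each value to total\n3. divide total by n",
       target = "correct"),
  list(input = "mean(x)",                                # real code, not pseudocode
       target = "incorrect"),
  list(input = "sort the values and take the middle one", # median, not mean
       target = "incorrect"),
  list(input = "add up all the values",                   # missing the division step
       target = "incorrect")
)

system_prompt_for <- function(exercise) {
  paste0(
    "You are grading pseudocode submissions for this exercise: ", exercise, "\n",
    "Grade 'correct' only if the submission (a) is pseudocode rather than ",
    "runnable code, (b) sums the values, and (c) divides by the count. ",
    "Otherwise grade 'incorrect'."
  )
}
```

Picking incorrect examples that each probe a distinct failure mode (real code, wrong algorithm, incomplete plan) gives the eval more diagnostic power than several near-duplicates.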
Then set your API key and run:
```sh
# Add to .Renviron (restart R after)
echo 'FIREWORKS_API_KEY=your-key-here' >> .Renviron
```

Note: `.Renviron` is automatically added to `.gitignore` to protect your API key from being accidentally committed.
```r
source("evals/eval_pseudocode_planning.R")
```

If the LLM misclassifies a submission, tweak the criteria in `system_prompt_for()` and re-run until you're happy with the accuracy.
```r
devtools::install()
blendtutor::invalidate_lesson_cache()
blendtutor::list_lessons()
blendtutor::start_lesson("pseudocode_planning")
```

To scaffold another lesson and its matching eval:

```r
blendtutor::use_blendtutor_lesson("new_lesson_name")
blendtutor::use_blendtutor_evals("new_lesson_name")
```

This creates a new YAML file in `inst/lessons/` and a matching eval script in `evals/`.
```
justenougheng/
├── DESCRIPTION      # blendtutor in Imports
├── inst/lessons/    # Lesson YAML files
├── evals/           # Eval scripts for testing grading accuracy
└── .claude/skills/  # Claude Code skill for guided help
```
If you're using Claude Code, run `/help-me-build` for step-by-step guidance on writing lessons, evaluation prompts, and evals.