Chinese README · Technical Report · mywork Guide · Troubleshooting
Turn a full guide into a study path, fold interview questions into footnotes, turn your own work into answers you can actually speak out loud, and then let a harsh interviewer keep drilling deeper.
OfferPotato is a local-first interview-preparation workspace.
You bring three kinds of input:
- a study guide
- interview question banks
- your own work materials such as papers, code, design docs, experiment logs, and notes
OfferPotato turns them into a study-and-practice website you can really use:
- the guide stays in its original learning order
- interview questions sink to chapter endings and highlight the matching knowledge points
- each question can generate a personalized answer that cites both guide knowledge and mywork
- every question can also enter interviewer pressure mode, where a deliberately tough interviewer keeps following up
If you feel stuck because:
- you have too many docs and no idea what to study first
- interviewers keep asking “tell me about your project” and you do not know how to connect your experience to core concepts
- you can answer surface-level questions but break down when the interviewer keeps digging
- you want Codex to help, but you do not want to hand-build the indexing, prompting, batching, and UI yourself
then this repo is probably worth a try.
| You provide | OfferPotato does | You get |
|---|---|---|
| guide documents | preserves order, extracts anchors, builds a reading path | a guide-centered study site |
| interview question banks | deduplicates, categorizes, and links questions to anchors and chapters | a clear map from concepts to questions |
| mywork/ | scans recursively, grades relevance, and cites conservatively | answers that sound like your own experience when evidence exists |
| Codex | runs managed jobs with live refresh | translation, answer generation, import, and follow-up workflows |
OfferPotato is not limited to LLM interviews. The same pattern works for any domain where you can provide study material, interview questions, and candidate work evidence.
- it does not flatten everything into a giant list of questions
- it does not force every answer to sound project-backed when your work is actually unrelated
- it combines the guide, question bank, mywork, and Codex inside one workspace
- it does not stop at generating an answer; it can keep pushing you in interviewer pressure mode
- **Guide-centered study flow**: Read the guide in source order as your main learning path.
- **Interview backlinks and knowledge highlights**: Questions appear at the end of each chapter and at the exact knowledge points they hit.
- **Personalized answer generation**: Answers combine guide context, question context, and relevant mywork evidence.
- **Interviewer pressure mode**: A deliberately harsh interviewer can keep drilling into implementation details, tradeoffs, metrics, and failure cases.
- **Managed Codex window**: Use Codex inside the site with file attachment, current-document reference, and model or effort switching.
- **Visual first-run setup and task center**: Configure sources, build indexes, and monitor jobs from the UI.
- **Interview import**: Add new interview content via pasted text or screenshot OCR and persist it as question-bank content.
Even if you have not prepared your own data yet, you can still clone the repo and explore the built-in public examples first.
```bash
git clone https://github.com/ly-xxx/Offer-Potato.git
cd Offer-Potato
npm install
npm run setup:serve
```

The default flow will:
- verify that `codex` or `codex-cli` is available
- sync public sources from config
- build the SQLite index
- build the frontend and backend
- launch on port `6324`
Then open `http://127.0.0.1:6324`, or the LAN URL printed by the script.
If you do not yet have a practical way to access codex / codex-cli, many users look for a codex relay or proxy service. For light usage, some people keep the daily cost around 1 to 2 CNY, but pricing, stability, and compliance depend on the provider and are yours to evaluate.
The guide, highlighted knowledge points, chapter-level interview questions, mywork evidence, and the floating Codex window all live in one place.
Global search is not a separate dead-end page. It keeps the guide, question bank, and Codex in one workspace.
| Search can hit both interview questions and guide chapters | Once you find a hit, you can jump straight back into the guide and keep asking Codex |
|---|---|
| ![]() | ![]() |
| Before edit | Agent editing in progress | Edit completed |
|---|---|---|
| ![]() | ![]() | ![]() |
This is not just a terminal embedded in a browser. It is a managed workflow that ties together the current guide page, file references, instruction input, and live write-back.
Personalized answers are not one-line outputs. They are structured answer packages you can actually speak from.
| 20-second opener, project basis, and direct answer | Knowledge skeleton, missing basics, and generation history | High-probability next follow-ups, tracebacks, and interviewer entry |
|---|---|---|
| ![]() | ![]() | ![]() |
The system does not stop at a “final answer”. It keeps the evidence trail, project angle, knowledge map, follow-up questions, and generation history in the same answer view.
| Round one: when you say you do not know, it still forces you to expose your reasoning path | Round two: after you answer, it keeps drilling deeper |
|---|---|
| ![]() | ![]() |
If you thought answer generation was the finish line, this mode quickly turns it into an actual mock interview. The first round does not let “I do not know” end the exchange; the second round keeps testing your detail level, boundaries, and prioritization even after you respond.
| Mist | Slate | Paper |
|---|---|---|
| ![]() | ![]() | ![]() |
The repo already ships with public example content so a fresh clone can start immediately:
- `sources/documents/llm-agent-interview-guide`
- `sources/question-banks/llm-interview-questions`
- `sources/question-banks/qa-hub`
See docs/SOURCES.en.md for upstream attribution.
For your own data, the recommended layout is:
- `sources/documents/` for study guides
- `sources/question-banks/` for interview question banks
- `./mywork/` for your own projects, papers, code, and notes
mywork/ stays out of Git by default.
Useful inputs include:
- project READMEs
- paper PDFs or drafts
- code directories
- notebooks
- experiment logs
- technical design notes
- debugging and retrospective notes
See docs/MYWORK.en.md for suggested organization.
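As one concrete starting point, a minimal `mywork/` layout could be sketched like this (the subdirectory names are purely illustrative, not something OfferPotato requires):

```shell
# A hypothetical mywork/ skeleton; adapt the names to your own materials.
mkdir -p mywork/code/my-agent   # code directories
mkdir -p mywork/papers          # paper PDFs or drafts
mkdir -p mywork/notes           # experiment logs, design and retrospective notes
ls mywork
```

After dropping real files into these folders, bind `mywork` in Settings and start an indexing job.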
OfferPotato handles mywork conservatively:
- it checks whether a directory looks like a real project before deep indexing
- it stops early on empty shells or structurally mismatched folders
- it only recurses deeply when the materials can support a coherent project profile
- it down-ranks weakly related projects instead of force-matching them
That is why work evidence is graded as `direct`, `adjacent`, or `none`.
The goal is honest personalization, not fake project matching.
Recommended sequence:
- run `npm run setup:serve`
- open the site
- review the default public sources in Settings
- bind `mywork` to your own directory, or keep `./mywork`
- start an indexing job from Settings and watch progress in Tasks
- begin from the guide view, then generate answers or open interviewer pressure mode
If you do not want the default sources, Settings can switch them to:
- local directories
- Git repositories
OfferPotato auto-discovers this structure:
```
sources/
├── documents/
│   └── <your-guide-source>/
└── question-banks/
    └── <your-question-bank-source>/
```
In practice, adding a new public source often means placing a new directory there, saving Settings, and rebuilding the index.
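A minimal sketch of that flow for a hypothetical new source (the directory names here are made up; after copying your files in, rebuild the index, for example with `npm run build:data`):

```shell
# Create the auto-discovered layout for a hypothetical new source pair.
mkdir -p sources/documents/my-new-guide        # your guide files go here
mkdir -p sources/question-banks/my-new-bank    # your question-bank files go here
ls sources/documents sources/question-banks
```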
| Command | Purpose |
|---|---|
| `npm run bootstrap` | sync Git-backed public sources declared in config |
| `npm run build:data` | build the SQLite index |
| `npm run refresh:data` | refresh sources and rebuild the index |
| `npm run batch:translate-questions` | batch-translate interview questions |
| `npm run batch:generate` | batch-generate personalized answers |
| `npm run batch:codex` | execute batch jobs through Codex |
| `npm run build` | build the frontend and backend |
| `npm run start` | start the production server |
| `npm run setup:serve` | install, index, build, and launch in one command |
| `npm run clean:data` | clean databases, generated answers, caches, and intermediate artifacts |
- `docs/TECHNICAL_REPORT.en.md`: architecture, managed agents, skills, data flow, and persistence
- `docs/MYWORK.en.md`: how to organize `mywork`
- `docs/CONFIGURATION.en.md`: source configuration and runtime overrides
- `docs/CLI_AGENT.en.md`: embedded CLI agent behavior
- `docs/TROUBLESHOOTING.en.md`: troubleshooting notes
The repo follows a “public base + private workset” publishing model:
- public guides and question banks stay in the repo
- `mywork/` stays out of Git by default
- `config/*.runtime.json` stays out of Git by default
- databases, generated answers, and model caches should not be committed
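Read together, those rules suggest a `.gitignore` along these lines (an illustrative sketch, not the repo's actual file; check the shipped `.gitignore` for the authoritative list):

```gitignore
# private workset: never committed
mywork/
config/*.runtime.json

# databases, generated answers, and model caches should not be committed either;
# the exact paths for those artifacts are repo-specific
```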
Before publishing or sharing the repo, read:















