
Log resolved model/config when using .prompt.yml #203

@mingzaily


Summary

When actions/ai-inference runs in structured .prompt.yml mode, the workflow log still shows the action input defaults in the with: block, for example:

```yaml
with:
  model: openai/gpt-4o
  system-prompt: You are a helpful assistant
  max-tokens: 200
```

However, the implementation later loads the .prompt.yml file and resolves the effective model and model parameters from promptConfig.

That means a run can actually use values from the prompt file while the visible log output strongly suggests the defaults are in effect.

Why this is confusing

From the repo:

  • action.yml defines defaults for model, system-prompt, and max-tokens
  • dist/index.js then does:

```js
const modelName = promptConfig?.model || core.getInput('model')
const maxCompletionTokensInput = promptConfig?.modelParameters?.maxCompletionTokens ?? core.getInput('max-completion-tokens')
const maxTokensInput = promptConfig?.modelParameters?.maxTokens ?? core.getInput('max-tokens')
const temperature = promptConfig?.modelParameters?.temperature ?? ...
```
So behavior is correct, but observability is misleading.
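To make the precedence concrete, here is a minimal runnable sketch of that resolution pattern, using stand-ins for `@actions/core` and the parsed prompt file (this is illustrative, not the action's actual code):

```javascript
// Stand-in for @actions/core: returns the action input defaults from action.yml.
const core = {
  getInput: (name) =>
    ({ model: 'openai/gpt-4o', 'max-tokens': '200' }[name] ?? ''),
}

// Stand-in for a parsed .prompt.yml file.
const promptConfig = {
  model: 'openai/gpt-4.1',
  modelParameters: { maxCompletionTokens: 900 },
}

// Prompt-file values win over action inputs, so the with: block defaults
// shown in the log are not necessarily what the run actually uses.
const modelName = promptConfig?.model || core.getInput('model')
const maxTokens = promptConfig?.modelParameters?.maxTokens ?? core.getInput('max-tokens')

console.log(modelName) // openai/gpt-4.1 — the prompt-file value, not the input default
console.log(maxTokens) // '200' — falls back to the input default because the prompt file omits maxTokens
```

The run here uses `openai/gpt-4.1` even though the log's `with:` block would show `openai/gpt-4o`, which is exactly the mismatch described above.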

Reproduction pattern

Use a .prompt.yml file with values such as:

```yaml
model: openai/gpt-4.1
responseFormat: json_schema
modelParameters:
  temperature: 0.0
  maxCompletionTokens: 900
```

and call the action like this:

```yaml
- uses: actions/ai-inference@v1
  with:
    prompt-file: ./.github/prompts/sample.prompt.yml
    input: |
      foo: bar
```

The log shows `Using prompt YAML file format`, but the earlier `with:` block in the run output still displays the input defaults (`openai/gpt-4o`, `You are a helpful assistant`, `max-tokens: 200`).

This makes it hard to tell which model/config was actually used.

Expected behavior

Any of the following would address it:

  1. Add an explicit log section after loading .prompt.yml, for example:
    • resolved model
    • resolved response format
    • resolved max completion tokens / max tokens
    • resolved temperature / topP
  2. Or document clearly in the README that the with: block shows action input defaults and is not the final resolved config in .prompt.yml mode.
  3. Ideally both.
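Option 1 could look something like the sketch below. The helper name `logResolvedConfig` and the `resolved` object shape are hypothetical; only `core.info` is the real `@actions/core` API:

```javascript
// Hypothetical helper: print the resolved values right after the prompt
// file is parsed, so the log reflects what the run will actually use.
function logResolvedConfig(core, resolved) {
  core.info('Resolved inference config (.prompt.yml mode):')
  core.info(`  model: ${resolved.model}`)
  core.info(`  responseFormat: ${resolved.responseFormat ?? 'text'}`)
  core.info(`  maxCompletionTokens: ${resolved.maxCompletionTokens ?? 'n/a'}`)
  core.info(`  temperature: ${resolved.temperature ?? 'n/a'}`)
}

// Stand-in for @actions/core so the sketch runs outside a workflow.
const core = { info: (msg) => console.log(msg) }

logResolvedConfig(core, {
  model: 'openai/gpt-4.1',
  responseFormat: 'json_schema',
  maxCompletionTokens: 900,
  temperature: 0.0,
})
```

A block like this in the run log would remove any ambiguity about which model and parameters were in effect.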

Notes

I do not think the inference behavior is wrong here. This looks like a logging / diagnostics gap specific to .prompt.yml mode.
