submit_ollama_pairs_live() is a robust row-wise wrapper around
ollama_compare_pair_live(). It takes a tibble of pairs (ID1 / text1 /
ID2 / text2), submits each pair to a local (or remote) Ollama server,
and collects the results.
Usage
submit_ollama_pairs_live(
  pairs,
  model,
  trait_name,
  trait_description,
  prompt_template = set_prompt_template(),
  host = getOption("pairwiseLLM.ollama_host", "http://127.0.0.1:11434"),
  verbose = TRUE,
  status_every = 1,
  progress = TRUE,
  think = FALSE,
  num_ctx = 8192L,
  include_raw = FALSE,
  save_path = NULL,
  parallel = FALSE,
  workers = 1,
  ...
)
Arguments
- pairs
Tibble or data frame with at least the columns ID1, text1, ID2, text2. Typically created by make_pairs(), sample_pairs(), and randomize_pair_order(); a hand-built sketch follows after this list.
- model
Ollama model name (for example "mistral-small3.2:24b", "qwen3:32b", "gemma3:27b").
- trait_name
Trait name to pass to ollama_compare_pair_live().
- trait_description
Trait description to pass to ollama_compare_pair_live().
- prompt_template
Prompt template string, typically from set_prompt_template().
- host
Base URL of the Ollama server. Defaults to getOption("pairwiseLLM.ollama_host", "http://127.0.0.1:11434").
- verbose
Logical; if TRUE, prints status, timing, and result summaries.
- status_every
Integer; print status and timing for every status_every-th pair. Defaults to 1 (every pair). Errors are always printed.
- progress
Logical; if TRUE, shows a textual progress bar.
- think
Logical; see ollama_compare_pair_live() for behavior. When TRUE and the model name starts with "qwen", the temperature is set to 0.6; otherwise the temperature remains 0.
- num_ctx
Integer; context window to use via options$num_ctx. The default is 8192L.
- include_raw
Logical; if TRUE, each row of the returned tibble includes a raw_response list-column with the parsed JSON body from Ollama. Note: raw responses are not saved to the incremental CSV file.
- save_path
Character string; optional file path (e.g., "output.csv") for saving results incrementally. If the file exists, the function reads it to identify and skip pairs that have already been processed (resume mode). Requires the readr package.
- parallel
Logical; if TRUE, enables parallel processing using future.apply. Requires the future and future.apply packages. Defaults to FALSE.
- workers
Integer; number of parallel workers (threads) to use when parallel = TRUE. Defaults to 1.
- ...
Reserved for future extensions and forwarded to ollama_compare_pair_live().
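For reference, the required input layout can also be built by hand. A minimal sketch (the IDs and texts are invented for illustration; real workflows normally use make_pairs() and friends as noted above):

library(tibble)

# Hand-built stand-in for the output of make_pairs() |> sample_pairs() |>
# randomize_pair_order(); only the four required columns are needed.
pairs <- tibble(
  ID1   = c("S01", "S02"),
  text1 = c("First writing sample ...", "Second writing sample ..."),
  ID2   = c("S03", "S04"),
  text2 = c("Third writing sample ...", "Fourth writing sample ...")
)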
Value
A list containing two elements:
- results
A tibble with one row per successfully processed pair.
- failed_pairs
A tibble containing the rows from pairs that failed to process (due to API errors or timeouts), along with an error_message column; see the retry sketch below.
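Because failed_pairs preserves the original pair columns, failures can be resubmitted directly. A minimal retry sketch, assuming td holds a trait description as in the Examples section:

res <- submit_ollama_pairs_live(pairs, model = "mistral-small3.2:24b",
                                trait_name = td$name,
                                trait_description = td$description)

# Resubmit only the pairs that errored; drop error_message so the input
# matches the expected ID1/text1/ID2/text2 layout.
if (nrow(res$failed_pairs) > 0) {
  retry <- res$failed_pairs[, c("ID1", "text1", "ID2", "text2")]
  res2 <- submit_ollama_pairs_live(retry, model = "mistral-small3.2:24b",
                                   trait_name = td$name,
                                   trait_description = td$description)
  res$results <- rbind(res$results, res2$results)
}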
Details
This function offers:
- Incremental Saving: Results are written to a CSV file as they complete. If the process is interrupted, re-running the function with the same save_path will automatically skip pairs that were already successfully processed.
- Parallel Processing: Uses the future package to process multiple pairs simultaneously. Note: since Ollama typically runs locally on the GPU, parallel processing may degrade performance or cause out-of-memory errors unless the hardware can handle concurrent requests, so the defaults are sequential. A usage sketch follows below.
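For example, a cautious parallel run with incremental saving might look like the following (pairs and td as in the Examples section; two workers only, since a single local GPU rarely handles more concurrent requests well):

res <- submit_ollama_pairs_live(
  pairs = pairs,
  model = "mistral-small3.2:24b",
  trait_name = td$name,
  trait_description = td$description,
  save_path = "ollama_results.csv", # re-running resumes from this file
  parallel = TRUE,
  workers = 2
)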
Temperature and context length are controlled as follows:
- By default, temperature = 0 for all models.
- For Qwen models (model names beginning with "qwen") with think = TRUE, temperature is set to 0.6.
- The context window is set via options$num_ctx, which defaults to 8192 but may be overridden via the num_ctx argument.
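Taken together, these rules amount to logic along the following lines (a sketch of the documented behavior, not the package's actual source; resolve_sampling_options() is a hypothetical helper):

resolve_sampling_options <- function(model, think = FALSE, num_ctx = 8192L) {
  # Qwen models with think = TRUE get temperature 0.6; everything else gets 0.
  temperature <- if (think && startsWith(model, "qwen")) 0.6 else 0
  list(temperature = temperature, num_ctx = num_ctx)
}

resolve_sampling_options("qwen3:32b", think = TRUE)   # temperature 0.6
resolve_sampling_options("gemma3:27b", think = TRUE)  # temperature 0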
In most user-facing workflows, it is more convenient to call
submit_llm_pairs() with backend = "ollama" rather than using
submit_ollama_pairs_live() directly.
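Assuming submit_llm_pairs() accepts the same core arguments (its exact signature is not reproduced here), the equivalent call would be roughly:

res <- submit_llm_pairs(
  pairs = pairs,
  backend = "ollama",
  model = "mistral-small3.2:24b",
  trait_name = td$name,
  trait_description = td$description
)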
As with ollama_compare_pair_live(), this function assumes that:
- An Ollama server is running and reachable at host.
- The requested models have been pulled in advance (for example, ollama pull mistral-small3.2:24b).
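Both assumptions can be checked from R before a long run. A minimal sketch using Ollama's standard /api/tags endpoint (which lists locally pulled models) and the jsonlite package:

host <- getOption("pairwiseLLM.ollama_host", "http://127.0.0.1:11434")

# Errors if the server is unreachable; otherwise returns the pulled models.
tags <- jsonlite::fromJSON(paste0(host, "/api/tags"))
"mistral-small3.2:24b" %in% tags$models$name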
See also
- ollama_compare_pair_live() for single-pair Ollama comparisons.
- submit_llm_pairs() for backend-agnostic comparisons over tibbles of pairs.
Examples
if (FALSE) { # \dontrun{
# Requires a running Ollama server and locally available models.
data("example_writing_samples", package = "pairwiseLLM")

pairs <- example_writing_samples |>
  make_pairs() |>
  sample_pairs(n_pairs = 5, seed = 123) |>
  randomize_pair_order(seed = 456)

td <- trait_description("overall_quality")
tmpl <- set_prompt_template()

# Live comparisons with incremental saving
res_mistral <- submit_ollama_pairs_live(
  pairs = pairs,
  model = "mistral-small3.2:24b",
  trait_name = td$name,
  trait_description = td$description,
  prompt_template = tmpl,
  save_path = "ollama_results.csv",
  verbose = TRUE
)

# Access results
res_mistral$results
} # }