modules package
Submodules
modules.llava_benchmark module
- class modules.llava_benchmark.LlavaBenchmark(benchmarks: List)
Bases: object
LLaVA Benchmark class for processing media and storing results.
- benchmarks
List of benchmark instances.
- Type:
list
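A minimal usage sketch follows. The exact structure of each benchmark instance is not documented in this reference, so the dict entries below are hypothetical stand-ins:

```python
from modules.llava_benchmark import LlavaBenchmark

# Hypothetical benchmark specs; the real benchmark objects and their
# fields are assumptions, not taken from this reference.
benchmark_specs = [
    {"model": "llava", "prompt": "Describe this image.", "media": "cat.jpg"},
]

runner = LlavaBenchmark(benchmarks=benchmark_specs)
print(runner.benchmarks)  # the documented attribute: the stored list
```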
modules.ollama module
- class modules.ollama.Ollama
Bases: object
- static is_binary_installed()
Checks if the ollama binary is present on the system.
- Returns:
True if ollama binary is present, False otherwise.
- Return type:
bool
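A plausible implementation sketch, assuming the check is a simple PATH lookup (the project's actual code may differ):

```python
import shutil

def is_binary_installed() -> bool:
    """Return True if an 'ollama' executable is found on the PATH."""
    return shutil.which("ollama") is not None
```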
- static is_model_installed(model: str)
Checks if a given model is installed.
- Parameters:
model (str) – The name of the model to check.
- Returns:
True if the model is installed, False otherwise.
- Return type:
bool
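One way such a check can be implemented is by parsing the output of `ollama list`, which prints the locally installed models. The sketch below assumes that approach rather than reproducing the project's actual code:

```python
import subprocess

def is_model_installed(model: str) -> bool:
    """Return True if `model` appears in the output of `ollama list`."""
    result = subprocess.run(
        ["ollama", "list"], capture_output=True, text=True, check=False
    )
    if result.returncode != 0:
        return False
    # Skip the header row; each remaining line starts with a model name
    # such as "llava:latest".
    return any(
        line.split()[0].startswith(model)
        for line in result.stdout.splitlines()[1:]
        if line.strip()
    )
```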
- static print_prompt(prompt: str) → None
Print the formatted prompt.
- Parameters:
prompt (str) – The prompt to print.
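Usage is a one-liner; how the prompt is formatted before printing is not documented here:

```python
from modules.ollama import Ollama

Ollama.print_prompt("Describe the objects in this image.")
```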
- static read_yaml(yaml_file_path: str)
Reads a YAML file and returns its content as a dictionary.
- Parameters:
yaml_file_path (str) – Path to the YAML file.
- Returns:
Parsed YAML data.
- Return type:
dict
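A minimal sketch using PyYAML's `safe_load`, the conventional way to parse a YAML file into a dictionary; any error handling the project performs is not shown:

```python
import yaml

def read_yaml(yaml_file_path: str) -> dict:
    """Parse a YAML file and return its contents as a dictionary."""
    with open(yaml_file_path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f)
```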
- static run_benchmark(model: str, prompt: str, media_file_path: str) → CompletedProcess
Run a benchmark using the specified model, prompt, and media file path.
- Parameters:
model (str) – The name of the model to use.
prompt (str) – The prompt for the benchmark.
media_file_path (str) – The path to the media file.
- Returns:
The result of the benchmark execution.
- Return type:
subprocess.CompletedProcess
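Given the `subprocess.CompletedProcess` return type, the benchmark presumably shells out to the Ollama CLI. The sketch below assumes the media path is appended to the prompt, which is one way `ollama run` accepts image input; the exact command line the project builds is an assumption:

```python
import subprocess

def run_benchmark(
    model: str, prompt: str, media_file_path: str
) -> subprocess.CompletedProcess:
    """Invoke `ollama run` with the prompt and media file, capturing output."""
    # --verbose makes the CLI print timing statistics (e.g. eval rate),
    # which is the kind of data a benchmark would collect. Whether the
    # project relies on this flag is an assumption.
    full_prompt = f"{prompt} {media_file_path}"
    return subprocess.run(
        ["ollama", "run", model, full_prompt, "--verbose"],
        capture_output=True,
        text=True,
        check=False,
    )
```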