EnviroLLM

Your toolkit for tracking, benchmarking, and optimizing resource usage of local LLMs.

The Problem

Users running LLMs locally lack tools to measure resource usage and energy impact. Without that visibility, it's difficult to make informed decisions about model selection, optimization, and sustainable AI practices.

The Solution

Get started with one command:

npx envirollm start
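
No global install is needed: npx fetches the envirollm package from the npm registry on first use and runs its CLI directly.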

Use the CLI directly or alongside the dashboards:

System Monitoring

Track your system's performance across hardware setups. Monitor CPU, GPU, and memory usage in real time and get general optimization recommendations.
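
Conceptually, real-time monitoring is periodic sampling of OS counters. Below is a minimal TypeScript sketch of that technique using only Node's built-in os module; it illustrates the general idea, not EnviroLLM's actual implementation:

    import { loadavg, totalmem, freemem } from "node:os";

    // Sample coarse system metrics every 5 seconds.
    setInterval(() => {
      const [load1] = loadavg(); // 1-minute CPU load average (POSIX; reports zeros on Windows)
      const totalGiB = totalmem() / 2 ** 30;
      const usedGiB = (totalmem() - freemem()) / 2 ** 30;
      console.log(`load ${load1.toFixed(2)} | memory ${usedGiB.toFixed(1)}/${totalGiB.toFixed(1)} GiB`);
    }, 5_000);

GPU utilization is not exposed by the os module; reading it typically requires vendor tooling such as NVIDIA's NVML, which is part of why a dedicated monitor is useful.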

Model Benchmarking

Benchmark local LLMs and compare performance. Test inference speed, resource usage, and energy efficiency across different models and prompts.
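
To make "inference speed" concrete, the sketch below times tokens per second against a local Ollama server. This is an assumption for illustration; the endpoint, the llama3.2 model name, and the tokensPerSecond helper are not part of EnviroLLM's documented interface. Ollama's /api/generate response reports eval_count (generated tokens) and eval_duration (nanoseconds):

    // Requires Node 18+ for built-in fetch.
    async function tokensPerSecond(model: string, prompt: string): Promise<number> {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model, prompt, stream: false }),
      });
      const data = await res.json();
      return data.eval_count / (data.eval_duration / 1e9);
    }

    tokensPerSecond("llama3.2", "Summarize the water cycle in two sentences.")
      .then((tps) => console.log(`${tps.toFixed(1)} tokens/sec`));

Running the same prompt set across several models is what makes the numbers comparable in the views described below.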

What You Can Expect

Model Benchmarking

Test models with custom or preset prompts. Compare energy consumption, speed, and quality across different models.

[Screenshot: benchmark interface showing task presets and model selection]

Side-by-Side Comparisons

Visualize performance metrics and identify the best model for your specific use case.
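
A standard way to put energy and speed on a single axis, though not necessarily the exact metric EnviroLLM charts, is energy per generated token: average power draw times wall-clock time, divided by tokens produced.

    // Example: 45 W average draw for 20 s while generating 600 tokens
    // gives 900 J total, i.e. 1.5 J per token.
    function joulesPerToken(avgWatts: number, seconds: number, tokens: number): number {
      return (avgWatts * seconds) / tokens;
    }
    console.log(joulesPerToken(45, 20, 600)); // 1.5

Lower is better, and it lets you compare a fast-but-hungry model against a slow-but-frugal one fairly.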

[Screenshot: comparison view showing energy efficiency and performance charts]

Smart Recommendations

Get automatic model recommendations based on your benchmarks.
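
One plausible shape for this step, assuming per-run metrics like those sketched above; the RunResult fields and recommend helper are hypothetical, since the README does not specify how EnviroLLM scores models:

    interface RunResult {
      model: string;
      tokensPerSec: number;
      joulesPerToken: number;
      quality: number; // e.g. a 0-1 rating from the quality comparison
    }

    // Pick a winner per category by a single metric.
    function recommend(runs: RunResult[]) {
      const best = (score: (r: RunResult) => number) =>
        runs.reduce((a, b) => (score(b) > score(a) ? b : a)).model;
      return {
        fastest: best((r) => r.tokensPerSec),
        mostEfficient: best((r) => -r.joulesPerToken), // least energy per token wins
        bestQuality: best((r) => r.quality),
      };
    }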

[Screenshot: model recommendations showing best overall, most efficient, fastest, and best quality options]