EnviroLLM

Open-source toolkit for tracking, benchmarking, and optimizing resource usage of local LLMs

THE PROBLEM

Users lack tools to measure the resource usage and energy impact of local LLMs. Without visibility into resource consumption, it is impossible to make informed decisions about model selection, optimization, or sustainable AI practices.

SYSTEM MONITORING

Track resource usage of your local LLMs with visual dashboards. Monitor CPU, GPU, and memory usage in real time and get optimization recommendations.

Start Monitoring
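The sampling loop behind such a dashboard can be sketched with the Python standard library alone. This is a minimal illustration, not EnviroLLM's actual implementation: a real monitor would poll a library such as psutil for per-core CPU figures and NVML for GPU utilization, both of which are assumptions here.

```python
import os
import resource
import time

def sample_usage():
    """Take one coarse snapshot of CPU and memory pressure (stdlib only).

    Load average is a rough CPU signal on Unix; peak RSS tracks the
    memory high-water mark of this process.
    """
    load1, _, _ = os.getloadavg()                    # 1-minute load average
    rusage = resource.getrusage(resource.RUSAGE_SELF)
    # ru_maxrss is reported in kilobytes on Linux, bytes on macOS
    return {"load_1min": load1, "peak_rss": rusage.ru_maxrss}

def monitor(seconds=3, interval=1.0):
    """Collect one sample per interval; a dashboard would plot these."""
    samples = []
    for _ in range(int(seconds / interval)):
        samples.append(sample_usage())
        time.sleep(interval)
    return samples
```

A dashboard process would run `monitor` in the background and stream each sample to the UI instead of accumulating a list.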

MODEL BENCHMARKING

Benchmark local LLMs and compare their performance. Test inference speed, resource usage, and energy efficiency across different models.

Benchmark Models
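The core of an inference-speed benchmark is a timing harness that wraps any generation callable and reports tokens per second. The sketch below is a simplified illustration under that assumption; in practice the callable would wrap a local runtime such as llama.cpp or Ollama, and energy figures would come from separate hardware counters.

```python
import statistics
import time

def benchmark(generate, prompt, runs=3):
    """Time a generation callable over several runs.

    `generate` is any function mapping a prompt to a list of tokens.
    Returns per-run throughput plus the mean tokens/second.
    """
    results = []
    for _ in range(runs):
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        results.append({
            "tokens": len(tokens),
            "seconds": elapsed,
            "tok_per_s": len(tokens) / elapsed,
        })
    return {
        "mean_tok_per_s": statistics.mean(r["tok_per_s"] for r in results),
        "runs": results,
    }
```

Running the same harness against several models with an identical prompt set is what makes the throughput numbers directly comparable.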