Ollama

Running LLMs on GPU instances
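
Once Ollama is installed and serving on the instance, models are queried through its local HTTP API. The sketch below is a minimal illustration, not part of the original setup: it assumes Ollama is running on its default port (11434) and that a model named `llama3` has already been pulled (e.g. with `ollama pull llama3`); substitute whatever model you run on your GPU instance.

```python
"""Minimal sketch: prompting a local Ollama server over its HTTP API.

Assumptions (not from the original doc): Ollama is serving on the
default port 11434, and the "llama3" model has been pulled locally.
"""
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint


def generate(prompt: str, model: str = "llama3") -> str:
    """Send one non-streaming generation request and return the model's text."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request a single complete JSON response
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["response"]


if __name__ == "__main__":
    print(generate("Why do LLMs benefit from GPU acceleration?"))
```

On a GPU instance, Ollama detects and uses the GPU automatically when a supported driver is present; the same API call works unchanged whether inference runs on CPU or GPU.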