
Apr 3, 2025 - 20:39
Calling GitHub Copilot Models from OpenHands using LiteLLM Proxy

Integrating GitHub Copilot models into your development workflow can significantly enhance productivity. By leveraging LiteLLM Proxy alongside OpenHands, you can seamlessly connect to various large language models (LLMs), including GitHub's offerings. This guide will walk you through setting up LiteLLM Proxy and OpenHands using Docker Compose, enabling you to call GitHub Copilot models effectively.

Prerequisites

Before proceeding, ensure you have the following:

  • Docker and Docker Compose: Installed on your system.
  • GitHub Personal Access Token (PAT): Required for authenticating with GitHub's API.

Step 1: Obtain a GitHub Personal Access Token

To authenticate with GitHub's API, you'll need to create a Personal Access Token (PAT):

  1. Log in to your GitHub account.
  2. Navigate to Settings > Developer settings.
  3. In the left sidebar, click on Personal access tokens > Tokens (classic).
  4. Click Generate new token (classic).
  5. Provide a descriptive note, set an expiration, and select the necessary scopes.
  6. Click Generate token and copy the token for later use.

Note: Treat your PAT like a password. Store it securely and do not share it.
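Before wiring the token into the stack, it is worth confirming it authenticates at all. A quick sanity check against GitHub's REST API (the `/user` endpoint returns your profile as JSON when the token is valid; replace the placeholder with your actual PAT):

```shell
# Export the token (placeholder value -- substitute your own PAT):
export GITHUB_API_KEY=ghp_your_token_here

# A valid token returns your user profile; an invalid one returns
# a 401 "Bad credentials" message.
curl -s -H "Authorization: Bearer $GITHUB_API_KEY" https://api.github.com/user
```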

Step 2: Configure LiteLLM Proxy

LiteLLM Proxy serves as a gateway to various LLM providers, including GitHub. To configure it:

  1. Create a Configuration File: Save the following content as litellm_config.yaml:
   model_list:
     - model_name: github-gpt-4o-mini
       litellm_params:
         model: github/gpt-4o-mini
         api_key: "os.environ/GITHUB_API_KEY"

This configuration specifies the GitHub Copilot model and retrieves the API key from the environment variable GITHUB_API_KEY. There are several more models to choose from in the GitHub Models marketplace.

  2. Set Environment Variables: Ensure the GITHUB_API_KEY environment variable is set with your GitHub PAT.
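If you want to expose more than one model through the proxy, additional entries follow the same pattern. A sketch extending the config (the second model id is an example; check the marketplace for the models actually available to your account):

```yaml
model_list:
  - model_name: github-gpt-4o-mini
    litellm_params:
      model: github/gpt-4o-mini
      api_key: "os.environ/GITHUB_API_KEY"
  # Example of a second entry -- verify the model id in the marketplace:
  - model_name: github-gpt-4o
    litellm_params:
      model: github/gpt-4o
      api_key: "os.environ/GITHUB_API_KEY"
```

Each `model_name` becomes the identifier you reference from clients (prefixed with `litellm_proxy/` in OpenHands, as shown in Step 5).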

Step 3: Set Up Docker Compose

To orchestrate the deployment of LiteLLM Proxy and OpenHands, use Docker Compose with the following configuration:

services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    ports:
      - "4000:4000"
    volumes:
      - ./litellm_config.yaml:/app/config.yaml
    environment:
      - GITHUB_API_KEY=${GITHUB_API_KEY}
      - LITELLM_MASTER_KEY=somekey
      - UI_USERNAME=someuser
      - UI_PASSWORD=somepassword
    command: --config /app/config.yaml --detailed_debug

  openhands:
    image: docker.all-hands.dev/all-hands-ai/openhands:0.23
    container_name: openhands-app
    depends_on:
      - litellm
    ports:
      - "3000:3000"
    environment:
      - SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.24-nikolaik
      - WORKSPACE_MOUNT_PATH=${HOME}/OH
      - LOG_ALL_EVENTS=true
    volumes:
      - ${HOME}/OH:/opt/workspace_base
      - /var/run/docker.sock:/var/run/docker.sock
      - ${HOME}/.openhands-state:/.openhands-state
    extra_hosts:
      - "host.docker.internal:host-gateway"

Explanation:

  • LiteLLM Service:

    • Mounts the litellm_config.yaml configuration file.
    • Exposes port 4000.
    • Sets environment variables, including GITHUB_API_KEY.
  • OpenHands Service:

    • Depends on LiteLLM Proxy.
    • Exposes port 3000.
    • Configures necessary environment variables and volume mounts.

Step 4: Deploy the Services

With the configuration in place:

  1. Ensure your GITHUB_API_KEY environment variable is set:
   export GITHUB_API_KEY=your_personal_access_token
  2. Start the services using Docker Compose:
   docker-compose up

This command will pull the necessary images and start both services.
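Once both containers are up, you can verify the proxy is serving your configured models before touching OpenHands. LiteLLM exposes an OpenAI-compatible model listing at `/v1/models`; the key must match the `LITELLM_MASTER_KEY` from the compose file (this requires the running stack from the previous step):

```shell
# Should return a JSON object whose "data" array includes
# "github-gpt-4o-mini" from litellm_config.yaml.
curl -s http://localhost:4000/v1/models \
  -H "Authorization: Bearer somekey"
```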

Step 5: Configure OpenHands to Use LiteLLM Proxy

Once the services are running:

  1. Access the OpenHands UI at http://localhost:3000.
  2. Navigate to Settings.
  3. Enable Advanced options.
  4. Set the following configurations:
    • Custom Model: litellm_proxy/github-gpt-4o-mini
    • Base URL: http://litellm:4000
    • API Key: somekey (matches LITELLM_MASTER_KEY)

These settings direct OpenHands to route requests through LiteLLM Proxy to access the GitHub Copilot model.
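The same OpenAI-compatible endpoint OpenHands talks to can also be exercised directly, which is handy for debugging. A minimal sketch using only the standard library (`build_chat_request` is a hypothetical helper for this guide; use `http://localhost:4000` from the host, or `http://litellm:4000` from inside another container on the compose network):

```python
import json
import urllib.request


def build_chat_request(base_url, api_key, model, prompt):
    """Build an OpenAI-compatible chat completion request for the proxy."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


# Sending the request requires the stack from this guide to be running:
# with urllib.request.urlopen(build_chat_request(
#         "http://localhost:4000", "somekey",
#         "github-gpt-4o-mini", "Hello!")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Note that the `model` field uses the bare `model_name` from litellm_config.yaml; the `litellm_proxy/` prefix is only needed in OpenHands, where it tells OpenHands which provider adapter to use.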

Conclusion

By following these steps, you've integrated GitHub Copilot models into your development environment using LiteLLM Proxy and OpenHands.