GET /v2/dedicated-inferences/gpu-model-config
Returns the GPU and model configurations supported by Dedicated Inference. Use
it to discover valid GPU slugs and model slugs (e.g. Hugging Face model
identifiers). Send a GET request to /v2/dedicated-inferences/gpu-model-config.
Servers
- https://api.digitalocean.com
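As a minimal sketch, the request above can be built and sent with Python's standard library. This assumes the usual DigitalOcean bearer-token authentication (a personal access token in the `Authorization` header); the exact JSON response shape is not documented here, so the code returns the decoded body as-is for the caller to inspect.

```python
import json
import os
import urllib.request

# Base URL and endpoint path from the reference above.
API_BASE = "https://api.digitalocean.com"
ENDPOINT = "/v2/dedicated-inferences/gpu-model-config"


def build_request(token: str) -> urllib.request.Request:
    """Build an authenticated GET request for the GPU/model config listing."""
    return urllib.request.Request(
        API_BASE + ENDPOINT,
        headers={
            # Assumed auth scheme: a DigitalOcean personal access token.
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="GET",
    )


def fetch_gpu_model_config(token: str) -> dict:
    """Send the request and decode the JSON body.

    Which keys hold the GPU and model slugs is not specified in this
    reference, so inspect the returned dict before relying on fields.
    """
    with urllib.request.urlopen(build_request(token)) as resp:
        return json.loads(resp.read().decode())


if __name__ == "__main__":
    token = os.environ.get("DIGITALOCEAN_TOKEN", "")
    print(build_request(token).full_url)
```

Running the module prints the full request URL; call `fetch_gpu_model_config` with a real token to retrieve the configuration list.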
How to start integrating
- Add an HTTP Task to your workflow definition.
- Search for the API you want to integrate with and click its name.
- This loads the API reference documentation and prepares the HTTP request settings.
- Click Test request to run your request against the API and see its response.