GET /v2/dedicated-inferences/{dedicated_inference_id}/accelerators

List all accelerators (GPUs) in use by a Dedicated Inference instance. Send a GET request to /v2/dedicated-inferences/{dedicated_inference_id}/accelerators. Optionally filter results by GPU slug, and use the page and per_page query parameters for pagination.

Path parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| dedicated_inference_id | String | Yes | A unique identifier for a Dedicated Inference instance. |

Query parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| page | Integer | No | Which page of paginated results to return. Default: 1 |
| slug | String | No | Filter accelerators by GPU slug. |
| per_page | Integer | No | Number of items returned per page. Default: 20 |

How to start integrating

  1. Add an HTTP Task to your workflow definition.
  2. Search for the API you want to integrate with and click its name.
    • This loads the API reference documentation and prepares the HTTP request settings.
  3. Click Test request to run your request against the API and see its response.