Introduction

DeepVA Cloud can be used via a web application with an easy-to-use Web-UI or accessed by third-party applications via a powerful REST API.

Getting Started

Access all DeepVA services via a powerful REST API. Our API has resource-oriented URLs, accepts JSON-encoded request bodies, returns JSON-encoded responses, and uses standard HTTP response codes.

The API allows you to apply recognition functionality on your image and video files.

There are several recognition functions called Visual Mining Modules. The API is built around a simple idea: you apply a Visual Mining Module (e.g. Face Recognition) to your inputs (images or videos), and it returns the extracted metadata. The type of output depends on which Visual Mining Module you run the input through.

DeepVA is a job-based system that enables you to create a mining job (Job resource) and read the job to retrieve its state, progress and the result.

All API endpoints are accessed via the https://api.deepva.com domain. The path prefix /v1/ indicates that version 1 of the API is currently in use. In the following documentation, curly brackets {variable} indicate a variable that you should replace with a real value.

Authorization

DeepVA needs an API Key to authenticate and to grant access to our service. You can go to the DeepVA platform webpage to access your API Key or to create a new key for your application. After creating your API Key, you are ready to make API calls.

For every API call, you need to provide the API Key using the Authorization header of your request.

curl -v -H "Authorization: Key {your-api-key}" https://api.deepva.com/api/v1/jobs/
import requests

your_api_key = "{your-api-key}"  # replace with your real API key

headers = {"Content-Type": "application/json",
           "Authorization": f"Key {your_api_key}"}
response = requests.get("https://api.deepva.com/api/v1/jobs/",
                        headers=headers)
if response.status_code == 200:
    jobs = response.json()['data']
    print(jobs)
else:
    print(f"Error: {response.status_code}")

Errors

The most common errors are:

  • 400 BadRequest - There is an error in your request.
  • 401 NotAuthenticated - You are not authenticated. Check your authentication headers.
  • 404 NotFound - The requested resource could not be found.
  • 405 MethodNotAllowed - The HTTP method you used is not allowed for this resource.
  • 500 GeneralError - Internal server error on our side.

If you have successfully requested a resource, the status code should be 200 OK.
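The status codes above can be handled on the client side with a small helper. This is a minimal sketch; the `check_status` helper name is ours, and the messages are simply taken from the list above, not from official API error payloads.

```python
# Client-side handling for the common DeepVA status codes listed above.
# The helper name and messages are illustrative, not official API names.
ERROR_MESSAGES = {
    400: "BadRequest: there is an error in your request",
    401: "NotAuthenticated: check your authentication headers",
    404: "NotFound: the requested resource could not be found",
    405: "MethodNotAllowed: this HTTP method is not allowed here",
    500: "GeneralError: internal server error",
}

def check_status(status_code: int) -> None:
    """Raise a descriptive RuntimeError for any non-200 status code."""
    if status_code == 200:
        return
    detail = ERROR_MESSAGES.get(status_code, "unexpected status code")
    raise RuntimeError(f"HTTP {status_code}: {detail}")
```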

Supported media types

The following types of sources are supported as input for a mining job:

| Type | Description | Example |
| --- | --- | --- |
| DeepVA Storage URL | Media file uploaded to the DeepVA Storage endpoint (hosted by DeepVA) | storage://NB30315igAkYzfJlBtdt |
| Public URL | Media file reachable via a public URL | https://demo.deepva.com/demo1.jpg |
| Youtube Link | Video on Youtube | https://www.youtube.com/watch?v=UwsrzCVZAb8 |
| Vimeo Link | Video on Vimeo | https://vimeo.com/123558737 |
| Local path | Network share or local disk (On-Prem only!) | /media/my-custom-video-storage/ |

The following media types are supported:

| Format | Type | MIME type |
| --- | --- | --- |
| JPEG | image | image/jpeg or image/jpg |
| PNG | image | image/png |
| MP4 | video | video/mp4 |
| MPEG | video | video/mpeg |
| MXF | video | application/mxf |
| MOV | video | video/quicktime |

Example

In the following example use case, you will ask our API to recognize the face of a celebrity in an image by applying the Face Recognition module. You call the POST method on the /jobs endpoint to create a new job. In the request body, you describe all the properties that the job needs to execute your recognition task.

{
  "sources": [
    "{url-to-your-image}"
  ],
  "modules": {
    "face_recognition": {
      "model": "celebrities"
    }
  }
}

There are two important fields to set in the request body:

  • sources - the files you want to process
  • modules - the Visual Mining Modules you want to apply (the actual recognition task you want to start, e.g. face recognition) and its configuration
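As a sketch, the job creation request could be sent like this in Python. The helper names (`build_job_payload`, `create_job`) are ours, and the placeholder API key must be replaced with a real one:

```python
import requests

API_KEY = "{your-api-key}"  # placeholder: replace with your real API key

def build_job_payload(source_url: str, model: str = "celebrities") -> dict:
    """Build the request body for a Face Recognition job."""
    return {
        "sources": [source_url],
        "modules": {"face_recognition": {"model": model}},
    }

def create_job(source_url: str) -> dict:
    """POST a new job to the /jobs endpoint and return the created Job resource."""
    response = requests.post(
        "https://api.deepva.com/api/v1/jobs/",
        json=build_job_payload(source_url),
        headers={"Authorization": f"Key {API_KEY}"},
    )
    response.raise_for_status()
    return response.json()
```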

The response will be the job you just created:

{
  "id": "6b5195d6-cc05-4a99-a293-8d73be0aa37f",
  "tag": "",
  "state": "waiting",
  "progress": 0.0,
  "duration": 0,
  "time_created": "2019-08-12 08:14:28.430144",
  "time_started": null,
  "time_completed": null,
  "sources": [
      "https://demo.deepva.com/assets/image1.jpg"
  ],
  "modules": {
      "face_recognition": {
        "model": "celebrities"
      }
  },
  "result": {
      "detailed_link": "https://api.deepva.com/v1/jobs/6b5195d6-cc05-4a99-a293-8d73be0aa37f/detailed-results",
      "summary": []
  }
}
In this example response, the job has no recognition result yet, as it has not been processed by the DeepVA system. For multiple images or a long video as a source, processing can take some time, so you can query the job later by calling the /jobs endpoint with the job's ID. The following example demonstrates requesting a job by its ID.

curl -v -H "Authorization: Key {your-api-key}" https://api.deepva.com/api/v1/jobs/{job_id}/
import requests

your_api_key = "{your-api-key}"  # replace with your real API key
job_id = "{job-id}"              # ID of the job you created

headers = {"Content-Type": "application/json",
           "Authorization": f"Key {your_api_key}"}
response = requests.get(f"https://api.deepva.com/api/v1/jobs/{job_id}/",
                        headers=headers)
if response.status_code == 200:
    job_object = response.json()
    print(job_object)
else:
    print(f"Error: {response.status_code}")

The response looks like this:

{
  "id": "6b5195d6-cc05-4a99-a293-8d73be0aa37f",
  "tag": "",
  "state": "completed",
  "progress": 1.0,
  "duration": 0.2,
  "time_created": "2019-08-12 08:14:28.430144",
  "time_started": "2019-08-12 08:14:29.510771",
  "time_completed": "2019-08-12 08:14:29.530825",
  "sources": [
      "https://demo.deepva.com/assets/image1.jpg"
  ],
  "modules": {
      "face_recognition": {
        "model": "celebrities"
      }
  },
  "result": {
      "detailed_link": "https://api.deepva.com/v1/jobs/6b5195d6-cc05-4a99-a293-8d73be0aa37f/detailed-results",
      "summary": [
        {
            "source": "https://demo.deepva.com/assets/image1.jpg",
            "media_type": "image",
            "info": {},
            "items": [
                {
                    "type": "face",
                    "label": "Frederik Böhm"
                }
            ]
        }
      ]
  }
}
The result contains two items:

  • detailed_link - link to requesting the detailed result
  • summary - a summary of the result (labels of the image) regarding its recognition type

If the summary with just the labels is not enough, you can request the detailed results by calling the /jobs/{job_id}/detailed-results endpoint. For videos, the result can become very large, which is why it is separated into its own resource (detailed-results). The detailed-results endpoint also provides additional data such as the bounding box of the detected face or its facial landmarks.
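Fetching the detailed results could look like the following sketch; the helper names are ours, and the placeholders must be replaced with real values:

```python
import requests

API_BASE = "https://api.deepva.com/api/v1"

def detailed_results_url(job_id: str) -> str:
    """Build the detailed-results endpoint URL for a given job ID."""
    return f"{API_BASE}/jobs/{job_id}/detailed-results"

def fetch_detailed_results(job_id: str, api_key: str) -> dict:
    """GET the detailed results (bounding boxes, landmarks, ...) for a job."""
    response = requests.get(
        detailed_results_url(job_id),
        headers={"Authorization": f"Key {api_key}"},
    )
    response.raise_for_status()
    return response.json()
```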

Visual Mining Modules

In the modules field of the job, you specify which Visual Mining Modules should be applied to your sources when creating a job. Several modules are available in the first beta version of DeepVA Cloud, such as Face Recognition, Object & Scene Recognition, and many more. You can find an overview here.

Storage

If you cannot provide a URL for the images or videos you want to send to the API, you can use our /storage endpoint instead. Upload your media file and specify the returned storage URI (e.g. storage://q1gyaldQ146pAagevHIK) as a source when creating a job.

Send the image via POST and multipart/form-data like in this curl example:

curl -H "Authorization: Key {api-key}" -F "folder=/" -F "file=@{path-to-file}" https://api.deepva.com/api/v1/storage/
import os
import requests

API_KEY = "{api-key}"  # replace with your real API key
file_path = "example.jpg"

with open(file_path, 'rb') as f:
    multipart_form_data = {
        'file': (os.path.basename(file_path), f)
    }
    response = requests.post("https://api.deepva.com/api/v1/storage/",
                             files=multipart_form_data,
                             data={'folder': "/"},
                             headers={'Authorization': f"Key {API_KEY}"})
if response.status_code == 201:
    storage_url = response.json()['url']   # storage URL (e.g. storage://uo7hRJfapCQKZnhXGl78)
    print(storage_url)
else:
    raise RuntimeError("Upload failed!")
var apiKey = "Key {your-api-key}";    // DeepVA API key, e.g. "Key xyz"
var folderName = "/";    // Destination folder (e.g. "/my-subfolder/" or "/" for root folder)
var imageFileName = "example.jpg";    // Remote filename
FileStream fs = File.Open("path-to-local-file/example.jpg", FileMode.Open, FileAccess.Read, FileShare.None);    // File to upload

HttpClient httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
httpClient.DefaultRequestHeaders.Add("Authorization", apiKey);

MultipartFormDataContent multiForm = new MultipartFormDataContent();
multiForm.Add(new StringContent(folderName), "folder");
multiForm.Add(new StreamContent(fs), "file", imageFileName);

HttpResponseMessage response = await httpClient.PostAsync("https://api.deepva.com/api/v1/storage/", multiForm);
response.EnsureSuccessStatusCode();
fs.Dispose();
httpClient.Dispose();

The response will contain the storage URI that you need to provide when creating a job.

{
  "id": 594,
  "url": "storage://q1gyaldQ146pAagevHIK",
  "file_size": null,
  "name": "test1.jpg",
  "is_folder": false,
  "uploaded": false,
  "is_image": true,
  "upload_failed": false
}
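The "url" field of this response is what you pass as a source when creating the job. A minimal sketch of that chaining, using the example response above (the helper name is ours):

```python
# Chain upload and job creation: take the "url" field from the storage
# response and use it as a source for a new mining job.
def storage_response_to_source(storage_response: dict) -> str:
    """Extract the storage URI to use as a job source."""
    return storage_response["url"]

# Example storage response (as returned by POST /storage/)
upload_response = {
    "id": 594,
    "url": "storage://q1gyaldQ146pAagevHIK",
    "name": "test1.jpg",
}

# Request body for POST /jobs, referencing the uploaded file
job_body = {
    "sources": [storage_response_to_source(upload_response)],
    "modules": {"face_recognition": {"model": "celebrities"}},
}
```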

Webhooks

To avoid polling the /jobs endpoint repeatedly, you can subscribe to webhooks for several events. Read more about webhooks here.