Quickstart: Keypoints Detection
In this quickstart tutorial we'll use the 300-W dataset and a stubbed-out example model to create and run tests for the Keypoints Detection workflow.

Getting Started

With the kolena-client Python package installed, let's first initialize a client session:
import os
import kolena
kolena.initialize(os.environ["KOLENA_TOKEN"], verbose=True)
The data used in this tutorial is publicly available in the kolena-public-datasets S3 bucket in a metadata.csv file:
import pandas as pd
DATASET = "300-W"
BUCKET = "s3://kolena-public-datasets"
df = pd.read_csv(f"{BUCKET}/{DATASET}/meta/metadata.csv")
To load CSVs directly from S3, make sure to install the s3fs Python module: pip3 install s3fs[boto3]
This metadata.csv file describes a keypoints detection dataset with the following columns:
  • locator: location of the image in S3
  • normalization_factor: per-image factor used to normalize keypoint error. Common ways to compute it include the Euclidean distance between two reference points or the diagonal length of the image.
  • points: stringified list of [x, y] coordinates for the ground truth keypoints
Each locator appears exactly once and contains the keypoint ground truths for that image. Kolena currently supports keypoints only for single-instance images; images containing multiple instances are not supported at this time.
For brevity, the 300-W dataset has been pared down to only 5 keypoints: outermost corner of each eye, bottom of nose, and corners of the mouth.
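To make the points format concrete, here is a small illustration of parsing one points value. The cell contents and coordinates below are made up for illustration only:
import json

# Hypothetical contents of a single `points` cell in metadata.csv (values are illustrative)
raw_points = "[[120.5, 88.0], [180.2, 90.1], [150.0, 130.7], [128.3, 165.9], [172.6, 164.4]]"
keypoints = json.loads(raw_points)  # -> list of [x, y] pairs, one per keypoint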

Step 1: Creating Tests

With our data already in an S3 bucket and metadata loaded into memory, we can start creating test cases!
Let's create a simple test case containing the entire dataset:
import json
from kolena.keypoints import ground_truth, TestCase, TestImage
from typing import List, Tuple
# Converts points from the [[x1, y1], [x2, y2], ...] format in the CSV file
# to an [(x1, y1), (x2, y2), ...] format.
def as_point_tuples(points: List[List[float]]) -> List[Tuple[float, float]]:
    return [(point[0], point[1]) for point in points]

complete_test_case = TestCase(f"complete {DATASET}", images=[
    TestImage(
        locator=record.locator,
        dataset=DATASET,
        ground_truth=ground_truth.Keypoints(
            normalization_factor=record.normalization_factor,
            points=as_point_tuples(json.loads(record.points)),
        ),
    )
    for record in df.itertuples()
])
In this tutorial we created only a single simple test case, but more advanced test cases can be generated in a variety of fast and scalable ways. See Creating Test Cases for details.
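As a sketch of the idea, we could carve a smaller test case out of the same DataFrame, for example a random sample of the dataset. This reuses only the columns and helpers already defined above; the test case name and sampling fraction are illustrative:
sample_test_case = TestCase(f"sample {DATASET}", images=[
    TestImage(
        locator=record.locator,
        dataset=DATASET,
        ground_truth=ground_truth.Keypoints(
            normalization_factor=record.normalization_factor,
            points=as_point_tuples(json.loads(record.points)),
        ),
    )
    for record in df.sample(frac=0.1, random_state=0).itertuples()
])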
Now that we have a basic test case for our entire dataset, let's create a test suite for it:
from kolena.keypoints import TestSuite
test_suite = TestSuite(f"complete {DATASET}", test_cases=[
    complete_test_case,
])
This test suite represents a basic starting point for testing on Kolena.
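If we had defined additional test cases (such as the hypothetical sample_test_case sketched above), a suite can bundle several of them together; the suite name here is illustrative:
extended_test_suite = TestSuite(f"extended {DATASET}", test_cases=[
    complete_test_case,
    sample_test_case,
])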

Step 2: Running Tests

With basic tests defined for the 300-W dataset, we can start testing our models.
To start testing, we create an InferenceModel object describing the model being tested:
from kolena.keypoints import InferenceModel, TestImage
from kolena.keypoints.inference import Keypoints
def infer(test_image: TestImage) -> Keypoints:
    # Stub: a real implementation would run the model on the image at
    # test_image.locator and return its predicted keypoints.
    ...

model = InferenceModel("example-model", infer=infer, metadata=dict(
    description="Example model from quickstart tutorial",
))
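For reference, here is a minimal sketch of what a real infer implementation might look like. It assumes a hypothetical run_model helper that returns the predicted (x, y) keypoints for an image, and it assumes inference.Keypoints accepts a points argument analogous to its ground truth counterpart:
def infer(test_image: TestImage) -> Keypoints:
    # Hypothetical: load the image at test_image.locator, run your model, and
    # return its predictions. run_model is a stand-in for your own inference
    # code, assumed to return a list of (x, y) tuples in the same order as the
    # ground truth keypoints.
    predicted_points = run_model(test_image.locator)
    return Keypoints(points=predicted_points)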
Finally, let's test:
from kolena.keypoints import test
test(model, test_suite)
That's it! We can now visit the web platform to analyze and debug our model's performance on this test suite.

Conclusion

In this quickstart tutorial we learned how to create new tests for Keypoints Detection datasets and how to test Keypoints Detection models on Kolena.
What we learned here just scratches the surface of what's possible with Kolena and covers only a fraction of the kolena-client API. Now that we're up and running, we can think about ways to create more detailed tests, improve existing tests, and dive deeper into model behaviors.