By Utkarsh Bhatt

Test MicroCluster APIs in style 😎


Time and again, I come across messages from consumers of MicroCeph asking things like:

Hey Utkarsh, did you just break the X API? All of our CI is down!

Way to start the day, right? Well, in their defence, I actually did (break it).

But that's not important. What's important is that the automated testing in place could not catch such breakages in time, and the failures only surfaced once the changes reached our consumers. So we definitely need to test our APIs to

  1. ensure the user experience stays stable.

  2. give API consumers the same consistency that MicroCeph CLI users get.


So what's the fuss about? Go test them!

Well, that I'll do, but this is not a MicroCeph-only problem. There are many MicroCluster projects that develop, publish, and maintain their APIs, and their maintainers have to do all of the testing and verification in exactly the same way that we would. This made me think that the solution to our API testing problem should not be a bundle of scripts tied together to just work for MicroCeph, but a generic solution that can easily be run against any MicroCluster service.


Aight! What'd you cook? 🍲

So I had no pre-determined idea of what the testing framework would look like, but I did have a basic expectation that I wanted it to meet. Let's see what, and why.

Let's take a look at the APIs that MicroCeph has here:

  - client_configs.go
  - configs.go
  - disks.go
  - microceph_configs.go
  - pool.go
  - resources.go
  - services.go

Each of the files listed above defines at least one API endpoint (some define more).


So we need to run 10-ish tests, right?

WRONG! Each API endpoint supports REST methods like GET, PUT, and DELETE that need to be tested.


Okay, 30-ish tests would do then, right?

WRONG! Let's take a look at an API to see why.


Take the /1.0/disks endpoint for example. It supports:

  1. a list of block devices as input to enroll them as Ceph OSDs.

  2. a single block device as the same input to enroll it as an OSD (This was the API expectation I broke and fixed later as a patch xp).

  3. a loop file spec that MicroCeph uses to create file based OSDs.

  4. --wipe flag to wipe devices before enrollment.

  5. --encrypt flag to encrypt OSDs for data safety.

  6. --wal-device flag for adding a write-ahead log (WAL) device to the OSD being enrolled.

  7. --db-device flag for adding a DB device to the OSD being enrolled.

  8. --wipe and --encrypt flags for both DB and WAL devices!!!!!

I think you get the gist. The scenarios that each endpoint supports can get complicated depending on the inputs it accepts and its overall functionality, and an active project like MicroCeph is only expected to grow in functionality and in number of API endpoints going forward.
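To see how quickly the scenarios pile up for just one endpoint, here's a quick back-of-the-envelope sketch. The flag axes come from the list above, but the variable names and device paths are my own placeholders, not actual API values:

```python
from itertools import product

# Flag axes for enrolling a single disk via /1.0/disks, per the list
# above. Device paths here are illustrative placeholders.
wipe_opts = [False, True]        # --wipe
encrypt_opts = [False, True]     # --encrypt
wal_opts = [None, "/dev/sdx"]    # --wal-device (optional)
db_opts = [None, "/dev/sdy"]     # --db-device (optional)

# Every combination of the four axes is a distinct scenario.
scenarios = list(product(wipe_opts, encrypt_opts, wal_opts, db_opts))
print(len(scenarios))  # 16
```

And that's before counting the per-WAL and per-DB --wipe/--encrypt flags from point 8, which multiply the count yet again.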


Thus, the basic expectation was: "Writing new tests should be easy-peasy"


So, two Redbulls and an all-nighter later, I present to you microtester!

$ sudo python3 ./microtester.py ./testsuite/ --sock="/var/snap/microceph/common/state/control.socket"
Validating Testsuite: disks.yaml
Executing Test Check OSD count
Validation Result: Total: 2, Pass: 2, Fail: 0. 

Executing Test Add 3 file based OSDs
Validation Result: Total: 2, Pass: 2, Fail: 0. 

Executing Test Delete OSD 1
Validation Result: Total: 1, Pass: 1, Fail: 0. 

Executing Test Add another OSD
Validation Result: Total: 1, Pass: 1, Fail: 0. 

Executing Test Check OSD count
Validation Result: Total: 2, Pass: 2, Fail: 0. 

Validating Testsuite: log-level.yaml
Executing Test Set log level
Validation Result: Total: 1, Pass: 1, Fail: 0. 

Executing Test Get log level
Validation Result: Total: 2, Pass: 2, Fail: 0. 
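Notice the --sock flag above: MicroCluster daemons expose their REST API over a unix control socket rather than TCP. For the curious, talking to that socket from Python needs nothing beyond the standard library. The UnixHTTPConnection class below is my own minimal sketch, not part of microtester; only the socket path and endpoint mirror the ones shown above:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """An http.client connection routed over a unix domain socket."""

    def __init__(self, sock_path: str):
        # The host name is a placeholder; routing happens in connect().
        super().__init__("localhost")
        self.sock_path = sock_path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.sock_path)
        self.sock = s

# Example usage (needs a running MicroCeph snap, hence commented out):
# conn = UnixHTTPConnection("/var/snap/microceph/common/state/control.socket")
# conn.request("GET", "/1.0/disks")
# print(conn.getresponse().status)
```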

But what about "Writing new tests should be easy-peasy"?

Well, I did not forget that. In fact, this is how you write tests for microtester:

title: MicroCeph Disk API Tests
tests:
  - name: Check OSD count
    path: /1.0/disks
    method: GET
    expectations:
      response_code: 200
      response_list_count: 0

  - name: Add 3 file based OSDs
    path: /1.0/disks
    method: POST
    input:
      path: "loop,2G,3"
    expectations:
      response_code: 200
      response_dict_contains:
        validation_error: ""

  - name: Delete OSD 1
    path: /1.0/disks/1
    method: DELETE
    input:
      bypass_safety: true
    expectations:
      response_code: 200

  - name: Check OSD count
    path: /1.0/disks
    method: GET
    expectations:
      response_code: 200
      response_list_count: 3

As you can see, each test is around 7-10 lines of YAML (not code), and it gets even better.

The beauty of microtester lies in how it allows a user to define a test and, more importantly, the expectations for the API's response.

Users can define:

  1. Expected value of the response.

  2. HTTP response code that the API should return.

  3. Number of elements a list type response should have.

  4. key-value pairs that a dict type response should contain.
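As a rough illustration of what checking these four expectation types could look like, here's a sketch in Python. The validate function and its logic are my own reconstruction based on the YAML fields above, not microtester's actual code:

```python
def validate(response_code, body, expectations):
    """Check one test's expectations; return (passed, total) counts."""
    passed = total = 0
    for key, want in expectations.items():
        total += 1
        if key == "response_code":
            ok = response_code == want
        elif key == "response":
            ok = body == want
        elif key == "response_list_count":
            ok = isinstance(body, list) and len(body) == want
        elif key == "response_dict_contains":
            ok = isinstance(body, dict) and all(
                body.get(k) == v for k, v in want.items())
        else:
            raise ValueError(f"unknown expectation type: {key}")
        passed += ok
    return passed, total

# e.g. the first "Check OSD count" test from disks.yaml:
print(validate(200, [], {"response_code": 200, "response_list_count": 0}))
# -> (2, 2)
```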

This gets even better with how easy it is to add new expectation types. But that's a topic for another blog post.


If you're interested in checking out the implementation of microtester, the code currently lives in a pull request to the MicroCeph repository, but I do have plans to give it a new home.


Hope you liked the blog; subscribe to keep 'em coming!

Cheers
