Explore the screenshots »
Report a Bug · Request a Feature · Ask a Question
Clustering raw qubit measurements to evaluate the quality of superconducting qubit circuits and to advance understanding of the sources of measurement noise.
Clustering was performed using a hierarchical clustering algorithm with Ward linkage over a precomputed matrix of SSPD (Symmetric Segment-Path Distance) distances between trajectories.
Distance computation was performed on Google Cloud. The results of clustering into 5 clusters are shown in the Screenshots section. If you want to experiment with the sample data, you can find it on Kaggle.
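As a minimal sketch of this step, assuming the precomputed SSPD matrix is already available (the file name below is hypothetical), the clustering can be reproduced with SciPy:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Precomputed symmetric SSPD distance matrix (hypothetical file name).
sspd_matrix = np.load("result_data/sspd_matrix.npy")

# SciPy expects a condensed (upper-triangle) distance vector.
condensed = squareform(sspd_matrix, checks=False)

# Hierarchical clustering with Ward linkage, cut into 5 clusters as in the figures.
# Note: Ward linkage formally assumes Euclidean distances; here it is applied
# to the SSPD distances described above.
Z = linkage(condensed, method="ward")
labels = fcluster(Z, t=5, criterion="maxclust")
```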
High-fidelity measurements are important for the physical implementation of quantum information protocols: maximizing the information one can extract from a physical system requires the ability to perform accurate measurements. Current methods for classifying measurement trajectories in superconducting qubit systems produce fidelities that are systematically lower than those predicted by experimental parameters. Our goal is to provide methods for diagnosing measurement errors and increasing fidelities using various machine learning algorithms.
One way to approach this problem is to cluster quantum trajectories (or single-shots) and look for features in the data that are related to noise effects, heating effects, or qubit relaxation time.
We obtain our qubit measurements by driving the resonator and recording the output trajectory in phase (I-Q) space. This data is very noisy.
You can see examples of such single-shots for qubits in the excited and ground states in the Screenshots section.
If you want to learn more about how single-shot clustering can help improve readout measurements, you can read the work that inspired this research.
Screenshots
Ground and excited single-shot measurement example
Qubit ground-state single-shot clustering results
Median single-shots by cluster are shown
Histograms and contour lines of the empirical distribution
Qubit excited-state single-shot clustering results
Median single-shots by cluster are shown
Histograms and contour lines of the empirical distribution
Ground- and excited-state midpoint distributions by axis
Ground- and excited-state cluster growth rates
How quickly instances of each cluster appear in the dataset
All project requirements/dependencies (except traj-dist) are listed in the pyproject.toml file. You can easily install all dependencies into your Anaconda environment with Poetry, so you only need to download Poetry and install traj-dist separately.
- clone repo
- create a new environment via conda (you can use the environment file)
conda env create -f environment.yaml
- install dependencies with poetry
poetry install
- install traj-dist
You can install it with pip or poetry.
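pip install traj-dist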
You can download both the raw and the pre-calculated data from Kaggle.
Dataset consists of:
- raw_data folder 🐖
  Put single-shots (the raw signals coming from devices) here
- trajectory_data folder 🥓
  Trajectory data calculated with the trajectory_generate file
- result_data folder 🥙
  SSPD distance matrix computed with the compute_distance_matrix file
We computed the distance matrix on Google Cloud with a 128-core processor, and it took about 3 days to finish. The computation took so long, even using all cores, because each single-shot consists of 1500 x-y points: filling the SSPD matrix for 15,000 such single-shots requires calculating about 120 million distances. For trajectories of 1500 points, each distance takes about 0.07 seconds to compute with Cython.
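For reference, here is a minimal single-process sketch of how such a matrix can be computed with the traj-dist library (the random toy trajectories below stand in for real single-shots):

```python
import numpy as np
import traj_dist.distance as tdist
from scipy.spatial.distance import squareform

# Each single-shot is an (n_points, 2) float array of I/Q samples;
# random toy data stands in for real measurements here.
trajectories = [np.random.rand(1500, 2) for _ in range(10)]

# Condensed vector of pairwise SSPD distances (upper triangle),
# analogous to scipy.spatial.distance.pdist.
condensed = tdist.pdist(trajectories, metric="sspd")

# Square matrix for downstream clustering.
sspd_matrix = squareform(condensed)
```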
So you don't have to do these calculations: you can simply download a pre-calculated SSPD matrix 🙃
- clustering_and_visualizing file is responsible for data visualization 📈 (see the sketch below)
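As a rough illustration of that kind of plot (a sketch only: the array shapes and file names below are assumptions, not the repository's actual layout), the median trajectory per cluster can be drawn like this:

```python
import numpy as np
import matplotlib.pyplot as plt

# trajectories: (n_shots, 1500, 2) array of I/Q points; labels: cluster per shot.
trajectories = np.load("trajectory_data/trajectories.npy")  # hypothetical file
labels = np.load("result_data/labels.npy")                  # hypothetical file

for c in np.unique(labels):
    # Pointwise median trajectory over all single-shots in cluster c.
    median_traj = np.median(trajectories[labels == c], axis=0)
    plt.plot(median_traj[:, 0], median_traj[:, 1], label=f"cluster {c}")

plt.xlabel("I")
plt.ylabel("Q")
plt.legend()
plt.title("Median single-shot trajectory per cluster")
plt.show()
```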
See the open issues for a list of proposed features (and known issues).
- Top Feature Requests (Add your votes using the 👍 reaction)
- Top Bugs (Add your votes using the 👍 reaction)
- Newest Bugs
Reach out to the maintainer at one of the following places:
- GitHub issues
- Contact options listed on this GitHub profile
First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are greatly appreciated.
Please read our contribution guidelines, and thank you for being involved!
The original setup of this repository is by Nikolay Zhitkov.
For a full list of all authors and contributors, see the contributors page.
This project is licensed under the MIT license.
See LICENSE for more information.