Conversation
* Added class LinearDiscriminator that leverages scikit learn.
…tDiscriminator. * First implementation of linear._extract_data.
their outcome like for cal) to AbstractDiscriminator. * Added setter method for _fitted in AbstractDiscriminator. * Renamed LinearDiscriminator to LinearIQDiscriminator. * Implemented all methods in LinearIQDiscriminator required by AbstractDiscriminator.
In keeping with the style of the other parts of ignis, there should be a circuit method (which generates the calibration circuits), a Fitter class (which takes in the calibration) and a Filter class (this is the actual discriminator). The Filter will take a result and return a new result (which has been discriminated).
I think the folder structure should be
@dcmckayibm Thanks for looking into this. I agree with all points. Regarding the calibration we should keep Pulse in mind so that users can specify their own pulse schedules usable for calibration.
@dcmckayibm Regarding the fitter: should this inherit from
Hi Daniel, I think this is a great start. Most of my comments are about the structure of the class. Looking forward to this.
from qiskit.result import Result


class AbstractDiscriminator(object):
Do you think it would be possible to follow the fitter pattern inside Ignis? I.e. this would be a DiscriminationFitter.
Yes, I was playing around with the idea earlier on today following David's comments but have not pushed anything yet. I imagine the idea would be to have something like class DiscriminationFitter(BaseFitter)?
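(For illustration: a rough skeleton of what that could look like. The import path for BaseFitter and the constructor arguments below are assumptions, not the final API discussed in this PR.)

from qiskit.ignis.characterization.fitters import BaseFitter


class DiscriminationFitter(BaseFitter):
    """Trains a discriminator on calibration results."""

    def __init__(self, cal_results, expected_states):
        # super().__init__(...) is deliberately omitted here: the BaseFitter constructor
        # arguments depend on the existing Ignis API.
        self._cal_results = cal_results
        self._expected_states = expected_states
        self._fitted = False

    def fit(self):
        """Fit the discriminator to the accumulated calibration data."""
        raise NotImplementedError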
    used for discrimination, e.g. to convert IQ data into 0 and 1's."""

    @abstractmethod
    def __init__(self, result: Result, cal_circuits: list, data_circuits: list,
I'm not sure we should pass data_circuits in the constructor. The idea would be that we train a discrimination fitter once, and then use it on many different data_circuits/results.
That makes sense.
    used for discrimination, e.g. to convert IQ data into 0 and 1's."""

    @abstractmethod
    def __init__(self, result: Result, cal_circuits: list, data_circuits: list,
Perhaps result should be cal_result to make it clear this is where the calibration data will be obtained from.
            discriminator_parameters:
        """
        self.result = result
        self._cal_circuits = cal_circuits
Rather than setting directly, I think there should be a method to add this data. That way, as data is acquired, it can be added to the preexisting data. Then fitting can be done with a call to fitter.fit().
I believe what you are proposing is the way things are currently done in Ignis. I'll modify the discriminator code. This will then allow one to retrain the fitter as new data comes in.
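(For illustration: the usage pattern this enables would look roughly like the lines below. The fitter name and constructor arguments are taken from later in this conversation and are illustrative, and the cal_results/expected_states variables are assumed to exist.)

fitter = LinearIQDiscriminationFitter(cal_results, qubits, expected_states)
fitter.add_data(new_cal_results, new_expected_states)  # accumulate calibration data as it comes in
fitter.fit()                                           # retrain on everything added so far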
        pass

    @abstractmethod
    def discriminate(self) -> Result:
I think this should accept a result and a list of schedules. It might also be good to accept a list of MemorySlots to discriminate.
        return self._fitted

    @fitted.setter
    def fitted(self, value):
This should be controlled by the class itself and not manipulated by the user. Consequently, the class should set self._fitted directly and the setter should be removed.
    used for discrimination, e.g. to convert IQ data into 0 and 1's."""

    @abstractmethod
    def __init__(self, result: Result, cal_circuits: list, data_circuits: list,
For each cal_circuit I think you will need the associated state that it is a calibration circuit for. This will be an integer or a string of length memory_slots (integer_state = bin(state, 2)).
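(For illustration: labelling a calibration circuit with the state it prepares, as a bit string whose length equals the number of memory slots. Variable names are made up.)

memory_slots = 2
state = 2                                                     # e.g. the circuit prepares |10>
expected_state = format(state, '0{}b'.format(memory_slots))   # '10'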
""" | ||
Args: | ||
result: the Result obtained from e.g. backend.run().result() | ||
cal_circuits: a list of str or QuantumCircuit or Schedule or int or None |
I'm not sure we are currently able to relate a given circuit to a pulse schedule. Will have to think on this. It might be safer to write cal_schedule, and similarly for every mention of circuit within this class.
        self.discriminator_parameters = discriminator_parameters

    @abstractmethod
    def _extract_calibration(self):
This behaviour can likely be made non-abstract and implemented within this class as it should be similar for every discriminator. It should essentially be formalized as fitting data X (of shape [n_shots, n_qubits]) to states Y (of shape [n_qubits]).
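(For illustration: a minimal, self-contained sketch of that X/Y layout for a single qubit using scikit-learn's LDA. The data is made up; this is not the PR's implementation.)

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

n_shots = 512
x_data = np.concatenate([np.random.normal(-1.0, 0.3, (n_shots, 2)),   # IQ shots for |0>
                         np.random.normal(+1.0, 0.3, (n_shots, 2))])  # IQ shots for |1>
y_data = np.array([0] * n_shots + [1] * n_shots)                      # expected state per shot

lda = LinearDiscriminantAnalysis().fit(x_data, y_data)
labels = lda.predict(x_data)   # discriminated 0/1 label for every shot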
* Modified the linear discriminator to inherit from BaseFitter.
@taalexander @dcmckayibm What does the
* Unit test test_iq_discriminator.py is now implemented. * Changed _excpected_state to a dict.
@dcmckayibm @taalexander Please comment on the most recent commit which implements a functional
One question:
Good point, but I think you want to provide the qasm circuits (which can be converted to the user's desired pulses with qasm->pulse). [edit: I looked at your test, good idea to just use the meas mitigation circuits]
Structure looks good to me
I've found it useful with the other ignis PRs to make a notebook in /examples/ that can serve the dual purpose of testing and as the eventual tutorial (e.g. look at the logging PR).
* Added a Jupyter notebook to exemplify the IQ discriminator.
* Generalized LinearIQDiscriminationFitter to IQDiscriminationFitter. * LinearIQDiscriminationFitter now inherits from IQDiscriminationFitter. * Disabled a pylint error in test_state_tomography.py. * Added scikit-learn>=0.17 to requirements.
* Added the method extract_xdata in BaseFitter. * Added the method extract_xdata in IQDiscriminationFitter. * extract_xdata retrieves the xdata from a result in a format that the discriminator can use. * Added the filter to the unit test.
* Move unit test for filter to case with self.qubits = [0, 1].
@dcmckayibm and @taalexander do you guys want to have a look at this? I'd like a bit of feedback/ideas on how to improve it. I think we can then merge. I'm also working on a NB that uses the hardware (instead of making data up out of thin Aer) but am running into a snag with the meas_level the backend returns (see qiskit-pulse slack channel).
This is coming along nicely. Do you think you could:
- include a check to make sure that for all qubits to be discriminated there is at least one excited state and one ground state in expected_states for the respective qubits, in order to make sure there is sufficient data to form a full calibration basis.
- Add tests for an:
  - undercomplete basis (error should be raised)
  - overcomplete basis.
- Add types in either the docstring or signature for all methods.
@@ -16,6 +16,9 @@ test/python/*.prof
# dotenv
.env

# Jupyter notebooks checkpoints
qiskit/ignis/examples/.ipynb_checkpoints/
Should the examples be pure python examples rather than a notebook? I think notebooks should be reserved for qiskit-tutorials.
For ignis I put these in the PR for testing then we pull them out into tutorials after (a way to do both concurrently without having to deal with branches in multiple repos)
@@ -252,6 +253,10 @@ def _calc_data(self):
        if self._ydata[serieslbl][-1]['std'][-1] == 0:
            self._ydata[serieslbl][-1]['std'][-1] = 1e-4

    def extract_xdata(self, result: ExperimentResult):
I think this method is missing an implementation?
This is fitter dependent and came about when trying to write a base filter class (I'm also not sold on the filter concept). I wrote DiscriminationFilter with the idea in mind that it could be a first stab at a BaseFilter class. I realized that I needed something like extract_xdata when implementing the filter. Here's the rationale: BaseFitter is given the xdata (makes sense when you think of a T1 fit). However, this is not the case for the IQDiscriminationFitter: it needs to extract the xdata from the meas_level=1 data (see function _calc_data). Thus, if the filter wants to use the discriminator's fit function it needs to know how to package the xdata so that it can be given to the predict function of the fitter. Hence the unimplemented method in BaseFitter, which would suggest BaseFitter be abstract. I think a more elegant solution would be to forego the filter class and simply have a predict(...) method in the fitter, à la sklearn.
@@ -0,0 +1,288 @@
{
Please see the above comment about example formats.
"def qubit_shot(i0, q0, std):\n", | ||
" return [i0 + random.gauss(0, std), q0 + random.gauss(0, std)]\n", | ||
"\n", | ||
"def create_shots(i0, q0, shots, qubits):\n", |
Could this just return the ExperimentResultData result object for now, until the simulator supports measurement level 1? This would avoid the conversion steps below which could be confusing.
Also, given you likely need such a result generation function to write tests, could you reuse it by just importing this function in the example?
from qiskit.result.models import ExperimentResultData


class DiscriminationFilter:
This is a personal opinion that goes against the Ignis convention, but I don't like the idea of the Filter. Fitters are already stateful as they contain the fitting data. As far as I know the only purpose a Filter serves is to produce a new discriminated measurement result. It seems that introducing a new class just for this is a burden on the user in terms of signal to noise ratio. For example, looking at the fitters in scipy, they provide a single class with methods fit and predict. Thoughts @dcmckayibm?
The idea is that a Fitter operates on calibration data to produce some type of characterization and a Filter is an object that transforms results into results. Not every Fitter has need for a Filter (e.g. T1), which is one rationale for a new class. Also, once created, the Filter can be passed around independently of the Fitter. This is already how it's done for the measurement correction (right or wrong).
I understand this is how it is done currently 😄 and my above comment is non-blocking. The above is my off-topic opinion.
Another option would be having the Filter be an object that the Fitter has a reference to and has a method discriminate. It could still be returned by the fit method. Since the Filter class is the same for different discriminator implementations, there is really no need for the user to ever have to see or use it.
I just worry that sometimes from a user's perspective too many objects can be cumbersome and hurt the signal to noise ratio when it comes to understanding the code. Consider how simple sklearn.discriminant_analysis is to use, which accomplishes the same task without a Filter object.
I'm also not sold on the Filter, at least in the case of discrimination. Do we really need a Filter class? Can we not have all the needed functionality in Fitter?
Fitter operates on calibration data, Filter on external data. Do we really need it? That could be asked of any object-oriented class distinction.
My preferred way of using the discriminator would be à la scikit-learn:

discriminator = LinearIQDiscriminator(cal_results, expected_states, ...)  # fit() done in here
new_results = discriminator.predict(results)

If I understand the fitter/filter idea properly, the filter requires the user to type in an extra line of code:

discriminator = LinearIQDiscriminator(cal_results, expected_states, ...)  # fit() done in here
filter = DiscriminationFilter(discriminator)
new_results = filter.predict(results)

and when you look under the hood you realize that the filter was more or less the discriminator all along. I think introducing an extra class to handle external data, which is very much like calibration data but without the expected states, is unnecessary.
        if isinstance(cal_results, list) and isinstance(expected_states, list):
            if len(cal_results) != len(expected_states):
                msg = 'Inconsistent number of results and expected results.'
                raise QiskitError(msg)
You can just place the msg directly in the error.
        # Sanity checks
        if isinstance(cal_results, list) and isinstance(expected_states, list):
            if len(cal_results) != len(expected_states):
                msg = 'Inconsistent number of results and expected results.'
Suggested change:
-                msg = 'Inconsistent number of results and expected results.'
+                msg = 'Number of input results and assigned states must be equal.'
            cal_results: calibration results, list of qiskit.Result or
                qiskit.Result
            qubits: the qubits for which we want to discriminate.
            circuit_names: The names of the circuits in cal_results.
This should be named schedules as circuits do not support measurement level 1, and it should support passing the Schedule directly (the name string should also be acceptable). Also, this should be an optional list of schedules and should default to all experiments in cal_results if not provided.
We can do schedules too. I take it we would then need to loop over the schedules and extract the name from each schedule and the name would then be the expected state?
get_memory already does this for you. You can just pass the Schedule instance to it. We still require the expected state to be passed somewhere else.
Got it. The newest push accounts for this now.
                cal_results. If cal_results is a Result and not a list then
                expected_states should be a string or a float or an int.
            discriminant: a discriminant from e.g. scikit learn.
            description: a string describing the discriminator.
Would this be more appropriately set in the inheriting class, rather than be part of the initialization method?
    return [i_center + random.gauss(0, std),
            q_center + random.gauss(0, std)]


def create_shots(i_center, q_center):
See above comment about placing this method somewhere in a test utils module for reuse in examples.
@taalexander Thanks for reviewing. I'm not sold on the idea of testing for a complete basis. This may restrict the user too much. Consider the following cases:
Ok, those are all good reasons 😄. How do you think we should handle the input reference state format? An integer representation only works for |0>, |1> space but this is the typical computational basis. If an integer is provided assume a qubit representation, or only accept strings? Also, it is important to remember that the returned information in
From the discriminator's point of view
It's looking good @eggerdj, a couple of things as noted in the review and:
- In the example notebook could you make it a bit clearer how to call the discriminator and get a result back from it; right now it is hidden in the plotting method.
qiskit/ignis/utils.py
@@ -0,0 +1,57 @@
# -*- coding: utf-8 -*-
Should this be placed in the test folder instead?
Yes. I'll move it.
        result.meas_level = 2

        counts = Obj.from_dict(self.count(y_data))
        result.data = ExperimentResultData(counts=counts)
Could you also store the resulting individual shot results, since you already have this information available in y_data?
I added y_data to ExperimentResultData by doing memory=y_data. This works with get_memory(...) as shown by the unit test.
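(For illustration, putting those two pieces together, the filter would end up doing something like the lines below. This mirrors the snippet quoted above; Obj, ExperimentResultData, result, y_data and self.count come from the surrounding filter code.)

counts = Obj.from_dict(self.count(y_data))          # aggregate the discriminated labels
result.data = ExperimentResultData(counts=counts,
                                   memory=y_data)   # per-shot labels, retrievable via get_memory(...)
result.meas_level = 2                               # the result now looks like level-2 data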
""" | ||
return self._fit_fun.predict(x_data) | ||
|
||
def extract_xdata(self, result: ExperimentResult): |
How does this handle the case of meas_level=1, meas_return='single'?
Good catch. It handles meas_return='single' no problem but will fail when meas_return='avg'. I updated the code so that the discriminator will work with single and avg.
            expected_states: a list that should have the same length as
                cal_results. If cal_results is a Result and not a list then
                expected_states should be a string or a float or an int.
            discriminant: a discriminant from e.g. scikit learn.
The other option would be to create an IQDiscriminationFitter and leave fit_fun and discriminate as abstract methods (remove discriminant as a parameter for the base fitter); you could then implement these separately, or create a ScikitIQDiscriminationFitter that implements these methods. I think this might be the best of both worlds.
""" | ||
|
||
def __init__(self, cal_results: Union[Result, List[Result]], | ||
qubits: List[int], expected_states: Union[List[str], str], |
Since this is really operating on input data from the memory_slots (which may be out of order) rather than input qubits, isn't this something more akin to a training_data_mask? I.e., consider the case of multiple measurements being made on the same qubit. The input it seems would really need to be something like {qubit_idx: [memoryslot_idx0, memoryslot_idx1, ...]}.
I apologize if one of your earlier proposals handled this case and I overlooked it.
I think we talked about this on slack and mentioned a GroupDiscriminator. The current behaviour of ScikitIQDiscriminationFitter (and its children) is:
- There is one discriminator (e.g. an LDA) in the class.
- This discriminator does the following:
  a. Select the IQ data out of the memory for the self._qubits qubits (i.e. a data mask really).
  b. Discriminate this data. This means assign a label, i.e. a str, to each IQ point.

That's it, the discriminator does nothing more. There is no memory assignment to a Result instance. The level 2 memory is assigned in the filter class.
Maybe we should implement a GroupIQDiscriminator and a corresponding filter class that has as many ScikitIQDiscriminationFitters as needed and has a data structure that maps qubit indices to memory slots and says what training_data_mask should be used for which qubit?
One justification for this is that it keeps ScikitIQDiscriminationFitter conceptually simple: it has one discriminator that maps some IQ data to a label. The complexity of qubit indices, memslots and training data masks would then be in GroupIQDiscriminator.
I think the issue I was trying to hint at is that there is no guarantee that memory_slot[0] is where the result of qubit[0] is stored. We must either choose a convention or make this configurable. If we think of this as a fitting problem, one could provide a data mask for which memory slots to select, and then target states (or qubits) for which states to discriminate.
The grouped discriminator could still go on top of this (I think).
OK. I think I see what you are getting at: get_memory(...) returns np.ndarray[shots, memory_slots] for single-shot level 1 data and there is no guarantee that, e.g., get_memory(...)[:, 0] corresponds to the IQ data of qubit 0. Correct?
Is this behaviour intended? How do we know where to find the data for qubit X then?
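(For illustration: a sketch of the data-mask idea under the convention in question, i.e. assuming memory slot i does hold the measurement of qubit i. The helper name is made up.)

import numpy as np

def select_iq(memory, memory_slots):
    """memory: complex array of shape (shots, n_memory_slots) from get_memory(...)."""
    selected = memory[:, memory_slots]                      # data mask over memory slots
    return np.column_stack([selected.real, selected.imag])  # features for the discriminator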
* Added an example showing how to use the discriminate method of the discriminator.
…ters.py line 85. * Added corresponding unit test that checks the length of the memory using get_memory.
…le avg data. * Added corresponding unit test.
… data. * Added function _add_ydata to avoid code duplication. * Refactored extract_xdata some more for readability.
ignis.measurement will require an __init__.py module. Alternatively, could discrimination be considered characterization, and should discrimination then fall within that module?
To be consistent the
* Reverted BaseFitter back to its original state.
* Refactored filter.apply to comply with the new discriminator methodology. * Added fitted property and expected states to BaseDiscriminatorFitter.
I think this looks much better this way 💯.
        return self._fitted

    @abstractmethod
    def _scale_data(self, xdata: List[List[float]], refit=False):
Maybe a standard implementation for this should be provided?
The reason I did not implement one is to avoid tying BaseDiscriminationFitter to sklearn. I'm fine with the idea of implementing one using sklearn. What do you think?
I think it's fine to use sklearn as a resource to do the scaling. In this case you're just saying the default scaling method is to shift the mean to zero and scale the variance to unit variance. You could have implemented this by hand, or use sklearn.
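(For illustration: a minimal sketch of that default scaling using sklearn; hand-rolling it with numpy would work just as well and would avoid the dependency in the base class. The data is made up.)

import numpy as np
from sklearn.preprocessing import StandardScaler

xdata = np.array([[0.1, -1.2], [0.3, -0.9], [-0.2, -1.1]])   # made-up IQ points

scaler = StandardScaler()
xdata_scaled = scaler.fit_transform(xdata)   # zero mean, unit variance per feature
# at predict time, reuse the same scaler on new data: scaler.transform(new_xdata)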
I moved the implementation of _scale_data to BaseDiscriminationFitter.
        Returns:
            x data as a list of lists.
        """
        pass
Perhaps standard implementations of this and get_ydata should be provided, or are you trying to handle the case of discriminating based on measurement level 0 data? If so I think this is fine.
I wanted to leave the base discriminator independent of the data. Different backends and technologies might handle the level 1 data differently. E.g. ions vs. SC qubits: the format of the ndarray from get_data will be the same but there might be an extra data processing and formatting step between get_memory(...) and the discriminator's xdata that is dependent on the technology/backend.
from qiskit.pulse.schedule import Schedule


class BaseDiscriminationFitter:
Maybe this should be in something like fitter.py, or the implemented discriminator should be moved to a new module?
This is something I wanted to get to. How about having BaseDiscriminationFitter in fitters.py and IQDiscriminationFitter, along with the linear and quadratic implementations, in iq_discrimination.py?
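(For illustration, that split would give a layout roughly like the one below. Only fitters.py and iq_discrimination.py are named in the discussion; the discriminator sub-package name and filters.py are guesses.)

qiskit/ignis/measurement/
    __init__.py
    discriminator/
        __init__.py
        fitters.py            # BaseDiscriminationFitter
        iq_discrimination.py  # IQDiscriminationFitter, linear and quadratic discriminators
        filters.py            # DiscriminationFilter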
* Added a pass statement to get_ydata in base class. * Made Jupyter notebook compatible with new changes.
…where. * Implemented add_data in BaseDiscriminationFitter. * Implemented _add_expected_states and _get_schedules, class internal methods. * Added expected_sates and schedules as properties of BaseDiscriminator.
* Moved implementation of _scale_data to BaseDiscriminationFitter.
The new test folders were missing
Looks good, can we move the plotting code to the fitter class? For example see https://github.com/Qiskit/qiskit-ignis/blob/e9247de37801f11ad5bc0a0f129b59573f567da2/qiskit/ignis/mitigation/measurement/fitters.py#L203. Can we also add a schedules/circuits.py which generates the schedule? Right now that code is in the example notebook. Following the example of https://github.com/Qiskit/qiskit-ignis/blob/master/qiskit/ignis/mitigation/measurement/circuits.py. The function
Even better for the schedule generation would be to modify the existing measurement calibration circuit code and give an option to return the schedule.
* Replaced state_labels with _ to get rid of linter errors. * Changed import order.
Summary
Ignis does not have a discriminator framework to go from measurement level 1 to measurement level 2.
Details and comments
This WIP PR introduces a way we might implement a discriminator in ignis. Main contributions are: