1.1 Entanglement Entropy by Hadamard Test#
Multiple Experiments#
Consider a scenario where you have multiple circuits you want to run at once.
Calling .measure()
on them one by one would be inefficient,
not to mention that you would also need to call .analyze()
for each of their post-processing steps.
Here we provide a more efficient way to solve this problem, which is where the true power of Qurrium as an experiment management toolkit shows.
a. Import the instances#
from qurry import EntropyMeasure
experiment_hadamard = EntropyMeasure(method="hadamard")
b. Preparing quantum circuit#
Prepare circuits and add them to the .waves
container for later use.
from qiskit import QuantumCircuit
from qurry.recipe import TrivialParamagnet, GHZ
def make_neel_circuit(n):
    qc = QuantumCircuit(n)
    for i in range(0, n, 2):
        qc.x(i)
    return qc


for i in range(2, 8, 2):
    experiment_hadamard.add(TrivialParamagnet(i), f"trivial_paramagnet_{i}")
    experiment_hadamard.add(GHZ(i), f"ghz_{i}")
    experiment_hadamard.add(make_neel_circuit(i), f"neel_{i}")
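The Néel circuit above flips every other qubit, preparing the staggered |1010…⟩ pattern. A minimal pure-Python check of which sites receive an X gate (the helper name below is illustrative, not part of Qurrium):

```python
def neel_flipped_sites(n: int) -> list[int]:
    # make_neel_circuit applies an X gate on every even-indexed qubit
    return list(range(0, n, 2))


print(neel_flipped_sites(6))  # [0, 2, 4]
```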
experiment_hadamard.waves
WaveContainer({
'trivial_paramagnet_2': <qurry.recipe.simple.paramagnet.TrivialParamagnet object at 0x7f6ab3aec590>,
'ghz_2': <qurry.recipe.simple.cat.GHZ object at 0x7f6ab3aec6e0>,
'neel_2': <qiskit.circuit.quantumcircuit.QuantumCircuit object at 0x7f6ab3ae97f0>,
'trivial_paramagnet_4': <qurry.recipe.simple.paramagnet.TrivialParamagnet object at 0x7f6ab3b82c10>,
'ghz_4': <qurry.recipe.simple.cat.GHZ object at 0x7f6ab3b82d50>,
'neel_4': <qiskit.circuit.quantumcircuit.QuantumCircuit object at 0x7f6ab3ae9a90>,
'trivial_paramagnet_6': <qurry.recipe.simple.paramagnet.TrivialParamagnet object at 0x7f6ab3b82e90>,
'ghz_6': <qurry.recipe.simple.cat.GHZ object at 0x7f6ab3b82fd0>,
'neel_6': <qiskit.circuit.quantumcircuit.QuantumCircuit object at 0x7f6ab3ae98d0>})
c. Execute multiple experiments at once#
Let’s demonstrate the true power of Qurrium.
from qurry.qurrent.hadamard_test.arguments import EntropyMeasureHadamardMeasureArgs
Prepare a configuration list for multiple experiments with the following parameters:
class EntropyMeasureHadamardMeasureArgs(total=False):
    """Output arguments for :meth:`output`."""

    shots: int
    """Number of shots."""
    tags: Optional[tuple[str, ...]]
    """The tags to be used for the experiment."""
    wave: Optional[Union[QuantumCircuit, Hashable]]
    """The key or the circuit to execute."""
    degree: Optional[Union[int, tuple[int, int]]]
    """The degree range."""
config_list: list[EntropyMeasureHadamardMeasureArgs] = [
    {
        "shots": 1024,
        "wave": f"{wave_names}_{i}",
        "degree": i // 2,
        "tags": (wave_names, f"size_{i}", f"system_range_{i//2}"),
    }
    for _ in range(10)
    for i in range(2, 8, 2)
    for wave_names in ["trivial_paramagnet", "ghz", "neel"]
]
print(len(config_list))
90
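The triple comprehension yields 10 repeats × 3 sizes × 3 circuit families = 90 configurations. The same counting can be sketched with itertools.product instead of nested for clauses (the name sketch_configs is ours, for illustration):

```python
from itertools import product

sizes = range(2, 8, 2)  # system sizes 2, 4, 6
families = ["trivial_paramagnet", "ghz", "neel"]

# One configuration per (repeat, size, family), mirroring the comprehension above
sketch_configs = [
    {
        "shots": 1024,
        "wave": f"{name}_{i}",
        "degree": i // 2,
        "tags": (name, f"size_{i}", f"system_range_{i // 2}"),
    }
    for _, i, name in product(range(10), sizes, families)
]
print(len(sketch_configs))  # 90 = 10 repeats * 3 sizes * 3 families
```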
The .multiOutput
method returns the ID of this multimanager
instance,
which can be used to retrieve the results and post-process them.
Each multimanager
exports its experiments to a folder you can specify
via the save_location
parameter, which defaults to the current working directory
where Python is executed.
It creates a folder named after the multimanager
instance,
and inside it a folder storing each experiment's data.
The first export happens during the building process; you can skip it by setting skip_build_write=True
to save time.
After all experiments are executed, a second export takes place,
which can also be skipped by setting skip_output_write=True
to produce no output files.
multi_exps1 = experiment_hadamard.multiOutput(
    config_list,
    summoner_name="qurrent.hadamard_test",  # you can name it whatever you want
    multiprocess_build=True,
    # Use multiprocessing to build the experiments;
    # faster, but it will occupy all CPU cores
    skip_build_write=True,
    # Skip writing the experiments to files during the build
    save_location=".",
    # Save the experiment files in the current directory
    multiprocess_write=True,
    # Write the experiment files using multiprocessing
)
multi_exps1
| MultiManager building...
| Write "qurrent.hadamard_test.001", at location "qurrent.hadamard_test.001"
| MultiOutput running...
| Auto analysis is called, running analysis...
| Export multimanager...
| Export multi.config.json for 6024bbc2-99ea-4bb8-be8f-314a46917107
| Exporting qurrent.hadamard_test.001/qurryinfo.json...
| Exporting qurrent.hadamard_test.001/qurryinfo.json done.
'6024bbc2-99ea-4bb8-be8f-314a46917107'
You can check the result of the multiOutput
we just executed by accessing .multimanagers
experiment_hadamard.multimanagers
MultiManagerContainer(num=1, {
"6024bbc2-99ea-4bb8-be8f-314a46917107":
<MultiManager(name="qurrent.hadamard_test.001", jobstype="local", ..., exps_num=90)>,
})
experiment_hadamard.multimanagers[multi_exps1]
<MultiManager(id="6024bbc2-99ea-4bb8-be8f-314a46917107",
name="qurrent.hadamard_test.001",
tags=(),
jobstype="local",
pending_strategy="tags",
last_events={
'output.001': '2025-06-26 11:47:35',
'auto_report': '2025-06-26 11:47:36',},
exps_num=90)>
d. Get all post-processing results#
Since .analyze
in the Hadamard test doesn't require any arguments, as we mentioned in Basic Usage, it is executed automatically.
So you can access the results like the following:
print("| Available results:")
for k, v in (
    experiment_hadamard.multimanagers[multi_exps1]
    .quantity_container["auto_report"]
    .items()
):
    print("| -", k, "with length", len(v))
| Available results:
| - ('trivial_paramagnet', 'size_6', 'system_range_3') with length 10
| - ('ghz', 'size_6', 'system_range_3') with length 10
| - ('neel', 'size_6', 'system_range_3') with length 10
| - ('trivial_paramagnet', 'size_4', 'system_range_2') with length 10
| - ('ghz', 'size_4', 'system_range_2') with length 10
| - ('neel', 'size_4', 'system_range_2') with length 10
| - ('trivial_paramagnet', 'size_2', 'system_range_1') with length 10
| - ('ghz', 'size_2', 'system_range_1') with length 10
| - ('neel', 'size_2', 'system_range_1') with length 10
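The quantity container groups results by their tag tuple, which is why the 90 experiments above collapse into 9 groups of 10. The grouping idea can be sketched with a plain defaultdict (the names below are illustrative stand-ins, not Qurrium internals):

```python
from collections import defaultdict

# Stand-ins for the 90 analysis results and their tag tuples
flat_results = [
    ((name, f"size_{i}", f"system_range_{i // 2}"), {"purity": 1.0})
    for _ in range(10)
    for i in (2, 4, 6)
    for name in ("trivial_paramagnet", "ghz", "neel")
]

# Group results under their tag tuple, as the quantity container does
by_tags = defaultdict(list)
for tags, quantity in flat_results:
    by_tags[tags].append(quantity)

print(len(by_tags))  # 9 distinct tag tuples
print(len(by_tags[("ghz", "size_4", "system_range_2")]))  # 10 results per group
```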
An example of the content of quantity_container:
experiment_hadamard.multimanagers[multi_exps1].quantity_container["auto_report"][
    ("trivial_paramagnet", "size_4", "system_range_2")
][:2]
[{'purity': 1.0,
'entropy': np.float64(-0.0),
'input': {},
'header': {'serial': 0, 'datetime': '2025-06-26 11:47:36', 'log': {}}},
{'purity': 1.0,
'entropy': np.float64(-0.0),
'input': {},
'header': {'serial': 0, 'datetime': '2025-06-26 11:47:36', 'log': {}}}]
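The reported entropy is the second-order Rényi entropy obtained from the purity, so a purity of 1.0 yields -0.0 as shown above. A minimal sketch of this relation, assuming the base-2 logarithm conventional for qubit systems:

```python
from math import log2


def renyi_2_entropy(purity: float) -> float:
    # S_2 = -log2(Tr[rho_A^2]); base 2 assumed, as is conventional for qubits
    return -log2(purity)


print(renyi_2_entropy(1.0))  # -0.0 (pure reduced state, no entanglement)
print(renyi_2_entropy(0.5))  # 1.0 (e.g. one qubit of a Bell pair)
```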
e. Run post-processing at once with specific analysis arguments#
First, we need to get each experiment's ID in the multimanager
instance.
expkeys_of_multi_exps1 = list(
    experiment_hadamard.multimanagers[multi_exps1].exps.keys()
)
print("| The number of exp_id:", len(expkeys_of_multi_exps1))
print("| First 3 experiment keys:")
expkeys_of_multi_exps1[:3]
| The number of exp_id: 90
| First 3 experiment keys:
['bdf23d75-dcb2-40e0-b9d6-23865556b39a',
'40242fdc-48b8-4968-a585-16c728be1c71',
'85fe8932-7d6b-4588-a5a5-379317481c64']
If you want to run post-processing only for some specific experiments, for example the first 3 experiments we got from the
multimanager
instance:
experiment_hadamard.multiAnalysis(
    summoner_id=multi_exps1,
    analysis_name="first_3",
    skip_write=True,
    multiprocess_write=False,
    specific_analysis_args={k: idx < 3 for idx, k in enumerate(expkeys_of_multi_exps1)},
    # `idx < 3` is True only for the first 3 experiments;
    # a False value skips the analysis for that experiment
)
| "first_3.001" has been completed.
'6024bbc2-99ea-4bb8-be8f-314a46917107'
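The specific_analysis_args mapping above pairs each experiment key with a boolean: True runs .analyze() with default arguments, False skips that experiment. The selection logic can be sketched without Qurrium (the exp_… keys below are stand-ins for the real UUIDs):

```python
# Stand-ins for the 90 experiment UUID keys
expkeys = [f"exp_{i:02d}" for i in range(90)]

# True -> analyze with default arguments; False -> skip this experiment
selection = {k: idx < 3 for idx, k in enumerate(expkeys)}
print(sum(selection.values()))  # 3 experiments will be analyzed
```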
print("| Available results:")
print(
    "| length:",
    sum(
        len(v)
        for v in experiment_hadamard.multimanagers[multi_exps1]
        .quantity_container["first_3.001"]
        .values()
    ),
)
| Available results:
| length: 3
Or manually specify all the analysis arguments for each experiment.
experiment_hadamard.multiAnalysis(
    summoner_id=multi_exps1,
    skip_write=False,
    analysis_name="all_manual",
    multiprocess_write=True,
    specific_analysis_args={k: {} for k in expkeys_of_multi_exps1},
)
| "all_manual.001" has been completed.
| Export multimanager...
| Export multi.config.json for 6024bbc2-99ea-4bb8-be8f-314a46917107
| Exporting qurrent.hadamard_test.001/qurryinfo.json...
| Exporting qurrent.hadamard_test.001/qurryinfo.json done.
'6024bbc2-99ea-4bb8-be8f-314a46917107'
print("| Available results:")
print(
    "| length:",
    sum(
        len(v)
        for v in experiment_hadamard.multimanagers[multi_exps1]
        .quantity_container["all_manual.001"]
        .values()
    ),
)
| Available results:
| length: 90
All multiAnalysis
results#
experiment_hadamard.multimanagers[multi_exps1].quantity_container.keys()
dict_keys(['auto_report', 'first_3.001', 'all_manual.001'])
f. Read exported multimanager data#
multi_exps1_reads = experiment_hadamard.multiRead(
    save_location=".",
    summoner_name="qurrent.hadamard_test.001",
)
| Retrieve qurrent.hadamard_test.001...
| at: qurrent.hadamard_test.001
Post-Process Availabilities and Version Info#
from qurry.process import AVAIBILITY_STATESHEET
AVAIBILITY_STATESHEET
| Qurrium version: 0.13.0
---------------------------------------------------------------------------
### Qurrium Post-Processing
- Backend Availability ................... Python Cython Rust JAX
- randomized_measure
- entangled_entropy.entropy_core_2 ....... Yes Depr. Yes No
- entangle_entropy.purity_cell_2 ......... Yes Depr. Yes No
- entangled_entropy_v1.entropy_core ...... Yes Depr. Yes No
- entangle_entropy_v1.purity_cell ........ Yes Depr. Yes No
- wavefunction_overlap.echo_core_2 ....... Yes Depr. Yes No
- wavefunction_overlap.echo_cell_2 ....... Yes Depr. Yes No
- wavefunction_overlap_v1.echo_core ...... Yes Depr. Yes No
- wavefunction_overlap_v1.echo_cell ...... Yes Depr. Yes No
- hadamard_test
- purity_echo_core ....................... Yes No Yes No
- magnet_square
- magnsq_core ............................ Yes No Yes No
- string_operator
- strop_core ............................. Yes No Yes No
- classical_shadow
- rho_m_core ............................. Yes No No Yes
- utils
- randomized ............................. Yes Depr. Yes No
- counts_process ......................... Yes No Yes No
- bit_slice .............................. Yes No Yes No
- dummy .................................. Yes No Yes No
- test ................................... Yes No Yes No
---------------------------------------------------------------------------
+ Yes ...... Working normally.
+ Error .... Exception occurred.
+ No ....... Not supported.
+ Depr. .... Deprecated.
---------------------------------------------------------------------------
by <Hoshi>