Minor embedding for quantum annealing with reinforcement learning
Nembrini R.; Ferrari Dacrema M.; Cremonesi P.
2026-01-01
Abstract
Quantum Annealing (QA) is a quantum computing paradigm for solving combinatorial optimization problems formulated as Quadratic Unconstrained Binary Optimization (QUBO) problems. An essential step in QA is minor embedding, which maps the problem graph onto the sparse topology of the quantum processor and then adjusts the problem weights. The process of mapping the problem variables to the hardware is computationally expensive and scales poorly with increasing problem size and hardware complexity. Existing heuristics are often developed for specific problem graphs or hardware topologies and are difficult to generalize. To address this limitation, we explore the use of machine learning methods, which allow a much greater degree of flexibility, in particular Reinforcement Learning (RL). RL offers a promising alternative by treating minor embedding as a sequential decision-making problem, in which an agent learns to construct minor embeddings by iteratively mapping the problem variables to the hardware qubits. We propose an RL-based approach to minor embedding using a Proximal Policy Optimization agent, testing its ability to embed both fully connected and randomly generated problem graphs on two hardware topologies, Chimera and Zephyr. The results show that our agent consistently produces valid minor embeddings, even when they span more than a thousand qubits, in particular on the more modern Zephyr topology. Our approach also scales to moderate problem sizes and adapts well to different graph structures, highlighting RL's potential as a flexible and general-purpose framework for minor embedding in QA, while also pointing to limitations that remain to be addressed, for example reducing the number of qubits required.
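The abstract's notion of a "valid minor embedding" can be made concrete: each problem variable maps to a chain of hardware qubits, chains must be disjoint and internally connected, and every problem edge must be realizable by at least one hardware edge between the corresponding chains. The following is a minimal, self-contained sketch of that validity check using plain Python; the toy hardware graph, problem graph, and embedding below are illustrative inventions, not the paper's actual topologies or method.

```python
# Hedged sketch: validity conditions for a minor embedding, with toy data.
# Everything here (graphs, chains, function name) is an illustrative assumption.

def is_valid_embedding(embedding, problem_edges, hardware_edges):
    """Check the three standard minor-embedding conditions:
    1. chains are pairwise disjoint,
    2. each chain induces a connected subgraph of the hardware,
    3. every problem edge has a hardware edge between its two chains.
    """
    hw = {frozenset(e) for e in hardware_edges}

    # 1. Disjoint chains: no hardware qubit belongs to two chains.
    all_qubits = [q for chain in embedding.values() for q in chain]
    if len(all_qubits) != len(set(all_qubits)):
        return False

    # 2. Connectivity: breadth-first search within each chain.
    for chain in embedding.values():
        chain = set(chain)
        seen, frontier = set(), [next(iter(chain))]
        while frontier:
            q = frontier.pop()
            if q in seen:
                continue
            seen.add(q)
            frontier += [r for r in chain if frozenset((q, r)) in hw]
        if seen != chain:
            return False

    # 3. Every problem coupling is representable between the two chains.
    for u, v in problem_edges:
        if not any(frozenset((a, b)) in hw
                   for a in embedding[u] for b in embedding[v]):
            return False
    return True

# Toy hardware: a 2x4 grid of qubits 0..7 (not Chimera or Zephyr).
hardware_edges = [(i, i + 1) for i in (0, 1, 2, 4, 5, 6)] + \
                 [(i, i + 4) for i in range(4)]
# Problem: a triangle (K3) over variables a, b, c.
problem_edges = [("a", "b"), ("b", "c"), ("a", "c")]
# Variable "a" needs a two-qubit chain so that all three couplings exist.
embedding = {"a": [0, 4], "b": [1], "c": [5]}

print(is_valid_embedding(embedding, problem_edges, hardware_edges))  # True
```

On a grid, no single qubit is adjacent to all of `b` and `c` here, which is why `a` occupies the chain `[0, 4]`; this is exactly the chain-length overhead the abstract identifies as a limitation to reduce.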
| File | Size | Format | |
|---|---|---|---|
| minor-embedding-for-quantum-annealing-with-reinforcement-learning.pdf (open access, Publisher's version) | 12.19 MB | Adobe PDF | View/Open |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.