
[ml_tools] A train config used more than once overwrites previous results. #569

Closed
debrevitatevitae opened this issue Sep 16, 2024 · 0 comments · Fixed by #593
Labels
bug Something isn't working

Comments

debrevitatevitae (Collaborator) commented:

Short description

If we train a model more than once in a row with the same training settings (i.e. the same TrainConfig), each training run overwrites the results of the previous ones.

What is the expected result?

With the option create_subfolder_per_run=True, training a second time with the same TrainConfig should create a new subdirectory in which the results of the second run are stored. In other words, the second training run should not overwrite the results of the first.
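To make the expected behaviour concrete, here is a minimal, self-contained sketch (not qadence's actual code; the helper name new_run_dir is hypothetical) of what create_subfolder_per_run=True should do: each run gets a fresh subdirectory under the configured folder, so earlier results survive.

```python
# Illustrative sketch of the expected create_subfolder_per_run behaviour.
# NOT qadence's implementation -- new_run_dir is a hypothetical helper.
import tempfile
import uuid
from pathlib import Path


def new_run_dir(base: str) -> Path:
    """Create and return a unique subdirectory of `base` for one run."""
    run_dir = Path(base) / uuid.uuid4().hex[:8]
    run_dir.mkdir(parents=True, exist_ok=False)
    return run_dir


base = tempfile.mkdtemp()  # stands in for folder="dev"

first = new_run_dir(base)
(first / "results.txt").write_text("run 1")

second = new_run_dir(base)
(second / "results.txt").write_text("run 2")

# Both runs keep their own results; nothing is overwritten.
assert first != second
```

Under this scheme, rerunning training with an identical TrainConfig simply produces a sibling subdirectory rather than clobbering the previous one.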

What is the actual result?

In the situation above, the results of the first training are overwritten.

Steps/Code to reproduce

from itertools import count

import torch
from qadence import (
    feature_map,
    hamiltonian_factory,
    hea,
    QNN,
    QuantumCircuit,
    TrainConfig,
    Z,
)
from qadence.ml_tools import to_dataloader, train_with_grad
from torch.utils.data import DataLoader


def dataloader(data_size: int = 25, batch_size: int = 5, infinite: bool = True) -> DataLoader:
    x = torch.linspace(0, 1, data_size).reshape(-1, 1)
    y = torch.sin(x)
    return to_dataloader(x, y, batch_size=batch_size, infinite=infinite)


n_qubits = 2
fm = feature_map(n_qubits)
ansatz = hea(n_qubits=n_qubits, depth=2)
observable = hamiltonian_factory(n_qubits, detuning=Z)
circuit = QuantumCircuit(n_qubits, fm, ansatz)

model = QNN(circuit, observable, backend="pyqtorch", diff_mode="ad")
batch_size = 100
input_values = {"phi": torch.rand(batch_size, requires_grad=True)}  # not used below; kept for context

cnt = count()  # counts loss-function evaluations across both runs
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)


def loss_fn(model: torch.nn.Module, data: torch.Tensor) -> tuple[torch.Tensor, dict]:
    next(cnt)
    x, y = data[0], data[1]
    out = model(x)
    loss = criterion(out, y)
    return loss, {}


config = TrainConfig(max_iter=10, print_every=2, folder="dev", create_subfolder_per_run=True)

# train a first time
train_with_grad(model, dataloader(), optimizer, config, loss_fn)

# train a second time
train_with_grad(model, dataloader(), optimizer, config, loss_fn)
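Until the fix in #593, one possible workaround is to generate a unique folder name per run yourself and pass it to TrainConfig's folder argument (shown in the repro above). The helper below is our own, not part of qadence:

```python
# Hypothetical workaround: a unique `folder` per run means two runs with
# the same settings can never collide. `unique_run_folder` is NOT a
# qadence API -- it is a local helper for illustration.
import uuid
from pathlib import Path


def unique_run_folder(base: str = "dev") -> str:
    """Return a fresh folder path such as 'dev/run_1a2b3c4d'."""
    return str(Path(base) / f"run_{uuid.uuid4().hex[:8]}")


# Usage with the repro above (uncomment in context):
# config = TrainConfig(
#     max_iter=10,
#     print_every=2,
#     folder=unique_run_folder(),
#     create_subfolder_per_run=True,
# )
```

This sidesteps the bug entirely, at the cost of the run folders no longer being grouped under a single fixed path.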

Tracebacks (optional)

No response

Environment details (optional)

No response

Would you like to work on this issue?

None
