This rule raises an issue when a `torch.autograd.Variable` is instantiated.

== Why is this an issue?

The PyTorch Variable API has been deprecated. The behavior of Variables is now provided by PyTorch tensors and can be controlled with the `requires_grad` parameter. The Variable API now returns tensors anyway, so there should not be any breaking changes.

== How to fix it

Replace the call to `torch.autograd.Variable` with a call to `torch.tensor` and set the `requires_grad` parameter to `True` if needed.

=== Code examples

==== Noncompliant code example

[source,python,diff-id=1,diff-type=noncompliant]
----
import torch

x = torch.autograd.Variable(torch.tensor([1.0]), requires_grad=True)  # Noncompliant
x2 = torch.autograd.Variable(torch.tensor([1.0]))  # Noncompliant
----

==== Compliant solution

[source,python,diff-id=1,diff-type=compliant]
----
import torch

x = torch.tensor([1.0], requires_grad=True)
x2 = torch.tensor([1.0])
----

== Resources

=== Documentation

* PyTorch documentation - https://pytorch.org/docs/stable/autograd.html#variable-deprecated[Variable API]

ifdef::env-github,rspecator-view[]

'''
== Implementation Specification
(visible only on this page)

This should be pretty straightforward to implement.

=== Message

Primary: Replace this call with a call to "torch.tensor".

=== Issue location

Primary: Name of the function call

=== Quickfix

It might be tricky to know how to call the `torch.tensor` function:

* If there is an import like `from torch import tensor`, replace the call with `tensor(...)`.
* If not, replace it with `torch.tensor(...)`.

Both resulting forms are sketched below.

endif::env-github,rspecator-view[]
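ifdef::env-github,rspecator-view[]

A minimal sketch of the two replacement forms the quickfix would produce, assuming it reuses whatever `torch` import is already present in the file:

[source,python]
----
# Case 1: `tensor` is imported directly, so the quickfix can use the bare name.
from torch import tensor

x = tensor([1.0], requires_grad=True)

# Case 2: only `torch` is imported, so the qualified name is needed.
import torch

x2 = torch.tensor([1.0], requires_grad=True)
----

endif::env-github,rspecator-view[]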