What problem or use case are you trying to solve?
I ran the resolver both with the label and with the macro. When I started using the macro the other day, I wasn't actually sure anymore whether calling the agent would give the LLM all the comments on the PR. It doesn't. 😅
Sure enough, the documentation says:
The agent will only consider the comment where it's mentioned
I can see why we'd want it focused on one part of the code when the comment that triggers the macro is an inline comment on the code. But when it's a top-level comment... I'm not so sure.
I reflexively tried to talk to the agent the other day in top-level comments, and multiple times I referred to previous comments to guide it towards completing the whole task. That makes more sense to me than repeating all the info from previous comments.
Describe the UX of the solution you'd like
I would suggest it might be a bit more consistent if we either:
- when a top-level comment triggers the agent, give the LLM all comments as context. We could add a little prompting to keep it focused on the triggering comment, and/or specify that the rest do not need to be addressed and are there only for information.
- (this one is more extreme 😅) always give the LLM all comments, just in some different format that makes a clear distinction between the comment with the task at hand and the rest.
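To make the second option concrete, here is a minimal sketch of what "all comments, with the triggering one clearly marked" could look like when building the prompt. Everything here (`Comment`, `build_context`, the tag strings) is hypothetical, not the resolver's actual API:

```python
from dataclasses import dataclass


@dataclass
class Comment:
    """A PR comment, reduced to the fields the sketch needs (illustrative)."""
    author: str
    body: str


def build_context(comments: list[Comment], trigger_index: int) -> str:
    """Render every PR comment, flagging the one that triggered the agent.

    Comments other than the trigger are labeled as context only, so the
    LLM sees the full discussion but knows which comment is the task.
    """
    parts = []
    for i, c in enumerate(comments):
        tag = "TASK (address this comment)" if i == trigger_index else "CONTEXT ONLY"
        parts.append(f"[{tag}] {c.author}: {c.body}")
    return "\n\n".join(parts)
```

The point of the format is just that the distinction is explicit in the prompt itself, rather than relying on the agent to infer which comment matters.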
Describe alternatives you've considered
Not using the macro and just using the label, unless it's really only one tiny detail the agent needs to look at.
I do want the agent to have context. It is also true that when I "call for its attention" in one comment, I'd expect it to focus on that comment, not the rest. Eh, as much as possible. (ok, this may be the tricky part).
Additional context
Actually, I started writing this issue to propose that the agent should always have access to all activity in the PR, and I changed my mind while writing it. Part of this could just be me: when running the agent locally I don't usually think in terms of "completing the whole task this round", but in terms of successive tasks, all the time. Bit by bit, small or big, and we just continue. Maybe I just need to change my habits... 🤔