Mirror of https://github.com/qodo-ai/pr-agent.git (synced 2025-07-04 21:00:40 +08:00)
Compare commits: v0.29...hl/fix_azu (156 commits)
Commit SHA1s (author and date not recorded in this snapshot):

fe62b8f7c7, 2e75fa31bd, 8143f4b35b, a17100e512, 821227542a, e9ce3ae869, b4cef661e6, 2b614330ec, b802b162d1, fd1a27c2ac, 95e4604abe, d5f77560e3, 6f27fc9271, ee516ed764, 9f9548395f, daf6c25f9a, 495ac565b0, 82c88a1cf7, 3d5509b986, 86102abf8e, df6b00aa36, 4baf52292d, e8ace9fcf9, 3ec66e6aec, 80b535f41a, 805734376e, a128db8393, 9cf62e8220, 73cf69889a, b18a509120, 6063bf5978, 5d105c64d2, f06ee951d7, b264f42e3d, a975b32376, 5e9c56b96c, f78762cf2e, 4a019ba7c4, 16d980ec76, 68c0fd7e3a, 2eeb9b0411, f3cb4e8384, 946657a6d1, d2194c7ed9, d5dead5c7f, 6aac41a0df, 2453508023, 84f2f4fe3d, aa3e5b79c8, d9f64e52e4, ff52ae9281, d791e9f3d1, 2afc3d3437, 511f1ba6ae, 415817b421, 18a8a741fa, 113229b218, 4cdaad1fc5, e57d3101e4, f58c40a6ae, c346d784e3, 32460fac57, d8aa61622f, 2b2818a435, cdca5a55b2, 9f9397b2d8, 3a385b62d6, 94e1126b00, 5a0affd6cb, d62cbb2fc4, f5bb508736, 4047e71268, 16b9ccd025, 43dbe24a7f, f4a9bc3de7, ad4721f55b, 20b1a1f552, 4c98cffd37, 453f8e19f3, 95c94b80a2, e2586cb64a, 1bc0d488d5, 1f836e405d, c4358d1ca0, c10be827a1, 10703a9098, 162cc9d833, 0f893bc492, 000f0ba93e, 48c29c9ffa, f6a9d3c2cc, 930cd69909, 684a438167, f10c389406, 20e69c3530, 069f36fc1f, 1c6958069a, e79c34e039, e045617243, 70428ebb21, 466ec4ce90, facfb5f46b, cc686ef26d, ead7491ca9, df0355d827, c3ea048b71, 648829b770, 4e80f3999c, 3bced45248, dd17aadfe3, 199b463eaa, 7821e71b17, b686a707a4, bd68a0de55, 6405284461, 9069c37a05, 2d619564f2, 1b74942919, 97f2b6f736, eecf115b91, f198e6fa09, e72bb28c4e, 81fa22e4df, 8aa89ff8e6, 6d9bb93f62, 25b807f71c, 03fa5b7d92, 4679dce3af, 94aa8e8638, f5a069d6b4, 2a42d009af, 9464fd9696, 95df26c973, a315779713, c97b49c373, 5a8ce252f7, 5e40b3962a, 3f4fac1232, e692dee66a, 31620a82c0, 2dbcb3e5dc, 74b4488c7e, 65d9269bf2, 411245155f, 14fb98aa77, b4ae07bf82, d67d07acc7, 12b1fe23da, f857ea1f22, 8b1abbcc2c, a692a70027, fab8573c4d, 2d7636543c, cf2b95b766, 9ef0c451bf, 05ab5f699f
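The `v0.29...hl/fix_azu` range shown above is Git's triple-dot (symmetric-difference) notation, which is what forge compare views list: commits reachable from either endpoint but not both. A minimal, self-contained sketch of that semantics (the repository contents below are invented for illustration, not taken from pr-agent):

```shell
# Demonstrate the triple-dot "symmetric difference" range behind a
# compare view such as "v0.29...hl/fix_azu".
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base"
git tag v0.29                      # left side of the compare
git checkout -q -b hl/fix_azu      # right side of the compare
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "fix: azure devops parsing"
# Commits reachable from exactly one side -- what "Compare commits" lists:
git log --oneline v0.29...hl/fix_azu
```

With two dots instead of three (`v0.29..hl/fix_azu`) Git would instead list commits reachable from the right side but not the left, which happens to coincide here because `v0.29` has no commits of its own beyond the common base.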
797
LICENSE
@@ -1,202 +1,661 @@

Removed: the full text of the Apache License, Version 2.0 (January 2004, http://www.apache.org/licenses/), comprising its sections 1–9 (Definitions; Grant of Copyright License; Grant of Patent License; Redistribution; Submission of Contributions; Trademarks; Disclaimer of Warranty; Limitation of Liability; Accepting Warranty or Additional Liability) and the appendix, with the project's notice:

    Copyright [2023] [Codium ltd]

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.

Added: the full text of the GNU Affero General Public License, Version 3, 19 November 2007 (Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>; everyone is permitted to copy and distribute verbatim copies of the license document, but changing it is not allowed), beginning with the Preamble and continuing through sections 0. Definitions; 1. Source Code; 2. Basic Permissions; 3. Protecting Users' Legal Rights From Anti-Circumvention Law; 4. Conveying Verbatim Copies; 5. Conveying Modified Source Versions; 6. Conveying Non-Source Forms; and 7. Additional Terms. The diff captured in this snapshot cuts off partway through section 7.
|
||||
additional permissions on material, added by you to a covered work,
|
||||
for which you have or can give appropriate copyright permission.
|
||||
|
||||
Notwithstanding any other provision of this License, for material you
|
||||
add to a covered work, you may (if authorized by the copyright holders of
|
||||
that material) supplement the terms of this License with terms:
|
||||
|
||||
a) Disclaiming warranty or limiting liability differently from the
|
||||
terms of sections 15 and 16 of this License; or
|
||||
|
||||
b) Requiring preservation of specified reasonable legal notices or
|
||||
author attributions in that material or in the Appropriate Legal
|
||||
Notices displayed by works containing it; or
|
||||
|
||||
c) Prohibiting misrepresentation of the origin of that material, or
|
||||
requiring that modified versions of such material be marked in
|
||||
reasonable ways as different from the original version; or
|
||||
|
||||
d) Limiting the use for publicity purposes of names of licensors or
|
||||
authors of the material; or
|
||||
|
||||
e) Declining to grant rights under trademark law for use of some
|
||||
trade names, trademarks, or service marks; or
|
||||
|
||||
f) Requiring indemnification of licensors and authors of that
|
||||
material by anyone who conveys the material (or modified versions of
|
||||
it) with contractual assumptions of liability to the recipient, for
|
||||
any liability that these contractual assumptions directly impose on
|
||||
those licensors and authors.
|
||||
|
||||
All other non-permissive additional terms are considered "further
|
||||
restrictions" within the meaning of section 10. If the Program as you
|
||||
received it, or any part of it, contains a notice stating that it is
|
||||
governed by this License along with a term that is a further
|
||||
restriction, you may remove that term. If a license document contains
|
||||
a further restriction but permits relicensing or conveying under this
|
||||
License, you may add to a covered work material governed by the terms
|
||||
of that license document, provided that the further restriction does
|
||||
not survive such relicensing or conveying.
|
||||
|
||||
If you add terms to a covered work in accord with this section, you
|
||||
must place, in the relevant source files, a statement of the
|
||||
additional terms that apply to those files, or a notice indicating
|
||||
where to find the applicable terms.
|
||||
|
||||
Additional terms, permissive or non-permissive, may be stated in the
|
||||
form of a separately written license, or stated as exceptions;
|
||||
the above requirements apply either way.
|
||||
|
||||
8. Termination.
|
||||
|
||||
You may not propagate or modify a covered work except as expressly
|
||||
provided under this License. Any attempt otherwise to propagate or
|
||||
modify it is void, and will automatically terminate your rights under
|
||||
this License (including any patent licenses granted under the third
|
||||
paragraph of section 11).
|
||||
|
||||
However, if you cease all violation of this License, then your
|
||||
license from a particular copyright holder is reinstated (a)
|
||||
provisionally, unless and until the copyright holder explicitly and
|
||||
finally terminates your license, and (b) permanently, if the copyright
|
||||
holder fails to notify you of the violation by some reasonable means
|
||||
prior to 60 days after the cessation.
|
||||
|
||||
Moreover, your license from a particular copyright holder is
|
||||
reinstated permanently if the copyright holder notifies you of the
|
||||
violation by some reasonable means, this is the first time you have
|
||||
received notice of violation of this License (for any work) from that
|
||||
copyright holder, and you cure the violation prior to 30 days after
|
||||
your receipt of the notice.
|
||||
|
||||
Termination of your rights under this section does not terminate the
|
||||
licenses of parties who have received copies or rights from you under
|
||||
this License. If your rights have been terminated and not permanently
|
||||
reinstated, you do not qualify to receive new licenses for the same
|
||||
material under section 10.
|
||||
|
||||
9. Acceptance Not Required for Having Copies.
|
||||
|
||||
You are not required to accept this License in order to receive or
|
||||
run a copy of the Program. Ancillary propagation of a covered work
|
||||
occurring solely as a consequence of using peer-to-peer transmission
|
||||
to receive a copy likewise does not require acceptance. However,
|
||||
nothing other than this License grants you permission to propagate or
|
||||
modify any covered work. These actions infringe copyright if you do
|
||||
not accept this License. Therefore, by modifying or propagating a
|
||||
covered work, you indicate your acceptance of this License to do so.
|
||||
|
||||
10. Automatic Licensing of Downstream Recipients.
|
||||
|
||||
Each time you convey a covered work, the recipient automatically
|
||||
receives a license from the original licensors, to run, modify and
|
||||
propagate that work, subject to this License. You are not responsible
|
||||
for enforcing compliance by third parties with this License.
|
||||
|
||||
An "entity transaction" is a transaction transferring control of an
|
||||
organization, or substantially all assets of one, or subdividing an
|
||||
organization, or merging organizations. If propagation of a covered
|
||||
work results from an entity transaction, each party to that
|
||||
transaction who receives a copy of the work also receives whatever
|
||||
licenses to the work the party's predecessor in interest had or could
|
||||
give under the previous paragraph, plus a right to possession of the
|
||||
Corresponding Source of the work from the predecessor in interest, if
|
||||
the predecessor has it or can get it with reasonable efforts.
|
||||
|
||||
You may not impose any further restrictions on the exercise of the
|
||||
rights granted or affirmed under this License. For example, you may
|
||||
not impose a license fee, royalty, or other charge for exercise of
|
||||
rights granted under this License, and you may not initiate litigation
|
||||
(including a cross-claim or counterclaim in a lawsuit) alleging that
|
||||
any patent claim is infringed by making, using, selling, offering for
|
||||
sale, or importing the Program or any portion of it.
|
||||
|
||||
11. Patents.
|
||||
|
||||
A "contributor" is a copyright holder who authorizes use under this
|
||||
License of the Program or a work on which the Program is based. The
|
||||
work thus licensed is called the contributor's "contributor version".
|
||||
|
||||
A contributor's "essential patent claims" are all patent claims
|
||||
owned or controlled by the contributor, whether already acquired or
|
||||
hereafter acquired, that would be infringed by some manner, permitted
|
||||
by this License, of making, using, or selling its contributor version,
|
||||
but do not include claims that would be infringed only as a
|
||||
consequence of further modification of the contributor version. For
|
||||
purposes of this definition, "control" includes the right to grant
|
||||
patent sublicenses in a manner consistent with the requirements of
|
||||
this License.
|
||||
|
||||
Each contributor grants you a non-exclusive, worldwide, royalty-free
|
||||
patent license under the contributor's essential patent claims, to
|
||||
make, use, sell, offer for sale, import and otherwise run, modify and
|
||||
propagate the contents of its contributor version.
|
||||
|
||||
In the following three paragraphs, a "patent license" is any express
|
||||
agreement or commitment, however denominated, not to enforce a patent
|
||||
(such as an express permission to practice a patent or covenant not to
|
||||
sue for patent infringement). To "grant" such a patent license to a
|
||||
party means to make such an agreement or commitment not to enforce a
|
||||
patent against the party.
|
||||
|
||||
If you convey a covered work, knowingly relying on a patent license,
|
||||
and the Corresponding Source of the work is not available for anyone
|
||||
to copy, free of charge and under the terms of this License, through a
|
||||
publicly available network server or other readily accessible means,
|
||||
then you must either (1) cause the Corresponding Source to be so
|
||||
available, or (2) arrange to deprive yourself of the benefit of the
|
||||
patent license for this particular work, or (3) arrange, in a manner
|
||||
consistent with the requirements of this License, to extend the patent
|
||||
license to downstream recipients. "Knowingly relying" means you have
|
||||
actual knowledge that, but for the patent license, your conveying the
|
||||
covered work in a country, or your recipient's use of the covered work
|
||||
in a country, would infringe one or more identifiable patents in that
|
||||
country that you have reason to believe are valid.
|
||||
|
||||
If, pursuant to or in connection with a single transaction or
|
||||
arrangement, you convey, or propagate by procuring conveyance of, a
|
||||
covered work, and grant a patent license to some of the parties
|
||||
receiving the covered work authorizing them to use, propagate, modify
|
||||
or convey a specific copy of the covered work, then the patent license
|
||||
you grant is automatically extended to all recipients of the covered
|
||||
work and works based on it.
|
||||
|
||||
A patent license is "discriminatory" if it does not include within
|
||||
the scope of its coverage, prohibits the exercise of, or is
|
||||
conditioned on the non-exercise of one or more of the rights that are
|
||||
specifically granted under this License. You may not convey a covered
|
||||
work if you are a party to an arrangement with a third party that is
|
||||
in the business of distributing software, under which you make payment
|
||||
to the third party based on the extent of your activity of conveying
|
||||
the work, and under which the third party grants, to any of the
|
||||
parties who would receive the covered work from you, a discriminatory
|
||||
patent license (a) in connection with copies of the covered work
|
||||
conveyed by you (or copies made from those copies), or (b) primarily
|
||||
for and in connection with specific products or compilations that
|
||||
contain the covered work, unless you entered into that arrangement,
|
||||
or that patent license was granted, prior to 28 March 2007.
|
||||
|
||||
Nothing in this License shall be construed as excluding or limiting
|
||||
any implied license or other defenses to infringement that may
|
||||
otherwise be available to you under applicable patent law.
|
||||
|
||||
12. No Surrender of Others' Freedom.
|
||||
|
||||
If conditions are imposed on you (whether by court order, agreement or
|
||||
otherwise) that contradict the conditions of this License, they do not
|
||||
excuse you from the conditions of this License. If you cannot convey a
|
||||
covered work so as to satisfy simultaneously your obligations under this
|
||||
License and any other pertinent obligations, then as a consequence you may
|
||||
not convey it at all. For example, if you agree to terms that obligate you
|
||||
to collect a royalty for further conveying from those to whom you convey
|
||||
the Program, the only way you could satisfy both those terms and this
|
||||
License would be to refrain entirely from conveying the Program.
|
||||
|
||||
13. Remote Network Interaction; Use with the GNU General Public License.
|
||||
|
||||
Notwithstanding any other provision of this License, if you modify the
|
||||
Program, your modified version must prominently offer all users
|
||||
interacting with it remotely through a computer network (if your version
|
||||
supports such interaction) an opportunity to receive the Corresponding
|
||||
Source of your version by providing access to the Corresponding Source
|
||||
from a network server at no charge, through some standard or customary
|
||||
means of facilitating copying of software. This Corresponding Source
|
||||
shall include the Corresponding Source for any work covered by version 3
|
||||
of the GNU General Public License that is incorporated pursuant to the
|
||||
following paragraph.
|
||||
|
||||
Notwithstanding any other provision of this License, you have
|
||||
permission to link or combine any covered work with a work licensed
|
||||
under version 3 of the GNU General Public License into a single
|
||||
combined work, and to convey the resulting work. The terms of this
|
||||
License will continue to apply to the part which is the covered work,
|
||||
but the work with which it is combined will remain governed by version
|
||||
3 of the GNU General Public License.
|
||||
|
||||
14. Revised Versions of this License.
|
||||
|
||||
The Free Software Foundation may publish revised and/or new versions of
|
||||
the GNU Affero General Public License from time to time. Such new versions
|
||||
will be similar in spirit to the present version, but may differ in detail to
|
||||
address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the
|
||||
Program specifies that a certain numbered version of the GNU Affero General
|
||||
Public License "or any later version" applies to it, you have the
|
||||
option of following the terms and conditions either of that numbered
|
||||
version or of any later version published by the Free Software
|
||||
Foundation. If the Program does not specify a version number of the
|
||||
GNU Affero General Public License, you may choose any version ever published
|
||||
by the Free Software Foundation.
|
||||
|
||||
If the Program specifies that a proxy can decide which future
|
||||
versions of the GNU Affero General Public License can be used, that proxy's
|
||||
public statement of acceptance of a version permanently authorizes you
|
||||
to choose that version for the Program.
|
||||
|
||||
Later license versions may give you additional or different
|
||||
permissions. However, no additional obligations are imposed on any
|
||||
author or copyright holder as a result of your choosing to follow a
|
||||
later version.
|
||||
|
||||
15. Disclaimer of Warranty.
|
||||
|
||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||
|
||||
16. Limitation of Liability.
|
||||
|
||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||
SUCH DAMAGES.
|
||||
|
||||
17. Interpretation of Sections 15 and 16.
|
||||
|
||||
If the disclaimer of warranty and limitation of liability provided
|
||||
above cannot be given local legal effect according to their terms,
|
||||
reviewing courts shall apply local law that most closely approximates
|
||||
an absolute waiver of all civil liability in connection with the
|
||||
Program, unless a warranty or assumption of liability accompanies a
|
||||
copy of the Program in return for a fee.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest
|
||||
possible use to the public, the best way to achieve this is to make it
|
||||
free software which everyone can redistribute and change under these terms.
|
||||
|
||||
To do so, attach the following notices to the program. It is safest
|
||||
to attach them to the start of each source file to most effectively
|
||||
state the exclusion of warranty; and each file should have at least
|
||||
the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
<one line to give the program's name and a brief idea of what it does.>
|
||||
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU Affero General Public License as published
|
||||
by the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU Affero General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU Affero General Public License
|
||||
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If your software can interact with users remotely through a computer
|
||||
network, you should also make sure that it provides a way for users to
|
||||
get its source. For example, if your program is a web application, its
|
||||
interface could display a "Source" link that leads users to an archive
|
||||
of the code. There are many ways you could offer source, and different
|
||||
solutions will be better for different programs; see section 13 for the
|
||||
specific requirements.
|
||||
|
||||
You should also get your employer (if you work as a programmer) or school,
|
||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||
For more information on this, and how to apply and follow the GNU AGPL, see
|
||||
<https://www.gnu.org/licenses/>.
|
||||
|
115
README.md
@@ -27,17 +27,6 @@ PR-Agent aims to help efficiently review and handle pull requests, by providing
</a>
</div>

[//]: # (### [Documentation](https://qodo-merge-docs.qodo.ai/))

[//]: # ()
[//]: # (- See the [Installation Guide](https://qodo-merge-docs.qodo.ai/installation/) for instructions on installing PR-Agent on different platforms.)

[//]: # ()
[//]: # (- See the [Usage Guide](https://qodo-merge-docs.qodo.ai/usage-guide/) for instructions on running PR-Agent tools via different interfaces, such as CLI, PR Comments, or by automatically triggering them when a new PR is opened.)

[//]: # ()
[//]: # (- See the [Tools Guide](https://qodo-merge-docs.qodo.ai/tools/) for a detailed description of the different tools, and the available configurations for each tool.)

## Table of Contents

- [News and Updates](#news-and-updates)
@@ -53,6 +42,12 @@ PR-Agent aims to help efficiently review and handle pull requests, by providing

## News and Updates

## May 17, 2025

- v0.29 was [released](https://github.com/qodo-ai/pr-agent/releases)
- `Qodo Merge Pull Request Benchmark` was [released](https://qodo-merge-docs.qodo.ai/pr_benchmark/). This benchmark evaluates and compares the performance of LLMs in analyzing pull request code.
- `Recent Updates and Future Roadmap` page was added to the [Qodo Merge Docs](https://qodo-merge-docs.qodo.ai/recent_updates/)

## Apr 30, 2025

A new feature is now available in the `/improve` tool for Qodo Merge 💎 - Chat on code suggestions.
@@ -69,69 +64,53 @@ New tool for Qodo Merge 💎 - `/scan_repo_discussions`.

Read more about it [here](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/).

## Apr 14, 2025

GPT-4.1 is out. And it's quite good on coding tasks...

https://openai.com/index/gpt-4-1/

<img width="512" alt="image" src="https://github.com/user-attachments/assets/a8f4c648-a058-4bdc-9825-2a4bb71a23e5" />

## March 28, 2025

A new version, v0.28, was released. See release notes [here](https://github.com/qodo-ai/pr-agent/releases/tag/v0.28).

This version includes a new tool, [Help Docs](https://qodo-merge-docs.qodo.ai/tools/help_docs/), which can answer free-text questions based on a documentation folder.

`/help_docs` is now being used to provide immediate automatic feedback to any user who [opens an issue](https://github.com/qodo-ai/pr-agent/issues/1608#issue-2897328825) on PR-Agent's open-source project

## Overview

<div style="text-align:left;">

Supported commands per platform:

| | | GitHub | GitLab | Bitbucket | Azure DevOps |
| ----- |---------------------------------------------------------------------------------------------------------|:------:|:------:|:---------:|:------------:|
| TOOLS | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ |
| | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | ✅ |
| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ |
| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Ask on code lines](https://qodo-merge-docs.qodo.ai/tools/ask/#ask-lines) | ✅ | ✅ | | |
| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ |
| | [Help Docs](https://qodo-merge-docs.qodo.ai/tools/help_docs/?h=auto#auto-approval) | ✅ | ✅ | ✅ | |
| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | ✅ | |
| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | ✅ | |
| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | |
| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | |
| | [CI Feedback](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/) 💎 | ✅ | | | |
| | [PR Documentation](https://qodo-merge-docs.qodo.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
| | [Custom Labels](https://qodo-merge-docs.qodo.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | |
| | [Analyze](https://qodo-merge-docs.qodo.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
| | [Similar Code](https://qodo-merge-docs.qodo.ai/tools/similar_code/) 💎 | ✅ | | | |
| | [Custom Prompt](https://qodo-merge-docs.qodo.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | |
| | [Test](https://qodo-merge-docs.qodo.ai/tools/test/) 💎 | ✅ | ✅ | | |
| | [Implement](https://qodo-merge-docs.qodo.ai/tools/implement/) 💎 | ✅ | ✅ | ✅ | |
| | [Scan Repo Discussions](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/) 💎 | ✅ | | | |
| | [Auto-Approve](https://qodo-merge-docs.qodo.ai/tools/improve/?h=auto#auto-approval) 💎 | ✅ | ✅ | ✅ | |
| | | | | | |
| USAGE | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ |
| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ |
| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | |
| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ | ✅ | ✅ | ✅ |
| | | | | | |
| CORE | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ |
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ |
| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ |
| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ |
| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ |
| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ |
| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | | |
| | [Global and wiki configurations](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | |
| | [PR interactive actions](https://www.qodo.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | |
| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | |
| | [Code Validation 💎](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/) | ✅ | ✅ | ✅ | ✅ |
| | [Auto Best Practices 💎](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/) | ✅ | | | |

| | | GitHub | GitLab | Bitbucket | Azure DevOps | Gitea |
| ----- |---------------------------------------------------------------------------------------------------------|:------:|:------:|:---------:|:------------:|:-----:|
| TOOLS | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ | |
| | ⮑ [Ask on code lines](https://qodo-merge-docs.qodo.ai/tools/ask/#ask-lines) | ✅ | ✅ | | | |
| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ | |
| | [Help Docs](https://qodo-merge-docs.qodo.ai/tools/help_docs/?h=auto#auto-approval) | ✅ | ✅ | ✅ | | |
| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | ✅ | | |
| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | ✅ | | |
| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | | |
| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | | |
| | [CI Feedback](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/) 💎 | ✅ | | | | |
| | [PR Documentation](https://qodo-merge-docs.qodo.ai/tools/documentation/) 💎 | ✅ | ✅ | | | |
| | [Custom Labels](https://qodo-merge-docs.qodo.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | | |
| | [Analyze](https://qodo-merge-docs.qodo.ai/tools/analyze/) 💎 | ✅ | ✅ | | | |
| | [Similar Code](https://qodo-merge-docs.qodo.ai/tools/similar_code/) 💎 | ✅ | | | | |
| | [Custom Prompt](https://qodo-merge-docs.qodo.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | | |
| | [Test](https://qodo-merge-docs.qodo.ai/tools/test/) 💎 | ✅ | ✅ | | | |
| | [Implement](https://qodo-merge-docs.qodo.ai/tools/implement/) 💎 | ✅ | ✅ | ✅ | | |
| | [Scan Repo Discussions](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/) 💎 | ✅ | | | | |
| | [Auto-Approve](https://qodo-merge-docs.qodo.ai/tools/improve/?h=auto#auto-approval) 💎 | ✅ | ✅ | ✅ | | |
| | | | | | | |
| USAGE | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ | ✅ |
| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ | ✅ |
| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | | |
| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ | ✅ | ✅ | ✅ | |
| | | | | | | |
| CORE | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ | |
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ | |
| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ | |
| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ | |
| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ | |
| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ | |
| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | | | |
| | [Global and wiki configurations](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | | |
| | [PR interactive actions](https://www.qodo.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | | |
| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | | |
| | [Code Validation 💎](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/) | ✅ | ✅ | ✅ | ✅ | |
| | [Auto Best Practices 💎](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/) | ✅ | | | | |

- 💎 means this feature is available only in [Qodo Merge](https://www.qodo.ai/pricing/)

[//]: # (- Support for additional git providers is described in [here](./docs/Full_environments.md))
@ -33,6 +33,11 @@ FROM base AS azure_devops_webhook

ADD pr_agent pr_agent
CMD ["python", "pr_agent/servers/azuredevops_server_webhook.py"]

FROM base AS gitea_app
ADD pr_agent pr_agent
CMD ["python", "-m", "gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-c", "pr_agent/servers/gunicorn_config.py","pr_agent.servers.gitea_app:app"]

FROM base AS test
ADD requirements-dev.txt .
RUN pip install --no-cache-dir -r requirements-dev.txt && rm requirements-dev.txt
55
docs/docs/core-abilities/chat_on_code_suggestions.md
Normal file
@ -0,0 +1,55 @@
# Chat on code suggestions 💎

`Supported Git Platforms: GitHub, GitLab`

## Overview

Qodo Merge implements an orchestrator agent that enables interactive code discussions, listening and responding to comments without requiring explicit tool calls.
The orchestrator intelligently analyzes your responses to determine if you want to implement a suggestion, ask a question, or request help, then delegates to the appropriate specialized tool.

To minimize unnecessary notifications and maintain focused discussions, the orchestrator agent will only respond to comments made directly within the inline code suggestion discussions it has created (`/improve`) or within discussions initiated by the `/implement` command.

## Getting Started

### Setup

Enable interactive code discussions by adding the following to your configuration file (default is `True`):

```toml
[pr_code_suggestions]
enable_chat_in_code_suggestions = true
```

### Activation

#### `/improve`

To obtain dynamic responses, the following steps are required:

1. Run the `/improve` command (mostly automatic)
2. Check the `/improve` recommendation checkboxes (_Apply this suggestion_) to have Qodo Merge generate a new inline code suggestion discussion
3. The orchestrator agent will then automatically listen to and reply to comments within the discussion, without requiring additional commands

#### `/implement`

To obtain dynamic responses, the following steps are required:

1. Select code lines in the PR diff and run the `/implement` command
2. Wait for Qodo Merge to generate a new inline code suggestion
3. The orchestrator agent will then automatically listen to and reply to comments within the discussion, without requiring additional commands

## Explore the available interaction patterns

!!! tip "Tip: Direct the agent with keywords"
    Use "implement" or "apply" for code generation. Use "explain", "why", or "how" for information and help.

=== "Asking for Details"
    {width=512}

=== "Implementing Suggestions"
    {width=512}

=== "Providing Additional Help"
    {width=512}
@ -9,8 +9,9 @@ This integration enriches the review process by automatically surfacing relevant

**Ticket systems supported**:

- [GitHub](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#github-issues-integration)
- [Jira (💎)](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#jira-integration)
- [Linear (💎)](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#linear-integration)

**Ticket data fetched:**
@ -75,13 +76,17 @@ The recommended way to authenticate with Jira Cloud is to install the Qodo Merge

Installation steps:

1. Go to the [Qodo Merge integrations page](https://app.qodo.ai/qodo-merge/integrations)

2. Click on the Connect **Jira Cloud** button to connect the Jira Cloud app

3. Click the `accept` button.<br>
   {width=384}

4. After installing the app, you will be redirected to the Qodo Merge registration page, where you will see a success message.<br>
   {width=384}

5. Now Qodo Merge will be able to fetch Jira ticket context for your PRs.

**2) Email/Token Authentication**
@ -300,3 +305,45 @@ Name your branch with the ticket ID as a prefix (e.g., `ISSUE-123-feature-description`

[jira]
jira_base_url = "https://<JIRA_ORG>.atlassian.net"
```

## Linear Integration 💎

### Linear App Authentication

The recommended way to authenticate with Linear is to connect the Linear app through the Qodo Merge portal.

Installation steps:

1. Go to the [Qodo Merge integrations page](https://app.qodo.ai/qodo-merge/integrations)

2. Navigate to the **Integrations** tab

3. Click on the **Linear** button to connect the Linear app

4. Follow the authentication flow to authorize Qodo Merge to access your Linear workspace

5. Once connected, Qodo Merge will be able to fetch Linear ticket context for your PRs

### How to link a PR to a Linear ticket

Qodo Merge will automatically detect Linear tickets using either of these methods:

**Method 1: Description Reference:**

Include a ticket reference in your PR description using either:

- The complete Linear ticket URL: `https://linear.app/[ORG_ID]/issue/[TICKET_ID]`
- The shortened ticket ID: `[TICKET_ID]` (e.g., `ABC-123`) - requires `linear_base_url` configuration (see below).

**Method 2: Branch Name Detection:**

Name your branch with the ticket ID as a prefix (e.g., `ABC-123-feature-description` or `feature/ABC-123/feature-description`).

!!! note "Linear Base URL"
    For shortened ticket IDs or branch detection (method 2), you must configure the Linear base URL in your configuration file under the `[linear]` section:

    ```toml
    [linear]
    linear_base_url = "https://linear.app/[ORG_ID]"
    ```

    Replace `[ORG_ID]` with your Linear organization identifier.
33
docs/docs/core-abilities/incremental_update.md
Normal file
@ -0,0 +1,33 @@
# Incremental Update 💎

`Supported Git Platforms: GitHub`

## Overview

The Incremental Update feature helps users focus on feedback for their newest changes, making large PRs more manageable.

### How it works

=== "Update Option on Subsequent Commits"
    {width=512}

=== "Generation of Incremental Update"
    {width=512}

___

Whenever new commits are pushed following a recent code suggestions report for this PR, an Update button appears (as seen above).

Once the user clicks on the button:

- The `improve` tool identifies the new changes (the "delta")
- Provides suggestions on these recent changes
- Combines these suggestions with the overall PR feedback, prioritizing delta-related comments
- Marks delta-related comments with a textual indication followed by an asterisk (*) with a link to this page, so they can easily be identified

### Benefits for Developers

- Focus on what matters: See feedback on newest code first
- Clearer organization: Comments on recent changes are clearly marked
- Better workflow: Address feedback more systematically, starting with recent changes
@ -3,11 +3,13 @@

Qodo Merge utilizes a variety of core abilities to provide a comprehensive and efficient code review experience. These abilities include:

- [Auto best practices](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/)
- [Chat on code suggestions](https://qodo-merge-docs.qodo.ai/core-abilities/chat_on_code_suggestions/)
- [Code validation](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/)
- [Compression strategy](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/)
- [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/)
- [Fetching ticket context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/)
- [Impact evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/)
- [Incremental Update](https://qodo-merge-docs.qodo.ai/core-abilities/incremental_update/)
- [Interactivity](https://qodo-merge-docs.qodo.ai/core-abilities/interactivity/)
- [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/)
- [RAG context enrichment](https://qodo-merge-docs.qodo.ai/core-abilities/rag_context_enrichment/)
@ -67,6 +67,7 @@ PR-Agent and Qodo Merge offers extensive pull request functionalities across var

| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | |
| | [Code Validation 💎](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/) | ✅ | ✅ | ✅ | ✅ |
| | [Auto Best Practices 💎](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/) | ✅ | | | |
| | [Incremental Update 💎](https://qodo-merge-docs.qodo.ai/core-abilities/incremental_update/) | ✅ | | | |

!!! note "💎 means Qodo Merge only"
    Throughout the documentation, 💎 marks a feature available only in [Qodo Merge](https://www.codium.ai/pricing/){:target="_blank"}, and not in the open-source version.
46
docs/docs/installation/gitea.md
Normal file
@ -0,0 +1,46 @@
## Run a Gitea webhook server

1. In Gitea, create a new user and give it the "Reporter" role ("Developer" if using the Pro version of the agent) for the intended group or project.

2. For the user from step 1, generate a `personal_access_token` with `api` access.

3. Generate a random secret for your app, and save it for later (`webhook_secret`). For example, you can use:

   ```bash
   WEBHOOK_SECRET=$(python -c "import secrets; print(secrets.token_hex(10))")
   ```

4. Clone this repository:

   ```bash
   git clone https://github.com/qodo-ai/pr-agent.git
   ```

5. Prepare variables and secrets. Skip this step if you plan on setting these as environment variables when running the agent:

   1. In the configuration file/variables:
      - Set `config.git_provider` to "gitea"

   2. In the secrets file/variables:
      - Set your AI model key in the respective section
      - In the [Gitea] section, set `personal_access_token` (with the token from step 2) and `webhook_secret` (with the secret from step 3)

6. Build a Docker image for the app and optionally push it to a Docker repository. We'll use Dockerhub as an example:

   ```bash
   docker build -f docker/Dockerfile -t codiumai/pr-agent:gitea_app --target gitea_app .
   docker push codiumai/pr-agent:gitea_app # Push to your Docker repository
   ```

7. Set the environment variables; the method depends on your docker runtime. Skip this step if you included your secrets/configuration directly in the Docker image.

   ```bash
   CONFIG__GIT_PROVIDER=gitea
   GITEA__PERSONAL_ACCESS_TOKEN=<personal_access_token>
   GITEA__WEBHOOK_SECRET=<webhook_secret>
   GITEA__URL=https://gitea.com # Or your self-hosted Gitea URL
   OPENAI__KEY=<your_openai_api_key>
   ```

8. Create a webhook in your Gitea project. Set the URL to `http[s]://<PR_AGENT_HOSTNAME>/api/v1/gitea_webhooks`, set the secret token to the secret generated in step 3, and enable the triggers `push`, `comments` and `merge request events`.

9. Test your installation by opening a merge request or commenting on a merge request using one of PR-Agent's commands.
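The `webhook_secret` from step 3 is what lets the server authenticate incoming deliveries. A minimal sketch of such a check, assuming Gitea's HMAC-SHA256 signature scheme delivered in the `X-Gitea-Signature` header as a hex digest (verify against your Gitea version's documentation; this is illustrative, not pr-agent's actual code):

```python
import hashlib
import hmac

def is_valid_signature(body: bytes, secret: str, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it
    to the signature header sent with the webhook delivery."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side-channels when comparing secrets
    return hmac.compare_digest(expected, signature_header)

# Simulated delivery with a hypothetical payload and secret
payload = b'{"action": "opened"}'
secret = "my_webhook_secret"
good = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
print(is_valid_signature(payload, secret, good))   # True
print(is_valid_signature(payload, secret, "bad"))  # False
```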
@ -193,9 +193,8 @@ For example: `GITHUB.WEBHOOK_SECRET` --> `GITHUB__WEBHOOK_SECRET`

3. Push image to ECR

   ```shell
   docker tag codiumai/pr-agent:serverless <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:serverless
   docker push <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:serverless
   ```

4. Create a lambda function that uses the uploaded image. Set the lambda timeout to be at least 3m.
@ -9,6 +9,7 @@ There are several ways to use self-hosted PR-Agent:

- [GitLab integration](./gitlab.md)
- [BitBucket integration](./bitbucket.md)
- [Azure DevOps integration](./azure.md)
- [Gitea integration](./gitea.md)

## Qodo Merge 💎
@ -1,7 +1,7 @@

To run PR-Agent locally, you first need to acquire two keys:

1. An OpenAI key from [here](https://platform.openai.com/api-keys){:target="_blank"}, with access to GPT-4 and o4-mini (or a key for other [language models](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/), if you prefer).
2. A personal access token from your Git platform (GitHub, GitLab, BitBucket, Gitea) with repo scope. A GitHub token, for example, can be issued from [here](https://github.com/settings/tokens){:target="_blank"}

## Using Docker image

@ -40,6 +40,19 @@ To invoke a tool (for example `review`), you can run PR-Agent directly from the

    docker run --rm -it -e CONFIG.GIT_PROVIDER=bitbucket -e OPENAI.KEY=$OPENAI_API_KEY -e BITBUCKET.BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN codiumai/pr-agent:latest --pr_url=<pr_url> review
    ```

- For Gitea:

    ```bash
    docker run --rm -it -e OPENAI.KEY=<your key> -e CONFIG.GIT_PROVIDER=gitea -e GITEA.PERSONAL_ACCESS_TOKEN=<your token> codiumai/pr-agent:latest --pr_url <pr_url> review
    ```

    If you have a dedicated Gitea instance, you need to specify the custom URL as a variable:

    ```bash
    -e GITEA.URL=<your gitea instance url>
    ```

For other git providers, update `CONFIG.GIT_PROVIDER` accordingly, and check the [`pr_agent/settings/.secrets_template.toml`](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/.secrets_template.toml) file for the expected environment variable names and values.

### Utilizing environment variables
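The environment variable names follow a simple convention mentioned earlier in these docs: a dotted configuration key maps to an upper-cased name with each dot replaced by a double underscore (e.g., `GITHUB.WEBHOOK_SECRET` maps to `GITHUB__WEBHOOK_SECRET`). A tiny illustrative helper:

```python
def to_env_var(dotted_key: str) -> str:
    """Map a dotted configuration key to its environment-variable name:
    upper-case it and replace "." with "__" (illustrative sketch only)."""
    return dotted_key.upper().replace(".", "__")

print(to_env_var("config.git_provider"))   # CONFIG__GIT_PROVIDER
print(to_env_var("GITEA.WEBHOOK_SECRET"))  # GITEA__WEBHOOK_SECRET
```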
@ -7,15 +7,16 @@ This page summarizes recent enhancements to Qodo Merge (last three months).

It also outlines our development roadmap for the upcoming three months. Please note that the roadmap is subject to change, and features may be adjusted, added, or reprioritized.

=== "Recent Updates"
    - **Qodo Merge Pull Request Benchmark** - evaluating the performance of LLMs in analyzing pull request code ([Learn more](https://qodo-merge-docs.qodo.ai/pr_benchmark/))
    - **Chat on Suggestions**: Users can now chat with Qodo Merge code suggestions ([Learn more](https://qodo-merge-docs.qodo.ai/tools/improve/#chat-on-code-suggestions))
    - **Scan Repo Discussions Tool**: A new tool that analyzes past code discussions to generate a `best_practices.md` file, distilling key insights and recommendations. ([Learn more](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/))
    - **Enhanced Models**: Qodo Merge now defaults to a combination of top models (Claude Sonnet 3.7 and Gemini 2.5 Pro) and incorporates dedicated code validation logic for improved results. ([Details 1](https://qodo-merge-docs.qodo.ai/usage-guide/qodo_merge_models/), [Details 2](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/))
    - **Chrome Extension Update**: The Qodo Merge Chrome extension now supports single-tenant users. ([Learn more](https://qodo-merge-docs.qodo.ai/chrome-extension/options/#configuration-options/))
    - **Help Docs Tool**: The help_docs tool can answer free-text questions based on any git documentation folder. ([Learn more](https://qodo-merge-docs.qodo.ai/tools/help_docs/))
    - **Installation Metrics**: Upon installation, Qodo Merge analyzes past PRs for key metrics (e.g., time to merge, time to first reviewer feedback), enabling pre/post-installation comparison to calculate ROI.

=== "Future Roadmap"
    - **Smart Update**: Upon PR updates, Qodo Merge will offer tailored code suggestions, addressing both the entire PR and the specific incremental changes since the last feedback.
    - **CLI Endpoint**: A new Qodo Merge endpoint will accept lists of before/after code changes, execute Qodo Merge commands, and return the results.
    - **Simplified Free Tier**: We plan to transition from a two-week free trial to a free tier offering a limited number of suggestions per month per organization.
    - **Best Practices Hierarchy**: Introducing support for structured best practices, such as for folders in monorepos or a unified best practice file for a group of repositories.
    - **Installation Metrics**: Upon installation, Qodo Merge will analyze past PRs for key metrics (e.g., time to merge, time to first reviewer feedback), enabling pre/post-installation comparison to calculate ROI.
@ -56,6 +56,21 @@ Everything below this marker is treated as previously auto-generated content and

{width=512}

### Sequence Diagram Support

When the `enable_pr_diagram` option is enabled in your configuration, the `/describe` tool will include a `Mermaid` sequence diagram in the PR description.

This diagram represents interactions between components/functions based on the diff content.

### How to enable

In your configuration:

```toml
[pr_description]
enable_pr_diagram = true
```

## Configuration options

!!! example "Possible configurations"
@ -109,6 +124,10 @@ Everything below this marker is treated as previously auto-generated content and

<td><b>enable_help_text</b></td>
<td>If set to true, the tool will display a help text in the comment. Default is false.</td>
</tr>
<tr>
<td><b>enable_pr_diagram</b></td>
<td>If set to true, the tool will generate a horizontal Mermaid flowchart summarizing the main pull request changes. This field remains empty if not applicable. Default is false.</td>
</tr>
</table>

## Inline file summary 💎
@ -26,6 +26,29 @@ You can state a name of a specific component in the PR to get documentation only

/add_docs component_name
```

## Manual triggering

Comment `/add_docs` on a PR to invoke it manually.

## Automatic triggering

To automatically run the `add_docs` tool when a pull request is opened, define in a [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/):

```toml
[github_app]
pr_commands = [
    "/add_docs",
    ...
]
```

The `pr_commands` list defines commands that run automatically when a PR is opened.
Since this is under the `[github_app]` section, it only applies when using the Qodo Merge GitHub App in GitHub environments.

!!! note
    By default, `/add_docs` is not triggered automatically. You must explicitly include it in `pr_commands` to enable this behavior.

## Configuration options

- `docs_style`: The exact style of the documentation (for Python docstrings). You can choose between: `google`, `numpy`, `sphinx`, `restructuredtext`, `plain`. Default is `sphinx`.
@ -7,50 +7,50 @@ It leverages LLM technology to transform PR comments and review suggestions into

## Usage Scenarios

=== "For Reviewers"

    Reviewers can request code changes by:

    1. Selecting the code block to be modified.
    2. Adding a comment with the syntax:

    ```
    /implement <code-change-description>
    ```

    {width=640}

=== "For PR Authors"

    PR authors can implement suggested changes by replying to a review comment using either:

    1. Add specific implementation details as described above

        ```
        /implement <code-change-description>
        ```

    2. Use the original review comment as instructions

        ```
        /implement
        ```

    {width=640}

=== "For Referencing Comments"

    You can reference and implement changes from any comment by:

    ```
    /implement <link-to-review-comment>
    ```

    {width=640}

    Note that the implementation will occur within the review discussion thread.

## Configuration options

- Use `/implement` to implement code changes within and based on the review discussion.
- Use `/implement <code-change-description>` inside a review discussion to implement specific instructions.
@ -288,45 +288,6 @@ We advise users to apply critical analysis and judgment when implementing the pr

In addition to mistakes (which may happen, but are rare), sometimes the presented code modification may serve more as an _illustrative example_ than a directly applicable solution.
In such cases, we recommend prioritizing the suggestion's detailed description, using the diff snippet primarily as a supporting reference.

### Chat on code suggestions

> `💎 feature` Platforms supported: GitHub, GitLab

Qodo Merge implements an orchestrator agent that enables interactive code discussions, listening and responding to comments without requiring explicit tool calls.
The orchestrator intelligently analyzes your responses to determine if you want to implement a suggestion, ask a question, or request help, then delegates to the appropriate specialized tool.

#### Setup and Activation

Enable interactive code discussions by adding the following to your configuration file (default is `True`):

```toml
[pr_code_suggestions]
enable_chat_in_code_suggestions = true
```

!!! info "Activating Dynamic Responses"
    To obtain dynamic responses, the following steps are required:

    1. Run the `/improve` command (mostly automatic)
    2. Tick the `/improve` recommendation checkboxes (_Apply this suggestion_) to have Qodo Merge generate a new inline code suggestion discussion
    3. The orchestrator agent will then automatically listen and reply to comments within the discussion without requiring additional commands

#### Explore the available interaction patterns:

!!! tip "Tip: Direct the agent with keywords"
    Use "implement" or "apply" for code generation. Use "explain", "why", or "how" for information and help.

=== "Asking for Details"
    {width=512}

=== "Implementing Suggestions"
    {width=512}

=== "Providing Additional Help"
    {width=512}

### Dual publishing mode

Our recommended approach for presenting code suggestions is through a [table](https://qodo-merge-docs.qodo.ai/tools/improve/#overview) (`--pr_code_suggestions.commitable_code_suggestions=false`).
@ -435,7 +396,7 @@ To enable auto-approval based on specific criteria, first, you need to enable th

enable_auto_approval = true
```

There are several criteria that can be set for auto-approval:

- **Review effort score**

@ -457,7 +418,19 @@ enable_auto_approval = true

auto_approve_for_no_suggestions = true
```

When no [code suggestions](https://www.qodo.ai/images/pr_agent/code_suggestions_as_comment_closed.png) were found for the PR, the PR will be auto-approved.

___

- **Ticket Compliance**

```toml
[config]
enable_auto_approval = true
ensure_ticket_compliance = true # Default is false
```

If `ensure_ticket_compliance` is set to `true`, auto-approval will be disabled if a ticket is linked to the PR and the ticket is not compliant (e.g., the `review` tool did not mark the PR as fully compliant with the ticket). This ensures that PRs are only auto-approved if their associated tickets are properly resolved.
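As a rough sketch of how these criteria interact (illustrative only; flag names mirror the configuration keys above, but this is not pr-agent's actual decision logic):

```python
def should_auto_approve(
    enable_auto_approval: bool,
    num_suggestions: int,
    auto_approve_for_no_suggestions: bool,
    ticket_linked: bool,
    ticket_compliant: bool,
    ensure_ticket_compliance: bool,
) -> bool:
    """Hypothetical combination of the auto-approval criteria described above."""
    if not enable_auto_approval:
        return False
    # Ticket compliance acts as a veto: a linked, non-compliant ticket blocks approval
    if ensure_ticket_compliance and ticket_linked and not ticket_compliant:
        return False
    # Approve when the improve run produced no code suggestions
    return auto_approve_for_no_suggestions and num_suggestions == 0

print(should_auto_approve(True, 0, True, False, False, True))  # True
print(should_auto_approve(True, 0, True, True, False, True))   # False
```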
### How many code suggestions are generated?
@ -70,6 +70,10 @@ extra_instructions = "..."

<td><b>enable_help_text</b></td>
<td>If set to true, the tool will display a help text in the comment. Default is true.</td>
</tr>
<tr>
<td><b>num_max_findings</b></td>
<td>Maximum number of returned findings. Default is 3.</td>
</tr>
</table>

!!! example "Enable\\disable specific sub-sections"
@ -112,13 +116,15 @@ extra_instructions = "..."

</tr>
<tr>
<td><b>enable_review_labels_effort</b></td>
<td>If set to true, the tool will publish a 'Review effort x/5' label (1–5 scale). Default is true.</td>
</tr>
</table>

## Usage Tips

### General guidelines

!!! tip ""

    The `review` tool provides a collection of configurable feedbacks about a PR.
    It is recommended to review the [Configuration options](#configuration-options) section, and choose the relevant options for your use case.
@ -128,7 +134,9 @@ extra_instructions = "..."

On the other hand, if you find one of the enabled features to be irrelevant for your use case, disable it. No default configuration can fit all use cases.

### Automation

!!! tip ""

    When you first install the Qodo Merge app, the [default mode](../usage-guide/automations_and_usage.md#github-app-automatic-tools-when-a-new-pr-is-opened) for the `review` tool is:
    ```
    pr_commands = ["/review", ...]
@ -136,16 +144,30 @@ extra_instructions = "..."

    Meaning the `review` tool will run automatically on every PR, without any additional configurations.
    Edit this field to enable/disable the tool, or to change the configurations used.

### Auto-generated PR labels by the Review Tool

!!! tip ""

    The `review` tool can automatically add labels to your Pull Requests:

    - **`possible security issue`**: This label is applied if the tool detects a potential [security vulnerability](https://github.com/qodo-ai/pr-agent/blob/main/pr_agent/settings/pr_reviewer_prompts.toml#L103) in the PR's code. This feedback is controlled by the 'enable_review_labels_security' flag (default is true).
    - **`review effort [x/5]`**: This label estimates the [effort](https://github.com/qodo-ai/pr-agent/blob/main/pr_agent/settings/pr_reviewer_prompts.toml#L90) required to review the PR on a relative scale of 1 to 5, where 'x' represents the assessed effort. This feedback is controlled by the 'enable_review_labels_effort' flag (default is true).
    - **`ticket compliance`**: Adds a label indicating the code compliance level ("Fully compliant" | "PR Code Verified" | "Partially compliant" | "Not compliant") for any GitHub/Jira/Linear ticket linked in the PR. Controlled by the 'require_ticket_labels' flag (default: false). If 'require_no_ticket_labels' is also enabled, PRs without ticket links will receive a "No ticket found" label.

### Blocking PRs from merging based on the generated labels

!!! tip ""

    You can configure a CI/CD Action to prevent merging PRs with specific labels. For example, implement a dedicated [GitHub Action](https://medium.com/sequra-tech/quick-tip-block-pull-request-merge-using-labels-6cc326936221).

    This approach helps ensure PRs with potential security issues or ticket compliance problems will not be merged without further review.

    Since AI may make mistakes or lack complete context, use this feature judiciously. For flexibility, users with appropriate permissions can remove generated labels when necessary. When a label is removed, this action will be automatically documented in the PR discussion, clearly indicating it was a deliberate override by an authorized user to allow the merge.

### Extra instructions

!!! tip ""

    Extra instructions are important.
    The `review` tool can be configured with extra instructions, which can be used to guide the model to a feedback tailored to the needs of your project.
@@ -164,7 +186,3 @@ extra_instructions = "..."

"""
```

Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

!!! tip "Code suggestions"

    The `review` tool previously included a legacy feature for providing code suggestions (controlled by `--pr_reviewer.num_code_suggestion`). This functionality has been deprecated and replaced by the [`improve`](./improve.md) tool, which offers higher-quality and more actionable code suggestions.
@@ -50,7 +50,7 @@ glob = ['*.py']

And to ignore Python files in all PRs using a `regex` pattern, set in a configuration file:

```
[regex]
[ignore]
regex = ['.*\.py$']
```
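The pattern above is applied with Python's standard `re` module. As a standalone sketch (the file names are invented for illustration), this is how a `regex` entry filters a PR's file list:

```python
import re

# Illustrative ignore list mirroring the configuration snippet above.
ignore_regex = [r'.*\.py$']
pr_files = ['src/app.py', 'README.md', 'tests/test_app.py', 'docs/guide.md']

# Files matching any ignore pattern are dropped from the review.
compiled = [re.compile(p) for p in ignore_regex]
kept = [f for f in pr_files if not any(r.match(f) for r in compiled)]
print(kept)  # ['README.md', 'docs/guide.md']
```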

@@ -30,7 +30,7 @@ verbosity_level=2

This is useful for debugging or experimenting with different tools.

3. **git provider**: The [git_provider](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L5) field in a configuration file determines the Git provider that will be used by Qodo Merge. Currently, the following providers are supported:
`github` **(default)**, `gitlab`, `bitbucket`, `azure`, `codecommit`, `local`, and `gerrit`.
`github` **(default)**, `gitlab`, `bitbucket`, `azure`, `codecommit`, `local`, `gitea`, and `gerrit`.

### CLI Health Check

@@ -312,3 +312,16 @@ pr_commands = [
    "/improve",
]
```

### Gitea Webhook

After setting up a Gitea webhook, to control which commands will run automatically when a new MR is opened, you can set the `pr_commands` parameter in the configuration file, similar to the GitHub App:

```toml
[gitea]
pr_commands = [
    "/describe",
    "/review",
    "/improve",
]
```

@@ -1,20 +1,22 @@

The different tools and sub-tools used by Qodo Merge are adjustable via the **[configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml)**.
The different tools and sub-tools used by Qodo Merge are adjustable via a Git configuration file.
There are three main ways to set persistent configurations:

In addition to general configuration options, each tool has its own configurations. For example, the `review` tool will use parameters from the [pr_reviewer](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L16) section in the configuration file.
See the [Tools Guide](https://qodo-merge-docs.qodo.ai/tools/) for a detailed description of the different tools and their configurations.

There are three ways to set persistent configurations:

1. Wiki configuration page 💎
2. Local configuration file
3. Global configuration file 💎
1. [Wiki](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#wiki-configuration-file) configuration page 💎
2. [Local](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#local-configuration-file) configuration file
3. [Global](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#global-configuration-file) configuration file 💎

In terms of precedence, wiki configurations will override local configurations, and local configurations will override global configurations.
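The precedence rule can be pictured as a plain dictionary merge in which later sources win. This is only an illustrative sketch of the described behavior; the keys and values here are invented:

```python
# Later dicts win, mirroring the precedence: wiki > local > global.
global_cfg = {"publish_labels": True, "verbosity_level": 0}
local_cfg = {"verbosity_level": 2}
wiki_cfg = {"publish_labels": False}

effective = {**global_cfg, **local_cfg, **wiki_cfg}
print(effective)  # {'publish_labels': False, 'verbosity_level': 2}
```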

!!! tip "Tip1: edit only what you need"

    For a list of all possible configurations, see the [configuration options](https://github.com/qodo-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml/) page.
    In addition to general configuration options, each tool has its own configurations. For example, the `review` tool will use parameters from the [pr_reviewer](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L16) section in the configuration file.

!!! tip "Tip1: Edit only what you need"

    Keep your configuration file minimal, and edit only the relevant values. Don't copy the entire set of configuration options, since that can lead to legacy problems when something changes.

!!! tip "Tip2: show relevant configurations"

    If you set `config.output_relevant_configurations=true`, each tool will also output its relevant configurations in a collapsible section. This can be useful for debugging, or for getting to know the configurations better.

!!! tip "Tip2: Show relevant configurations"

    If you set `config.output_relevant_configurations` to True, each tool will also output its relevant configurations in a collapsible section. This can be useful for debugging, or for getting to know the configurations better.

## Wiki configuration file 💎

@@ -12,6 +12,7 @@ It includes information on how to adjust Qodo Merge configurations, define which

- [GitHub App](./automations_and_usage.md#github-app)
- [GitHub Action](./automations_and_usage.md#github-action)
- [GitLab Webhook](./automations_and_usage.md#gitlab-webhook)
- [Gitea Webhook](./automations_and_usage.md#gitea-webhook)
- [BitBucket App](./automations_and_usage.md#bitbucket-app)
- [Azure DevOps Provider](./automations_and_usage.md#azure-devops-provider)
- [Managing Mail Notifications](./mail_notifications.md)
@@ -44,11 +44,13 @@ nav:

      - Core Abilities:
          - 'core-abilities/index.md'
          - Auto best practices: 'core-abilities/auto_best_practices.md'
          - Chat on code suggestions: 'core-abilities/chat_on_code_suggestions.md'
          - Code validation: 'core-abilities/code_validation.md'
          - Compression strategy: 'core-abilities/compression_strategy.md'
          - Dynamic context: 'core-abilities/dynamic_context.md'
          - Fetching ticket context: 'core-abilities/fetching_ticket_context.md'
          - Impact evaluation: 'core-abilities/impact_evaluation.md'
          - Incremental Update: 'core-abilities/incremental_update.md'
          - Interactivity: 'core-abilities/interactivity.md'
          - Local and global metadata: 'core-abilities/metadata.md'
          - RAG context enrichment: 'core-abilities/rag_context_enrichment.md'
@@ -53,19 +53,24 @@ MAX_TOKENS = {

    'vertex_ai/claude-3-5-haiku@20241022': 100000,
    'vertex_ai/claude-3-sonnet@20240229': 100000,
    'vertex_ai/claude-3-opus@20240229': 100000,
    'vertex_ai/claude-opus-4@20250514': 200000,
    'vertex_ai/claude-3-5-sonnet@20240620': 100000,
    'vertex_ai/claude-3-5-sonnet-v2@20241022': 100000,
    'vertex_ai/claude-3-7-sonnet@20250219': 200000,
    'vertex_ai/claude-sonnet-4@20250514': 200000,
    'vertex_ai/gemini-1.5-pro': 1048576,
    'vertex_ai/gemini-2.5-pro-preview-03-25': 1048576,
    'vertex_ai/gemini-2.5-pro-preview-05-06': 1048576,
    'vertex_ai/gemini-1.5-flash': 1048576,
    'vertex_ai/gemini-2.0-flash': 1048576,
    'vertex_ai/gemini-2.5-flash-preview-04-17': 1048576,
    'vertex_ai/gemini-2.5-flash-preview-05-20': 1048576,
    'vertex_ai/gemma2': 8200,
    'gemini/gemini-1.5-pro': 1048576,
    'gemini/gemini-1.5-flash': 1048576,
    'gemini/gemini-2.0-flash': 1048576,
    'gemini/gemini-2.5-flash-preview-04-17': 1048576,
    'gemini/gemini-2.5-flash-preview-05-20': 1048576,
    'gemini/gemini-2.5-pro-preview-03-25': 1048576,
    'gemini/gemini-2.5-pro-preview-05-06': 1048576,
    'codechat-bison': 6144,
@@ -74,22 +79,28 @@ MAX_TOKENS = {

    'anthropic.claude-v1': 100000,
    'anthropic.claude-v2': 100000,
    'anthropic/claude-3-opus-20240229': 100000,
    'anthropic/claude-opus-4-20250514': 200000,
    'anthropic/claude-3-5-sonnet-20240620': 100000,
    'anthropic/claude-3-5-sonnet-20241022': 100000,
    'anthropic/claude-3-7-sonnet-20250219': 200000,
    'anthropic/claude-sonnet-4-20250514': 200000,
    'claude-3-7-sonnet-20250219': 200000,
    'anthropic/claude-3-5-haiku-20241022': 100000,
    'bedrock/anthropic.claude-instant-v1': 100000,
    'bedrock/anthropic.claude-v2': 100000,
    'bedrock/anthropic.claude-v2:1': 100000,
    'bedrock/anthropic.claude-3-sonnet-20240229-v1:0': 100000,
    'bedrock/anthropic.claude-opus-4-20250514-v1:0': 200000,
    'bedrock/anthropic.claude-3-haiku-20240307-v1:0': 100000,
    'bedrock/anthropic.claude-3-5-haiku-20241022-v1:0': 100000,
    'bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0': 100000,
    'bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0': 100000,
    'bedrock/anthropic.claude-3-7-sonnet-20250219-v1:0': 200000,
    'bedrock/anthropic.claude-sonnet-4-20250514-v1:0': 200000,
    "bedrock/us.anthropic.claude-opus-4-20250514-v1:0": 200000,
    "bedrock/us.anthropic.claude-3-5-sonnet-20241022-v2:0": 100000,
    "bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0": 200000,
    "bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
    'claude-3-5-sonnet': 100000,
    'groq/meta-llama/llama-4-scout-17b-16e-instruct': 131072,
    'groq/meta-llama/llama-4-maverick-17b-128e-instruct': 131072,
@@ -102,9 +113,13 @@ MAX_TOKENS = {

    'xai/grok-2': 131072,
    'xai/grok-2-1212': 131072,
    'xai/grok-2-latest': 131072,
    'xai/grok-3': 131072,
    'xai/grok-3-beta': 131072,
    'xai/grok-3-fast': 131072,
    'xai/grok-3-fast-beta': 131072,
    'xai/grok-3-mini': 131072,
    'xai/grok-3-mini-beta': 131072,
    'xai/grok-3-mini-fast': 131072,
    'xai/grok-3-mini-fast-beta': 131072,
    'ollama/llama3': 4096,
    'watsonx/meta-llama/llama-3-8b-instruct': 4096,
@@ -1,13 +1,17 @@

_LANGCHAIN_INSTALLED = False

try:
    from langchain_core.messages import HumanMessage, SystemMessage
    from langchain_openai import AzureChatOpenAI, ChatOpenAI
    _LANGCHAIN_INSTALLED = True
except ImportError:  # we don't enforce langchain as a dependency, so if it's not installed, just move on
    pass

import functools

from openai import APIError, RateLimitError, Timeout
from retry import retry
import openai
from tenacity import retry, retry_if_exception_type, retry_if_not_exception_type, stop_after_attempt
from langchain_core.runnables import Runnable

from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.config_loader import get_settings
@@ -18,17 +22,14 @@ OPENAI_RETRIES = 5

class LangChainOpenAIHandler(BaseAiHandler):
    def __init__(self):
        # Initialize OpenAIHandler specific attributes here
        if not _LANGCHAIN_INSTALLED:
            error_msg = "LangChain is not installed. Please install it with `pip install langchain`."
            get_logger().error(error_msg)
            raise ImportError(error_msg)

        super().__init__()
        self.azure = get_settings().get("OPENAI.API_TYPE", "").lower() == "azure"

        # Create a default, unused chat object to trigger early validation
        self._create_chat(self.deployment_id)

    def chat(self, messages: list, model: str, temperature: float):
        chat = self._create_chat(self.deployment_id)
        return chat.invoke(input=messages, model=model, temperature=temperature)

    @property
    def deployment_id(self):
        """
@@ -36,26 +37,10 @@ class LangChainOpenAIHandler(BaseAiHandler):
        """
        return get_settings().get("OPENAI.DEPLOYMENT_ID", None)

    @retry(exceptions=(APIError, Timeout, AttributeError, RateLimitError),
           tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
    async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
        try:
            messages = [SystemMessage(content=system), HumanMessage(content=user)]

            # get a chat completion from the formatted messages
            resp = self.chat(messages, model=model, temperature=temperature)
            finish_reason = "completed"
            return resp.content, finish_reason

        except Exception as e:
            get_logger().error("Unknown error during OpenAI inference: ", e)
            raise e
    def _create_chat(self, deployment_id=None):
    async def _create_chat_async(self, deployment_id=None):
        try:
            if self.azure:
                # using a partial function so we can set the deployment_id later to support fallback_deployments,
                # but still need to access the other settings now so we can raise a proper exception if they're missing
                # Using the Azure OpenAI service
                return AzureChatOpenAI(
                    openai_api_key=get_settings().openai.key,
                    openai_api_version=get_settings().openai.api_version,
@@ -63,14 +48,64 @@ class LangChainOpenAIHandler(BaseAiHandler):
                    azure_endpoint=get_settings().openai.api_base,
                )
            else:
                # for LLMs that are compatible with OpenAI, a custom api base should be used
                # Using standard OpenAI or other LLM services
                openai_api_base = get_settings().get("OPENAI.API_BASE", None)
                if openai_api_base is None or len(openai_api_base) == 0:
                    return ChatOpenAI(openai_api_key=get_settings().openai.key)
                else:
                    return ChatOpenAI(openai_api_key=get_settings().openai.key, openai_api_base=openai_api_base)
                    return ChatOpenAI(
                        openai_api_key=get_settings().openai.key,
                        openai_api_base=openai_api_base
                    )
        except AttributeError as e:
            if getattr(e, "name", None):
                raise ValueError(f"OpenAI {e.name} is required") from e
            # Handle configuration errors
            error_msg = f"OpenAI {e.name} is required" if getattr(e, "name", None) else str(e)
            get_logger().error(error_msg)
            raise ValueError(error_msg) from e
    @retry(
        retry=retry_if_exception_type(openai.APIError) & retry_if_not_exception_type(openai.RateLimitError),
        stop=stop_after_attempt(OPENAI_RETRIES),
    )
    async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2, img_path: str = None):
        if img_path:
            get_logger().warning(f"Image path is not supported for LangChainOpenAIHandler. Ignoring image path: {img_path}")
        try:
            messages = [SystemMessage(content=system), HumanMessage(content=user)]
            llm = await self._create_chat_async(deployment_id=self.deployment_id)

            if not isinstance(llm, Runnable):
                error_message = (
                    f"The Langchain LLM object ({type(llm)}) does not implement the Runnable interface. "
                    f"Please update your Langchain library to the latest version or "
                    f"check your LLM configuration to support async calls. "
                    f"PR-Agent is designed to utilize Langchain's async capabilities."
                )
                get_logger().error(error_message)
                raise NotImplementedError(error_message)

            # Handle parameters based on LLM type
            if isinstance(llm, (ChatOpenAI, AzureChatOpenAI)):
                # OpenAI models support all parameters
                resp = await llm.ainvoke(
                    input=messages,
                    model=model,
                    temperature=temperature
                )
            else:
                raise e
                # Other LLMs (like Gemini) only support the input parameter
                get_logger().info(f"Using simplified ainvoke for {type(llm)}")
                resp = await llm.ainvoke(input=messages)

            finish_reason = "completed"
            return resp.content, finish_reason

        except openai.RateLimitError as e:
            get_logger().error(f"Rate limit error during LLM inference: {e}")
            raise
        except openai.APIError as e:
            get_logger().warning(f"Error during LLM inference: {e}")
            raise
        except Exception as e:
            get_logger().warning(f"Unknown error during LLM inference: {e}")
            raise openai.APIError from e
@@ -3,7 +3,7 @@ import litellm
import openai
import requests
from litellm import acompletion
from tenacity import retry, retry_if_exception_type, stop_after_attempt
from tenacity import retry, retry_if_exception_type, retry_if_not_exception_type, stop_after_attempt

from pr_agent.algo import CLAUDE_EXTENDED_THINKING_MODELS, NO_SUPPORT_TEMPERATURE_MODELS, SUPPORT_REASONING_EFFORT_MODELS, USER_MESSAGE_ONLY_MODELS
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
@@ -274,8 +274,8 @@ class LiteLLMAIHandler(BaseAiHandler):
        return get_settings().get("OPENAI.DEPLOYMENT_ID", None)

    @retry(
        retry=retry_if_exception_type((openai.APIError, openai.APIConnectionError, openai.APITimeoutError)),  # No retry on RateLimitError
        stop=stop_after_attempt(OPENAI_RETRIES)
        retry=retry_if_exception_type(openai.APIError) & retry_if_not_exception_type(openai.RateLimitError),
        stop=stop_after_attempt(OPENAI_RETRIES),
    )
    async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2, img_path: str = None):
        try:
@@ -371,13 +371,13 @@ class LiteLLMAIHandler(BaseAiHandler):
            get_logger().info(f"\nUser prompt:\n{user}")

            response = await acompletion(**kwargs)
        except (openai.APIError, openai.APITimeoutError) as e:
            get_logger().warning(f"Error during LLM inference: {e}")
            raise
        except (openai.RateLimitError) as e:
        except openai.RateLimitError as e:
            get_logger().error(f"Rate limit error during LLM inference: {e}")
            raise
        except (Exception) as e:
        except openai.APIError as e:
            get_logger().warning(f"Error during LLM inference: {e}")
            raise
        except Exception as e:
            get_logger().warning(f"Unknown error during LLM inference: {e}")
            raise openai.APIError from e
        if response is None or len(response["choices"]) == 0:
@@ -1,8 +1,8 @@

from os import environ
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
import openai
from openai import APIError, AsyncOpenAI, RateLimitError, Timeout
from retry import retry
from openai import AsyncOpenAI
from tenacity import retry, retry_if_exception_type, retry_if_not_exception_type, stop_after_attempt

from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.config_loader import get_settings
@@ -38,10 +38,14 @@ class OpenAIHandler(BaseAiHandler):

        """
        return get_settings().get("OPENAI.DEPLOYMENT_ID", None)

    @retry(exceptions=(APIError, Timeout, AttributeError, RateLimitError),
           tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
    async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
    @retry(
        retry=retry_if_exception_type(openai.APIError) & retry_if_not_exception_type(openai.RateLimitError),
        stop=stop_after_attempt(OPENAI_RETRIES),
    )
    async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2, img_path: str = None):
        try:
            if img_path:
                get_logger().warning(f"Image path is not supported for OpenAIHandler. Ignoring image path: {img_path}")
            get_logger().info("System: ", system)
            get_logger().info("User: ", user)
            messages = [{"role": "system", "content": system}, {"role": "user", "content": user}]
@@ -57,12 +61,12 @@ class OpenAIHandler(BaseAiHandler):

            get_logger().info("AI response", response=resp, messages=messages, finish_reason=finish_reason,
                              model=model, usage=usage)
            return resp, finish_reason
        except (APIError, Timeout) as e:
            get_logger().error("Error during OpenAI inference: ", e)
        except openai.RateLimitError as e:
            get_logger().error(f"Rate limit error during LLM inference: {e}")
            raise
        except (RateLimitError) as e:
            get_logger().error("Rate limit error during OpenAI inference: ", e)
            raise
        except (Exception) as e:
            get_logger().error("Unknown error during OpenAI inference: ", e)
        except openai.APIError as e:
            get_logger().warning(f"Error during LLM inference: {e}")
            raise
        except Exception as e:
            get_logger().warning(f"Unknown error during LLM inference: {e}")
            raise openai.APIError from e
@@ -58,6 +58,9 @@ def filter_ignored(files, platform = 'github'):

            files = files_o
        elif platform == 'azure':
            files = [f for f in files if not r.match(f)]
        elif platform == 'gitea':
            files = [f for f in files if not r.match(f.get("filename", ""))]

    except Exception as e:
        print(f"Could not filter file list: {e}")
@@ -1,4 +1,6 @@

from threading import Lock
from math import ceil
import re

from jinja2 import Environment, StrictUndefined
from tiktoken import encoding_for_model, get_encoding
@@ -7,6 +9,16 @@ from pr_agent.config_loader import get_settings

from pr_agent.log import get_logger


class ModelTypeValidator:
    @staticmethod
    def is_openai_model(model_name: str) -> bool:
        return 'gpt' in model_name or re.match(r"^o[1-9](-mini|-preview)?$", model_name) is not None

    @staticmethod
    def is_anthropic_model(model_name: str) -> bool:
        return 'claude' in model_name


class TokenEncoder:
    _encoder_instance = None
    _model = None
@@ -40,6 +52,10 @@ class TokenHandler:

    method.
    """

    # Constants
    CLAUDE_MODEL = "claude-3-7-sonnet-20250219"
    CLAUDE_MAX_CONTENT_SIZE = 9_000_000  # Maximum allowed content size (9 MB) for the Claude token-counting API

    def __init__(self, pr=None, vars: dict = {}, system="", user=""):
        """
        Initializes the TokenHandler object.
@@ -51,6 +67,7 @@ class TokenHandler:

        - user: The user string.
        """
        self.encoder = TokenEncoder.get_token_encoder()

        if pr is not None:
            self.prompt_tokens = self._get_system_user_tokens(pr, self.encoder, vars, system, user)
@@ -79,22 +96,22 @@ class TokenHandler:

            get_logger().error(f"Error in _get_system_user_tokens: {e}")
            return 0

    def calc_claude_tokens(self, patch):
    def _calc_claude_tokens(self, patch: str) -> int:
        try:
            import anthropic
            from pr_agent.algo import MAX_TOKENS

            client = anthropic.Anthropic(api_key=get_settings(use_context=False).get('anthropic.key'))
            MaxTokens = MAX_TOKENS[get_settings().config.model]
            max_tokens = MAX_TOKENS[get_settings().config.model]

            # Check if the content size is too large (9MB limit)
            if len(patch.encode('utf-8')) > 9_000_000:
            if len(patch.encode('utf-8')) > self.CLAUDE_MAX_CONTENT_SIZE:
                get_logger().warning(
                    "Content too large for Anthropic token counting API, falling back to local tokenizer"
                )
                return MaxTokens
                return max_tokens

            response = client.messages.count_tokens(
                model="claude-3-7-sonnet-20250219",
                model=self.CLAUDE_MODEL,
                system="system",
                messages=[{
                    "role": "user",
@@ -104,42 +121,51 @@ class TokenHandler:
            return response.input_tokens

        except Exception as e:
            get_logger().error(f"Error in Anthropic token counting: {e}")
            return MaxTokens
            get_logger().error(f"Error in Anthropic token counting: {e}")
            return max_tokens

    def estimate_token_count_for_non_anth_claude_models(self, model, default_encoder_estimate):
        from math import ceil
        import re
    def _apply_estimation_factor(self, model_name: str, default_estimate: int) -> int:
        factor = 1 + get_settings().get('config.model_token_count_estimate_factor', 0)
        get_logger().warning(f"{model_name}'s token count cannot be accurately estimated. Using factor of {factor}")

        return ceil(factor * default_estimate)

        model_is_from_o_series = re.match(r"^o[1-9](-mini|-preview)?$", model)
        if ('gpt' in get_settings().config.model.lower() or model_is_from_o_series) and get_settings(use_context=False).get('openai.key'):
            return default_encoder_estimate
        # else: the model is not an OpenAI one; an accurate token count cannot be provided, so return a higher number as a best effort.
    def _get_token_count_by_model_type(self, patch: str, default_estimate: int) -> int:
        """
        Get the token count based on the model type.

        elbow_factor = 1 + get_settings().get('config.model_token_count_estimate_factor', 0)
        get_logger().warning(f"{model}'s expected token count cannot be accurately estimated. Using {elbow_factor} of encoder output as best effort estimate")
        return ceil(elbow_factor * default_encoder_estimate)
        Args:
            patch: The text to count tokens for.
            default_estimate: The default token count estimate.

    def count_tokens(self, patch: str, force_accurate=False) -> int:
        Returns:
            int: The calculated token count.
        """
        model_name = get_settings().config.model.lower()

        if ModelTypeValidator.is_openai_model(model_name) and get_settings(use_context=False).get('openai.key'):
            return default_estimate

        if ModelTypeValidator.is_anthropic_model(model_name) and get_settings(use_context=False).get('anthropic.key'):
            return self._calc_claude_tokens(patch)

        return self._apply_estimation_factor(model_name, default_estimate)

    def count_tokens(self, patch: str, force_accurate: bool = False) -> int:
        """
        Counts the number of tokens in a given patch string.

        Args:
        - patch: The patch string.
        - force_accurate: If True, uses a more precise calculation method.

        Returns:
        The number of tokens in the patch string.
        """
        encoder_estimate = len(self.encoder.encode(patch, disallowed_special=()))

        #If an estimate is enough (for example, in cases where the maximal allowed tokens is way below the known limits), return it.
        # If an estimate is enough (for example, in cases where the maximal allowed tokens is way below the known limits), return it.
        if not force_accurate:
            return encoder_estimate

        # else, force_accurate==True: the user requested an accurate estimation:
        model = get_settings().config.model.lower()
        if 'claude' in model and get_settings(use_context=False).get('anthropic.key'):
            return self.calc_claude_tokens(patch)  # API call to Anthropic for accurate token counting for Claude models

        # else: a model not provided by Anthropic:
        return self.estimate_token_count_for_non_anth_claude_models(model, encoder_estimate)
        return self._get_token_count_by_model_type(patch, encoder_estimate)
@@ -945,12 +945,66 @@ def clip_tokens(text: str, max_tokens: int, add_three_dots=True, num_input_token
    """
    Clip the number of tokens in a string to a maximum number of tokens.

    This function limits text to a specified token count by calculating the approximate
    character-to-token ratio and truncating the text accordingly. A safety factor of 0.9
    (10% reduction) is applied to ensure the result stays within the token limit.

    Args:
        text (str): The string to clip.
        text (str): The string to clip. If empty or None, returns the input unchanged.
        max_tokens (int): The maximum number of tokens allowed in the string.
        add_three_dots (bool, optional): A boolean indicating whether to add three dots at the end of the clipped
            If negative, returns an empty string.
        add_three_dots (bool, optional): Whether to add "\\n...(truncated)" at the end
            of the clipped text to indicate truncation.
            Defaults to True.
        num_input_tokens (int, optional): Pre-computed number of tokens in the input text.
            If provided, skips the token encoding step for efficiency.
            If None, tokens will be counted using TokenEncoder.
            Defaults to None.
        delete_last_line (bool, optional): Whether to remove the last line from the
            clipped content before adding the truncation indicator.
            Useful for ensuring clean breaks at line boundaries.
            Defaults to False.

    Returns:
        str: The clipped string.
        str: The clipped string. Returns the original text if:
            - Text is empty/None
            - Token count is within the limit
            - An error occurs during processing

        Returns an empty string if max_tokens <= 0.

    Examples:
        Basic usage:
            >>> text = "This is a sample text that might be too long"
            >>> result = clip_tokens(text, max_tokens=10)
            >>> print(result)
            This is a sample...
            (truncated)

        Without truncation indicator:
            >>> result = clip_tokens(text, max_tokens=10, add_three_dots=False)
            >>> print(result)
            This is a sample

        With pre-computed token count:
            >>> result = clip_tokens(text, max_tokens=5, num_input_tokens=15)
            >>> print(result)
            This...
            (truncated)

        With line deletion:
            >>> multiline_text = "Line 1\\nLine 2\\nLine 3"
            >>> result = clip_tokens(multiline_text, max_tokens=3, delete_last_line=True)
            >>> print(result)
            Line 1
            Line 2
            ...
            (truncated)

    Notes:
        The function uses a safety factor of 0.9 (10% reduction) to ensure the
        result stays within the token limit, as character-to-token ratios can vary.
        If token encoding fails, the original text is returned with a warning logged.
    """
    if not text:
        return text
@@ -8,9 +8,11 @@ from pr_agent.git_providers.bitbucket_server_provider import \

from pr_agent.git_providers.codecommit_provider import CodeCommitProvider
from pr_agent.git_providers.gerrit_provider import GerritProvider
from pr_agent.git_providers.git_provider import GitProvider
from pr_agent.git_providers.gitea_provider import GiteaProvider
from pr_agent.git_providers.github_provider import GithubProvider
from pr_agent.git_providers.gitlab_provider import GitLabProvider
from pr_agent.git_providers.local_git_provider import LocalGitProvider
from pr_agent.git_providers.gitea_provider import GiteaProvider

_GIT_PROVIDERS = {
    'github': GithubProvider,
@@ -21,6 +23,7 @@ _GIT_PROVIDERS = {
    'codecommit': CodeCommitProvider,
    'local': LocalGitProvider,
    'gerrit': GerritProvider,
    'gitea': GiteaProvider
}
|
||||
get_logger().debug(f"Skipping publish_comment for temporary comment: {pr_comment}")
|
||||
return None
|
||||
comment = Comment(content=pr_comment)
|
||||
thread = CommentThread(comments=[comment], thread_context=thread_context, status="closed")
|
||||
# Set status to 'active' to prevent auto-resolve (see CommentThreadStatus docs)
|
||||
thread = CommentThread(comments=[comment], thread_context=thread_context, status='active')
|
||||
thread_response = self.azure_devops_client.create_thread(
|
||||
comment_thread=thread,
|
||||
project=self.workspace_slug,
|
||||
@ -618,7 +619,7 @@ class AzureDevopsProvider(GitProvider):
|
||||
return pr_id
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Failed to get pr id, error: {e}")
|
||||
get_logger().info(f"Failed to get PR id, error: {e}")
|
||||
return ""
|
||||
|
||||
def publish_file_comments(self, file_comments: list) -> bool:
|
||||
|
pr_agent/git_providers/gitea_provider.py (new file, 992 lines)
@@ -0,0 +1,992 @@
import hashlib
import json
from typing import Any, Dict, List, Optional, Set, Tuple
from urllib.parse import urlparse

import giteapy
from giteapy.rest import ApiException

from pr_agent.algo.file_filter import filter_ignored
from pr_agent.algo.language_handler import is_valid_file
from pr_agent.algo.types import EDIT_TYPE
from pr_agent.algo.utils import (clip_tokens,
                                 find_line_number_of_relevant_line_in_file)
from pr_agent.config_loader import get_settings
from pr_agent.git_providers.git_provider import (MAX_FILES_ALLOWED_FULL,
                                                 FilePatchInfo, GitProvider,
                                                 IncrementalPR)
from pr_agent.log import get_logger


class GiteaProvider(GitProvider):
    def __init__(self, url: Optional[str] = None):
        super().__init__()
        self.logger = get_logger()

        if not url:
            self.logger.error("PR URL not provided.")
            raise ValueError("PR URL not provided.")

        self.base_url = get_settings().get("GITEA.URL", "https://gitea.com").rstrip("/")
        self.pr_url = ""
        self.issue_url = ""

        gitea_access_token = get_settings().get("GITEA.PERSONAL_ACCESS_TOKEN", None)
        if not gitea_access_token:
            self.logger.error("Gitea access token not found in settings.")
            raise ValueError("Gitea access token not found in settings.")

        self.repo_settings = get_settings().get("GITEA.REPO_SETTING", None)
        configuration = giteapy.Configuration()
        configuration.host = "{}/api/v1".format(self.base_url)
        configuration.api_key['Authorization'] = f'token {gitea_access_token}'

        client = giteapy.ApiClient(configuration)
        self.repo_api = RepoApi(client)
        self.owner = None
        self.repo = None
        self.pr_number = None
        self.issue_number = None
        self.max_comment_chars = 65000
        self.enabled_pr = False
        self.enabled_issue = False
        self.temp_comments = []
        self.pr = None
        self.git_files = []
        self.file_contents = {}
        self.file_diffs = {}
        self.sha = None
        self.diff_files = []
        self.incremental = IncrementalPR(False)
        self.comments_list = []
        self.unreviewed_files_set = dict()

        if "pulls" in url:
            self.pr_url = url
            self.__set_repo_and_owner_from_pr()
            self.enabled_pr = True
            self.pr = self.repo_api.get_pull_request(
                owner=self.owner,
                repo=self.repo,
                pr_number=self.pr_number
            )
            self.git_files = self.repo_api.get_change_file_pull_request(
                owner=self.owner,
                repo=self.repo,
                pr_number=self.pr_number
            )
            # Optionally filter out files using user-defined ignore patterns
            self.git_files = filter_ignored(self.git_files, platform="gitea")

            self.sha = self.pr.head.sha if self.pr.head.sha else ""
            self.__add_file_content()
            self.__add_file_diff()
            self.pr_commits = self.repo_api.list_all_commits(
                owner=self.owner,
                repo=self.repo
            )
            self.last_commit = self.pr_commits[-1]
            self.base_sha = self.pr.base.sha if self.pr.base.sha else ""
            self.base_ref = self.pr.base.ref if self.pr.base.ref else ""
        elif "issues" in url:
            self.issue_url = url
            self.__set_repo_and_owner_from_issue()
            self.enabled_issue = True
        else:
            self.pr_commits = None
    def __add_file_content(self):
        for file in self.git_files:
            file_path = file.get("filename")
            # Skip files excluded by the default settings
            if not is_valid_file(file_path):
                continue

            if file_path and self.sha:
                try:
                    content = self.repo_api.get_file_content(
                        owner=self.owner,
                        repo=self.repo,
                        commit_sha=self.sha,
                        filepath=file_path
                    )
                    self.file_contents[file_path] = content
                except ApiException as e:
                    self.logger.error(f"Error getting file content for {file_path}: {str(e)}")
                    self.file_contents[file_path] = ""

    def __add_file_diff(self):
        try:
            diff_contents = self.repo_api.get_pull_request_diff(
                owner=self.owner,
                repo=self.repo,
                pr_number=self.pr_number
            )

            lines = diff_contents.splitlines()
            current_file = None
            current_patch = []
            file_patches = {}
            for line in lines:
                if line.startswith('diff --git'):
                    if current_file and current_patch:
                        file_patches[current_file] = '\n'.join(current_patch)
                        current_patch = []
                    current_file = line.split(' b/')[-1]
                elif line.startswith('@@'):
                    current_patch = [line]
                elif current_patch:
                    current_patch.append(line)

            if current_file and current_patch:
                file_patches[current_file] = '\n'.join(current_patch)

            self.file_diffs = file_patches
        except Exception as e:
            self.logger.error(f"Error getting diff content: {str(e)}")

    def _parse_pr_url(self, pr_url: str) -> Tuple[str, str, int]:
        parsed_url = urlparse(pr_url)

        if parsed_url.path.startswith('/api/v1'):
            parsed_url = urlparse(pr_url.replace("/api/v1", ""))

        path_parts = parsed_url.path.strip('/').split('/')
        if len(path_parts) < 4 or path_parts[2] != 'pulls':
            raise ValueError("The provided URL does not appear to be a Gitea PR URL")

        try:
            pr_number = int(path_parts[3])
        except ValueError as e:
            raise ValueError("Unable to convert PR number to integer") from e

        owner = path_parts[0]
        repo = path_parts[1]

        return owner, repo, pr_number

    def _parse_issue_url(self, issue_url: str) -> Tuple[str, str, int]:
        parsed_url = urlparse(issue_url)

        if parsed_url.path.startswith('/api/v1'):
            parsed_url = urlparse(issue_url.replace("/api/v1", ""))

        path_parts = parsed_url.path.strip('/').split('/')
        if len(path_parts) < 4 or path_parts[2] != 'issues':
            raise ValueError("The provided URL does not appear to be a Gitea issue URL")

        try:
            issue_number = int(path_parts[3])
        except ValueError as e:
            raise ValueError("Unable to convert issue number to integer") from e

        owner = path_parts[0]
        repo = path_parts[1]

        return owner, repo, issue_number

    def __set_repo_and_owner_from_pr(self):
        """Extract owner and repo from the PR URL"""
        try:
            owner, repo, pr_number = self._parse_pr_url(self.pr_url)
            self.owner = owner
            self.repo = repo
            self.pr_number = pr_number
            self.logger.info(f"Owner: {self.owner}, Repo: {self.repo}, PR Number: {self.pr_number}")
        except ValueError as e:
            self.logger.error(f"Error parsing PR URL: {str(e)}")
        except Exception as e:
            self.logger.error(f"Unexpected error: {str(e)}")

    def __set_repo_and_owner_from_issue(self):
        """Extract owner and repo from the issue URL"""
        try:
            owner, repo, issue_number = self._parse_issue_url(self.issue_url)
            self.owner = owner
            self.repo = repo
            self.issue_number = issue_number
            self.logger.info(f"Owner: {self.owner}, Repo: {self.repo}, Issue Number: {self.issue_number}")
        except ValueError as e:
            self.logger.error(f"Error parsing issue URL: {str(e)}")
        except Exception as e:
            self.logger.error(f"Unexpected error: {str(e)}")
    def get_pr_url(self) -> str:
        return self.pr_url

    def get_issue_url(self) -> str:
        return self.issue_url

    def publish_comment(self, comment: str, is_temporary: bool = False) -> None:
        """Publish a comment to the pull request"""
        if is_temporary and not get_settings().config.publish_output_progress:
            get_logger().debug("Skipping publish_comment for temporary comment")
            return None

        if self.enabled_issue:
            index = self.issue_number
        elif self.enabled_pr:
            index = self.pr_number
        else:
            self.logger.error("Neither PR nor issue URL provided.")
            return None

        comment = self.limit_output_characters(comment, self.max_comment_chars)
        response = self.repo_api.create_comment(
            owner=self.owner,
            repo=self.repo,
            index=index,
            comment=comment
        )

        if not response:
            self.logger.error("Failed to publish comment")
            return None

        if is_temporary:
            self.temp_comments.append(comment)

        comment_obj = {
            "is_temporary": is_temporary,
            "comment": comment,
            "comment_id": response[0].id if isinstance(response, tuple) else response.id
        }
        self.comments_list.append(comment_obj)
        self.logger.info("Comment published")
        return comment_obj

    def edit_comment(self, comment, body: str):
        body = self.limit_output_characters(body, self.max_comment_chars)
        try:
            self.repo_api.edit_comment(
                owner=self.owner,
                repo=self.repo,
                comment_id=comment.get("comment_id") if isinstance(comment, dict) else comment.id,
                comment=body
            )
        except ApiException as e:
            self.logger.error(f"Error editing comment: {e}")
            return None
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return None

    def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str, original_suggestion=None):
        """Publish an inline comment on a specific line"""
        body = self.limit_output_characters(body, self.max_comment_chars)
        position, absolute_position = find_line_number_of_relevant_line_in_file(self.diff_files,
                                                                                relevant_file.strip('`'),
                                                                                relevant_line_in_file)
        if position == -1:
            get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
            subject_type = "FILE"
        else:
            subject_type = "LINE"

        path = relevant_file.strip()
        payload = dict(body=body, path=path, old_position=position, new_position=absolute_position) if subject_type == "LINE" else {}
        self.publish_inline_comments([payload])

    def publish_inline_comments(self, comments: List[Dict[str, Any]], body: str = "Inline comment") -> None:
        response = self.repo_api.create_inline_comment(
            owner=self.owner,
            repo=self.repo,
            pr_number=self.pr_number if self.enabled_pr else self.issue_number,
            body=body,
            commit_id=self.last_commit.sha if self.last_commit else "",
            comments=comments
        )

        if not response:
            self.logger.error("Failed to publish inline comment")
            return None

        self.logger.info("Inline comment published")

    def publish_code_suggestions(self, suggestions: List[Dict[str, Any]]):
        """Publish code suggestions"""
        for suggestion in suggestions:
            body = suggestion.get("body", "")
            if not body:
                self.logger.error("No body provided for the suggestion")
                continue

            path = suggestion.get("relevant_file", "")
            new_position = suggestion.get("relevant_lines_start", 0)
            old_position = suggestion.get("relevant_lines_start", 0) if "original_suggestion" not in suggestion else suggestion["original_suggestion"].get("relevant_lines_start", 0)
            title_body = suggestion["original_suggestion"].get("suggestion_content", "") if "original_suggestion" in suggestion else ""
            payload = dict(body=body, path=path, old_position=old_position, new_position=new_position)
            if title_body:
                title_body = f"**Suggestion:** {title_body}"
                self.publish_inline_comments([payload], title_body)
            else:
                self.publish_inline_comments([payload])
    def add_eyes_reaction(self, issue_comment_id: int, disable_eyes: bool = False) -> Optional[int]:
        """Add an 'eyes' reaction to a comment"""
        try:
            if disable_eyes:
                return None

            comments = self.repo_api.list_all_comments(
                owner=self.owner,
                repo=self.repo,
                index=self.pr_number if self.enabled_pr else self.issue_number
            )

            comment_ids = [comment.id for comment in comments]
            if issue_comment_id not in comment_ids:
                self.logger.error(f"Comment ID {issue_comment_id} not found. Available IDs: {comment_ids}")
                return None

            response = self.repo_api.add_reaction_comment(
                owner=self.owner,
                repo=self.repo,
                comment_id=issue_comment_id,
                reaction="eyes"
            )

            if not response:
                self.logger.error("Failed to add eyes reaction")
                return None

            return response[0].id if isinstance(response, tuple) else response.id

        except ApiException as e:
            self.logger.error(f"Error adding eyes reaction: {e}")
            return None
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return None

    def remove_reaction(self, comment_id: int) -> None:
        """Remove a reaction from a comment"""
        try:
            response = self.repo_api.remove_reaction_comment(
                owner=self.owner,
                repo=self.repo,
                comment_id=comment_id
            )
            if not response:
                self.logger.error("Failed to remove reaction")
        except ApiException as e:
            self.logger.error(f"Error removing reaction: {e}")
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")

    def get_commit_messages(self) -> str:
        """Get commit messages for the PR"""
        max_tokens = get_settings().get("CONFIG.MAX_COMMITS_TOKENS", None)
        pr_commits = self.repo_api.get_pr_commits(
            owner=self.owner,
            repo=self.repo,
            pr_number=self.pr_number
        )

        if not pr_commits:
            self.logger.error("Failed to get commit messages")
            return ""

        try:
            commit_messages = [commit["commit"]["message"] for commit in pr_commits if commit]

            if not commit_messages:
                self.logger.error("No commit messages found")
                return ""

            commit_message = "".join(commit_messages)
            if max_tokens:
                commit_message = clip_tokens(commit_message, max_tokens)

            return commit_message
        except Exception as e:
            self.logger.error(f"Error processing commit messages: {str(e)}")
            return ""

    def _get_file_content_from_base(self, filename: str) -> str:
        return self.repo_api.get_file_content(
            owner=self.owner,
            repo=self.repo,
            commit_sha=self.base_sha,
            filepath=filename
        )

    def _get_file_content_from_latest_commit(self, filename: str) -> str:
        return self.repo_api.get_file_content(
            owner=self.owner,
            repo=self.repo,
            commit_sha=self.last_commit.sha,
            filepath=filename
        )

    def get_diff_files(self) -> List[FilePatchInfo]:
        """Get files that were modified in the PR"""
        if self.diff_files:
            return self.diff_files

        invalid_files_names = []
        counter_valid = 0
        diff_files = []
        for file in self.git_files:
            filename = file.get("filename")
            if not filename:
                continue

            if not is_valid_file(filename):
                invalid_files_names.append(filename)
                continue

            counter_valid += 1
            avoid_load = False
            patch = self.file_diffs.get(filename, "")
            head_file = ""
            base_file = ""

            if counter_valid >= MAX_FILES_ALLOWED_FULL and patch and not self.incremental.is_incremental:
                avoid_load = True
                if counter_valid == MAX_FILES_ALLOWED_FULL:
                    self.logger.info("Too many files in PR, will avoid loading full content for rest of files")

            if avoid_load:
                head_file = ""
            else:
                # Get file content from this PR's head
                head_file = self.file_contents.get(filename, "")

            if self.incremental.is_incremental and self.unreviewed_files_set:
                base_file = self._get_file_content_from_latest_commit(filename)
                self.unreviewed_files_set[filename] = patch
            else:
                if avoid_load:
                    base_file = ""
                else:
                    base_file = self._get_file_content_from_base(filename)

            num_plus_lines = file.get("additions", 0)
            num_minus_lines = file.get("deletions", 0)
            status = file.get("status", "")

            if status == 'added':
                edit_type = EDIT_TYPE.ADDED
            elif status == 'removed':
                edit_type = EDIT_TYPE.DELETED
            elif status == 'renamed':
                edit_type = EDIT_TYPE.RENAMED
            elif status == 'modified':
                edit_type = EDIT_TYPE.MODIFIED
            else:
                self.logger.error(f"Unknown edit type: {status}")
                edit_type = EDIT_TYPE.UNKNOWN

            file_patch_info = FilePatchInfo(
                base_file=base_file,
                head_file=head_file,
                patch=patch,
                filename=filename,
                num_minus_lines=num_minus_lines,
                num_plus_lines=num_plus_lines,
                edit_type=edit_type
            )
            diff_files.append(file_patch_info)

        if invalid_files_names:
            self.logger.info(f"Filtered out files with invalid extensions: {invalid_files_names}")

        self.diff_files = diff_files
        return diff_files
    def get_line_link(self, relevant_file, relevant_line_start, relevant_line_end=None) -> str:
        if relevant_line_start == -1:
            link = f"{self.base_url}/{self.owner}/{self.repo}/src/branch/{self.get_pr_branch()}/{relevant_file}"
        elif relevant_line_end:
            link = f"{self.base_url}/{self.owner}/{self.repo}/src/branch/{self.get_pr_branch()}/{relevant_file}#L{relevant_line_start}-L{relevant_line_end}"
        else:
            link = f"{self.base_url}/{self.owner}/{self.repo}/src/branch/{self.get_pr_branch()}/{relevant_file}#L{relevant_line_start}"

        self.logger.info(f"Generated link: {link}")
        return link

    def get_files(self) -> List[str]:
        """Get the filenames of all files in the PR"""
        return [file.get("filename", "") for file in self.git_files]

    def get_num_of_files(self) -> int:
        """Get number of files changed in the PR"""
        return len(self.git_files)

    def get_issue_comments(self) -> List[Dict[str, Any]]:
        """Get all comments in the PR"""
        index = self.issue_number if self.enabled_issue else self.pr_number
        comments = self.repo_api.list_all_comments(
            owner=self.owner,
            repo=self.repo,
            index=index
        )
        if not comments:
            self.logger.error("Failed to get comments")
            return []

        return comments

    def get_languages(self) -> Set[str]:
        """Get programming languages used in the repository"""
        languages = self.repo_api.get_languages(
            owner=self.owner,
            repo=self.repo
        )

        return languages

    def get_pr_branch(self) -> str:
        """Get the branch name of the PR"""
        if not self.pr:
            self.logger.error("Failed to get PR branch")
            return ""

        if not self.pr.head:
            self.logger.error("PR head not found")
            return ""

        return self.pr.head.ref if self.pr.head.ref else ""

    def get_pr_description_full(self) -> str:
        """Get full PR description with metadata"""
        if not self.pr:
            self.logger.error("Failed to get PR description")
            return ""

        return self.pr.body if self.pr.body else ""

    def get_pr_labels(self, update=False) -> List[str]:
        """Get labels assigned to the PR"""
        if not update:
            if not self.pr.labels:
                self.logger.error("Failed to get PR labels")
                return []
            return [label.name for label in self.pr.labels]

        labels = self.repo_api.get_issue_labels(
            owner=self.owner,
            repo=self.repo,
            issue_number=self.pr_number
        )
        if not labels:
            self.logger.error("Failed to get PR labels")
            return []

        return [label.name for label in labels]

    def get_repo_settings(self) -> str:
        """Get repository settings"""
        if not self.repo_settings:
            self.logger.error("Repository settings not found")
            return ""

        response = self.repo_api.get_file_content(
            owner=self.owner,
            repo=self.repo,
            commit_sha=self.sha,
            filepath=self.repo_settings
        )
        if not response:
            self.logger.error("Failed to get repository settings")
            return ""

        return response

    def get_user_id(self) -> str:
        """Get the ID of the PR author"""
        return f"{self.pr.user.id}" if self.pr else ""

    def is_supported(self, capability) -> bool:
        """Check whether the given capability is supported by this provider"""
        return True

    def publish_description(self, pr_title: str, pr_body: str) -> None:
        """Publish PR title and description"""
        response = self.repo_api.edit_pull_request(
            owner=self.owner,
            repo=self.repo,
            pr_number=self.pr_number if self.enabled_pr else self.issue_number,
            title=pr_title,
            body=pr_body
        )

        if not response:
            self.logger.error("Failed to publish PR description")
            return None

        self.logger.info("PR description published successfully")
        if self.enabled_pr:
            self.pr = self.repo_api.get_pull_request(
                owner=self.owner,
                repo=self.repo,
                pr_number=self.pr_number
            )

    def publish_labels(self, labels: List[int]) -> None:
        """Publish labels to the PR"""
        if not labels:
            self.logger.error("No labels provided to publish")
            return None

        response = self.repo_api.add_labels(
            owner=self.owner,
            repo=self.repo,
            issue_number=self.pr_number if self.enabled_pr else self.issue_number,
            labels=labels
        )

        if response:
            self.logger.info("Labels added successfully")

    def remove_comment(self, comment) -> None:
        """Remove a specific comment"""
        if not comment:
            return

        try:
            comment_id = comment.get("comment_id") if isinstance(comment, dict) else comment.id
            if not comment_id:
                self.logger.error("Comment ID not found")
                return None
            self.repo_api.remove_comment(
                owner=self.owner,
                repo=self.repo,
                comment_id=comment_id
            )

            if self.comments_list and comment in self.comments_list:
                self.comments_list.remove(comment)

            self.logger.info(f"Comment removed successfully: {comment}")
        except ApiException as e:
            self.logger.error(f"Error removing comment: {e}")
            raise e

    def remove_initial_comment(self) -> None:
        """Remove the initial (temporary) comments"""
        for comment in self.comments_list:
            try:
                if not comment.get("is_temporary"):
                    continue
                self.remove_comment(comment)
            except Exception as e:
                self.logger.error(f"Error removing comment: {e}")
                continue
            self.logger.info(f"Removed initial comment: {comment.get('comment_id')}")
class RepoApi(giteapy.RepositoryApi):
    def __init__(self, client: giteapy.ApiClient):
        self.repository = giteapy.RepositoryApi(client)
        self.issue = giteapy.IssueApi(client)
        self.logger = get_logger()
        super().__init__(client)

    def create_inline_comment(self, owner: str, repo: str, pr_number: int, body: str, commit_id: str, comments: List[Dict[str, Any]]) -> None:
        body = {
            "body": body,
            "comments": comments,
            "commit_id": commit_id,
        }
        return self.api_client.call_api(
            '/repos/{owner}/{repo}/pulls/{pr_number}/reviews',
            'POST',
            path_params={'owner': owner, 'repo': repo, 'pr_number': pr_number},
            body=body,
            response_type='Repository',
            auth_settings=['AuthorizationHeaderToken']
        )

    def create_comment(self, owner: str, repo: str, index: int, comment: str):
        body = {
            "body": comment
        }
        return self.issue.issue_create_comment(
            owner=owner,
            repo=repo,
            index=index,
            body=body
        )

    def edit_comment(self, owner: str, repo: str, comment_id: int, comment: str):
        body = {
            "body": comment
        }
        return self.issue.issue_edit_comment(
            owner=owner,
            repo=repo,
            id=comment_id,
            body=body
        )

    def remove_comment(self, owner: str, repo: str, comment_id: int):
        return self.issue.issue_delete_comment(
            owner=owner,
            repo=repo,
            id=comment_id
        )

    def list_all_comments(self, owner: str, repo: str, index: int):
        return self.issue.issue_get_comments(
            owner=owner,
            repo=repo,
            index=index
        )

    def get_pull_request_diff(self, owner: str, repo: str, pr_number: int) -> str:
        """Get the diff content of a pull request using a direct API call"""
        try:
            token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
            url = f'/repos/{owner}/{repo}/pulls/{pr_number}.diff'
            if token:
                url = f'{url}?token={token}'

            response = self.api_client.call_api(
                url,
                'GET',
                path_params={},
                response_type=None,
                _return_http_data_only=False,
                _preload_content=False
            )

            if hasattr(response, 'data'):
                raw_data = response.data.read()
                return raw_data.decode('utf-8')
            elif isinstance(response, tuple):
                raw_data = response[0].read()
                return raw_data.decode('utf-8')
            else:
                error_msg = f"Unexpected response format received from API: {type(response)}"
                self.logger.error(error_msg)
                raise RuntimeError(error_msg)

        except ApiException as e:
            self.logger.error(f"Error getting diff: {str(e)}")
            raise e
        except Exception as e:
            self.logger.error(f"Unexpected error: {str(e)}")
            raise e

    def get_pull_request(self, owner: str, repo: str, pr_number: int):
        """Get pull request details, including the description"""
        return self.repository.repo_get_pull_request(
            owner=owner,
            repo=repo,
            index=pr_number
        )

    def edit_pull_request(self, owner: str, repo: str, pr_number: int, title: str, body: str):
        """Edit the pull request title and description"""
        body = {
            "body": body,
            "title": title
        }
        return self.repository.repo_edit_pull_request(
            owner=owner,
            repo=repo,
            index=pr_number,
            body=body
        )

    def get_change_file_pull_request(self, owner: str, repo: str, pr_number: int):
        """Get the files changed in the pull request"""
        try:
            token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
            url = f'/repos/{owner}/{repo}/pulls/{pr_number}/files'
            if token:
                url = f'{url}?token={token}'

            response = self.api_client.call_api(
                url,
                'GET',
                path_params={},
                response_type=None,
                _return_http_data_only=False,
                _preload_content=False
            )

            if hasattr(response, 'data'):
                raw_data = response.data.read()
                diff_content = raw_data.decode('utf-8')
                return json.loads(diff_content) if isinstance(diff_content, str) else diff_content
            elif isinstance(response, tuple):
                raw_data = response[0].read()
                diff_content = raw_data.decode('utf-8')
                return json.loads(diff_content) if isinstance(diff_content, str) else diff_content

            return []

        except ApiException as e:
            self.logger.error(f"Error getting changed files: {e}")
            return []
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return []

    def get_languages(self, owner: str, repo: str):
        """Get programming languages used in the repository"""
        try:
            token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
            url = f'/repos/{owner}/{repo}/languages'
            if token:
                url = f'{url}?token={token}'

            response = self.api_client.call_api(
                url,
                'GET',
                path_params={},
                response_type=None,
                _return_http_data_only=False,
                _preload_content=False
            )

            if hasattr(response, 'data'):
                raw_data = response.data.read()
                return json.loads(raw_data.decode('utf-8'))
            elif isinstance(response, tuple):
                raw_data = response[0].read()
                return json.loads(raw_data.decode('utf-8'))

            return {}

        except ApiException as e:
            self.logger.error(f"Error getting languages: {e}")
            return {}
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return {}

    def get_file_content(self, owner: str, repo: str, commit_sha: str, filepath: str) -> str:
        """Get raw file content from a specific commit"""
        try:
            token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
            url = f'/repos/{owner}/{repo}/raw/{filepath}'
            if token:
                url = f'{url}?token={token}&ref={commit_sha}'

            response = self.api_client.call_api(
                url,
                'GET',
                path_params={},
                response_type=None,
                _return_http_data_only=False,
                _preload_content=False
            )

            if hasattr(response, 'data'):
                raw_data = response.data.read()
                return raw_data.decode('utf-8')
            elif isinstance(response, tuple):
                raw_data = response[0].read()
                return raw_data.decode('utf-8')

            return ""

        except ApiException as e:
            self.logger.error(f"Error getting file: {filepath}, content: {e}")
            return ""
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return ""

    def get_issue_labels(self, owner: str, repo: str, issue_number: int):
        """Get labels assigned to the issue"""
        return self.issue.issue_get_labels(
            owner=owner,
            repo=repo,
            index=issue_number
        )

    def list_all_commits(self, owner: str, repo: str):
        return self.repository.repo_get_all_commits(
            owner=owner,
            repo=repo
        )

    def add_reviewer(self, owner: str, repo: str, pr_number: int, reviewers: List[str]):
        body = {
            "reviewers": reviewers
        }
        return self.api_client.call_api(
            '/repos/{owner}/{repo}/pulls/{pr_number}/requested_reviewers',
            'POST',
            path_params={'owner': owner, 'repo': repo, 'pr_number': pr_number},
            body=body,
            response_type='Repository',
            auth_settings=['AuthorizationHeaderToken']
        )

    def add_reaction_comment(self, owner: str, repo: str, comment_id: int, reaction: str):
        body = {
            "content": reaction
        }
        return self.api_client.call_api(
            '/repos/{owner}/{repo}/issues/comments/{id}/reactions',
            'POST',
            path_params={'owner': owner, 'repo': repo, 'id': comment_id},
            body=body,
            response_type='Repository',
            auth_settings=['AuthorizationHeaderToken']
        )

    def remove_reaction_comment(self, owner: str, repo: str, comment_id: int):
        return self.api_client.call_api(
            '/repos/{owner}/{repo}/issues/comments/{id}/reactions',
            'DELETE',
            path_params={'owner': owner, 'repo': repo, 'id': comment_id},
            response_type='Repository',
            auth_settings=['AuthorizationHeaderToken']
        )

    def add_labels(self, owner: str, repo: str, issue_number: int, labels: List[int]):
        body = {
            "labels": labels
        }
        return self.issue.issue_add_label(
            owner=owner,
            repo=repo,
            index=issue_number,
            body=body
        )

    def get_pr_commits(self, owner: str, repo: str, pr_number: int):
        """Get all commits in a pull request"""
        try:
            token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
            url = f'/repos/{owner}/{repo}/pulls/{pr_number}/commits'
            if token:
                url = f'{url}?token={token}'

            response = self.api_client.call_api(
                url,
                'GET',
                path_params={},
                response_type=None,
                _return_http_data_only=False,
                _preload_content=False
            )

            if hasattr(response, 'data'):
                raw_data = response.data.read()
                commits_data = json.loads(raw_data.decode('utf-8'))
                return commits_data
            elif isinstance(response, tuple):
                raw_data = response[0].read()
                commits_data = json.loads(raw_data.decode('utf-8'))
                return commits_data

            return []

        except ApiException as e:
            self.logger.error(f"Error getting PR commits: {e}")
            return []
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return []
@ -96,7 +96,7 @@ class GithubProvider(GitProvider):
            parsed_url = urlparse(given_url)
            repo_path = (parsed_url.path.split('.git')[0])[1:]  # /<owner>/<repo>.git -> <owner>/<repo>
            if not repo_path:
                get_logger().error(f"url is neither an issues url nor a pr url nor a valid git url: {given_url}. Returning empty result.")
                get_logger().error(f"url is neither an issues url nor a PR url nor a valid git url: {given_url}. Returning empty result.")
                return ""
            return repo_path
        except Exception as e:
128  pr_agent/servers/gitea_app.py  Normal file
@ -0,0 +1,128 @@
import asyncio
import copy
import os
from typing import Any, Dict

from fastapi import APIRouter, FastAPI, HTTPException, Request, Response
from starlette.background import BackgroundTasks
from starlette.middleware import Middleware
from starlette_context import context
from starlette_context.middleware import RawContextMiddleware

from pr_agent.agent.pr_agent import PRAgent
from pr_agent.config_loader import get_settings, global_settings
from pr_agent.log import LoggingFormat, get_logger, setup_logger
from pr_agent.servers.utils import verify_signature

# Setup logging and router
setup_logger(fmt=LoggingFormat.JSON, level=get_settings().get("CONFIG.LOG_LEVEL", "DEBUG"))
router = APIRouter()


@router.post("/api/v1/gitea_webhooks")
async def handle_gitea_webhooks(background_tasks: BackgroundTasks, request: Request, response: Response):
    """Handle incoming Gitea webhook requests"""
    get_logger().debug("Received a Gitea webhook")

    body = await get_body(request)

    # Set context for the request
    context["settings"] = copy.deepcopy(global_settings)
    context["git_provider"] = {}

    # Handle the webhook in background
    background_tasks.add_task(handle_request, body, event=request.headers.get("X-Gitea-Event", None))
    return {}


async def get_body(request: Request):
    """Parse and verify webhook request body"""
    try:
        body = await request.json()
    except Exception as e:
        get_logger().error("Error parsing request body", artifact={'error': e})
        raise HTTPException(status_code=400, detail="Error parsing request body") from e

    # Verify webhook signature
    webhook_secret = getattr(get_settings().gitea, 'webhook_secret', None)
    if webhook_secret:
        body_bytes = await request.body()
        signature_header = request.headers.get('x-gitea-signature', None)
        if not signature_header:
            get_logger().error("Missing signature header")
            raise HTTPException(status_code=400, detail="Missing signature header")

        try:
            verify_signature(body_bytes, webhook_secret, f"sha256={signature_header}")
        except Exception as ex:
            get_logger().error(f"Invalid signature: {ex}")
            raise HTTPException(status_code=401, detail="Invalid signature")

    return body


async def handle_request(body: Dict[str, Any], event: str):
    """Process Gitea webhook events"""
    action = body.get("action")
    if not action:
        get_logger().debug("No action found in request body")
        return {}

    agent = PRAgent()

    # Handle different event types
    if event == "pull_request":
        if action in ["opened", "reopened", "synchronized"]:
            await handle_pr_event(body, event, action, agent)
    elif event == "issue_comment":
        if action == "created":
            await handle_comment_event(body, event, action, agent)

    return {}


async def handle_pr_event(body: Dict[str, Any], event: str, action: str, agent: PRAgent):
    """Handle pull request events"""
    pr = body.get("pull_request", {})
    if not pr:
        return

    api_url = pr.get("url")
    if not api_url:
        return

    # Handle PR based on action
    if action in ["opened", "reopened"]:
        commands = get_settings().get("gitea.pr_commands", [])
        for command in commands:
            await agent.handle_request(api_url, command)
    elif action == "synchronized":
        # Handle push to PR
        await agent.handle_request(api_url, "/review --incremental")


async def handle_comment_event(body: Dict[str, Any], event: str, action: str, agent: PRAgent):
    """Handle comment events"""
    comment = body.get("comment", {})
    if not comment:
        return

    comment_body = comment.get("body", "")
    if not comment_body or not comment_body.startswith("/"):
        return

    pr_url = body.get("pull_request", {}).get("url")
    if not pr_url:
        return

    await agent.handle_request(pr_url, comment_body)


# FastAPI app setup
middleware = [Middleware(RawContextMiddleware)]
app = FastAPI(middleware=middleware)
app.include_router(router)


def start():
    """Start the Gitea webhook server"""
    port = int(os.environ.get("PORT", "3000"))
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=port)


if __name__ == "__main__":
    start()
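The signature check in `get_body` above expects the `X-Gitea-Signature` header to carry an HMAC-SHA256 hex digest of the raw request body, which the server re-prefixes with `sha256=` before calling `verify_signature`. A minimal sketch of how a client or a test could produce that header value — the secret and payload here are made-up placeholders, not values from the repo:

```python
import hashlib
import hmac
import json

def sign_gitea_payload(body_bytes: bytes, secret: str) -> str:
    # HMAC-SHA256 over the raw request body, hex-encoded --
    # the digest form the server-side verification compares against.
    return hmac.new(secret.encode(), body_bytes, hashlib.sha256).hexdigest()

payload = json.dumps({"action": "opened"}).encode()
digest = sign_gitea_payload(payload, "example-secret")  # placeholder secret
header_value = f"sha256={digest}"
```

Note that signing must happen over the exact bytes sent on the wire; re-serializing the JSON with different key order or whitespace yields a different digest.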
@ -68,6 +68,11 @@ webhook_secret = "<WEBHOOK SECRET>" # Optional, may be commented out.
personal_access_token = ""
shared_secret = "" # webhook secret

[gitea]
# Gitea personal access token
personal_access_token=""
webhook_secret="" # webhook secret

[bitbucket]
# For Bitbucket authentication
auth_type = "bearer" # "bearer" or "basic"
@ -111,4 +116,9 @@ api_base = "" # Your Azure OpenAI service base URL (e.g., https://openai.xyz.co

[openrouter]
key = ""
api_base = ""

[aws]
AWS_ACCESS_KEY_ID = ""
AWS_SECRET_ACCESS_KEY = ""
AWS_REGION_NAME = ""
@ -64,6 +64,7 @@ reasoning_effort = "medium" # "low", "medium", "high"
enable_auto_approval=false # Set to true to enable auto-approval of PRs under certain conditions
auto_approve_for_low_review_effort=-1 # -1 to disable, [1-5] to set the threshold for auto-approval
auto_approve_for_no_suggestions=false # If true, the PR will be auto-approved if there are no suggestions
ensure_ticket_compliance=false # Set to true to disable auto-approval of PRs if the ticket is not compliant
# extended thinking for Claude reasoning models
enable_claude_extended_thinking = false # Set to true to enable extended thinking feature
extended_thinking_budget_tokens = 2048
@ -81,6 +82,7 @@ require_ticket_analysis_review=true
# general options
persistent_comment=true
extra_instructions = ""
num_max_findings = 3
final_update_message = true
# review labels
enable_review_labels_security=true
@ -102,6 +104,7 @@ enable_pr_type=true
final_update_message = true
enable_help_text=false
enable_help_comment=true
enable_pr_diagram=false # adds a section with a diagram of the PR changes
# describe as comment
publish_description_as_comment=false
publish_description_as_comment_persistent=true
@ -278,6 +281,15 @@ push_commands = [
    "/review",
]

[gitea_app]
url = "https://gitea.com"
handle_push_trigger = false
pr_commands = [
    "/describe",
    "/review",
    "/improve",
]

[bitbucket_app]
pr_commands = [
    "/describe --pr_description.final_update_message=false",
@ -46,6 +46,9 @@ class PRDescription(BaseModel):
    type: List[PRType] = Field(description="one or more types that describe the PR content. Return the label member value (e.g. 'Bug fix', not 'bug_fix')")
    description: str = Field(description="summarize the PR changes in up to four bullet points, each up to 8 words. For large PRs, add sub-bullets if needed. Order bullets by importance, with each bullet highlighting a key change group.")
    title: str = Field(description="a concise and descriptive title that captures the PR's main theme")
{%- if enable_pr_diagram %}
    changes_diagram: str = Field(description="a horizontal diagram that represents the main PR changes, in the format of a valid mermaid LR flowchart. The diagram should be concise and easy to read. Leave empty if no diagram is relevant. To create robust Mermaid diagrams, follow this two-step process: (1) Declare the nodes: nodeID["node description"]. (2) Then define the links: nodeID1 -- "link text" --> nodeID2 ")
{%- endif %}
{%- if enable_semantic_files_types %}
    pr_files: List[FileDescription] = Field(max_items=20, description="a list of all the files that were changed in the PR, and summary of their changes. Each file must be analyzed regardless of change size.")
{%- endif %}
@ -62,6 +65,13 @@ description: |
  ...
title: |
  ...
{%- if enable_pr_diagram %}
changes_diagram: |
  ```mermaid
  flowchart LR
    ...
  ```
{%- endif %}
{%- if enable_semantic_files_types %}
pr_files:
- filename: |
@ -143,6 +153,13 @@ description: |
  ...
title: |
  ...
{%- if enable_pr_diagram %}
changes_diagram: |
  ```mermaid
  flowchart LR
    ...
  ```
{%- endif %}
{%- if enable_semantic_files_types %}
pr_files:
- filename: |
@ -164,4 +181,4 @@ pr_files:

Response (should be a valid YAML, and nothing else):
```yaml
"""
@ -98,7 +98,7 @@ class Review(BaseModel):
{%- if question_str %}
    insights_from_user_answers: str = Field(description="shortly summarize the insights you gained from the user's answers to the questions")
{%- endif %}
    key_issues_to_review: List[KeyIssuesComponentLink] = Field("A short and diverse list (0-3 issues) of high-priority bugs, problems or performance concerns introduced in the PR code, which the PR reviewer should further focus on and validate during the review process.")
    key_issues_to_review: List[KeyIssuesComponentLink] = Field("A short and diverse list (0-{{ num_max_findings }} issues) of high-priority bugs, problems or performance concerns introduced in the PR code, which the PR reviewer should further focus on and validate during the review process.")
{%- if require_security_review %}
    security_concerns: str = Field(description="Does this PR code introduce possible vulnerabilities such as exposure of sensitive information (e.g., API keys, secrets, passwords), or security concerns like SQL injection, XSS, CSRF, and others ? Answer 'No' (without explaining why) if there are no possible issues. If there are security concerns or issues, start your answer with a short header, such as: 'Sensitive information exposure: ...', 'SQL injection: ...' etc. Explain your answer. Be specific and give examples if possible")
{%- endif %}
@ -72,7 +72,8 @@ class PRDescription:
            "enable_semantic_files_types": get_settings().pr_description.enable_semantic_files_types,
            "related_tickets": "",
            "include_file_summary_changes": len(self.git_provider.get_diff_files()) <= self.COLLAPSIBLE_FILE_LIST_THRESHOLD,
            'duplicate_prompt_examples': get_settings().config.get('duplicate_prompt_examples', False),
            "duplicate_prompt_examples": get_settings().config.get("duplicate_prompt_examples", False),
            "enable_pr_diagram": get_settings().pr_description.get("enable_pr_diagram", False),
        }

        self.user_description = self.git_provider.get_user_description()
@ -199,7 +200,7 @@ class PRDescription:

    async def _prepare_prediction(self, model: str) -> None:
        if get_settings().pr_description.use_description_markers and 'pr_agent:' not in self.user_description:
            get_logger().info("Markers were enabled, but user description does not contain markers. skipping AI prediction")
            get_logger().info("Markers were enabled, but user description does not contain markers. Skipping AI prediction")
            return None

        large_pr_handling = get_settings().pr_description.enable_large_pr_handling and "pr_description_only_files_prompts" in get_settings()
@ -456,6 +457,12 @@ class PRDescription:
            self.data['labels'] = self.data.pop('labels')
        if 'description' in self.data:
            self.data['description'] = self.data.pop('description')
        if 'changes_diagram' in self.data:
            changes_diagram = self.data.pop('changes_diagram').strip()
            if changes_diagram.startswith('```'):
                if not changes_diagram.endswith('```'):  # fallback for missing closing
                    changes_diagram += '\n```'
                self.data['changes_diagram'] = '\n' + changes_diagram
        if 'pr_files' in self.data:
            self.data['pr_files'] = self.data.pop('pr_files')
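The `changes_diagram` fallback above can be seen in isolation: if the model opened a Mermaid fence but never closed it, the missing closing fence is appended. A small self-contained sketch of that logic — the helper name is mine, not the project's, and the fence string is built programmatically only to avoid clashing with this code block's own delimiters:

```python
FENCE = "`" * 3  # a literal triple-backtick

def close_diagram_fence(changes_diagram: str) -> str:
    # Mirrors the fallback: a diagram that starts with a fence
    # but does not end with one gets the closing fence appended.
    changes_diagram = changes_diagram.strip()
    if changes_diagram.startswith(FENCE) and not changes_diagram.endswith(FENCE):
        changes_diagram += "\n" + FENCE
    return "\n" + changes_diagram

fixed = close_diagram_fence(FENCE + "mermaid\nflowchart LR\nA --> B")
```

A diagram that is already properly fenced (or not fenced at all) passes through with only the leading newline added.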

@ -707,7 +714,7 @@ class PRDescription:
            pr_body += """</tr></tbody></table>"""

        except Exception as e:
            get_logger().error(f"Error processing pr files to markdown {self.pr_id}: {str(e)}")
            get_logger().error(f"Error processing PR files to markdown {self.pr_id}: {str(e)}")
            pass
        return pr_body, pr_comments

@ -820,4 +827,4 @@ def replace_code_tags(text):
    parts = text.split('`')
    for i in range(1, len(parts), 2):
        parts[i] = '<code>' + parts[i] + '</code>'
    return ''.join(parts)
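The `replace_code_tags` helper shown in the hunk above relies on `str.split('`')` producing alternating plain/code segments, with the odd-indexed parts being the text that sat between backticks. A quick check of that behavior:

```python
def replace_code_tags(text):
    # Odd-indexed parts are the segments that were between backticks.
    parts = text.split('`')
    for i in range(1, len(parts), 2):
        parts[i] = '<code>' + parts[i] + '</code>'
    return ''.join(parts)

result = replace_code_tags('run `pip install pr-agent` first')
# → 'run <code>pip install pr-agent</code> first'
```

With an unbalanced (odd) number of backticks, the trailing segment is still treated as code, which is a harmless degradation for this use case.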
@ -81,6 +81,7 @@ class PRReviewer:
            "language": self.main_language,
            "diff": "",  # empty diff for initial calculation
            "num_pr_files": self.git_provider.get_num_of_files(),
            "num_max_findings": get_settings().pr_reviewer.num_max_findings,
            "require_score": get_settings().pr_reviewer.require_score_review,
            "require_tests": get_settings().pr_reviewer.require_tests_review,
            "require_estimate_effort_to_review": get_settings().pr_reviewer.require_estimate_effort_to_review,
@ -316,7 +317,9 @@ class PRReviewer:
            get_logger().exception(f"Failed to remove previous review comment, error: {e}")

    def _can_run_incremental_review(self) -> bool:
        """Checks if we can run incremental review according the various configurations and previous review"""
        """
        Checks if we can run incremental review according the various configurations and previous review.
        """
        # checking if running is auto mode but there are no new commits
        if self.is_auto and not self.incremental.first_new_commit_sha:
            get_logger().info(f"Incremental review is enabled for {self.pr_url} but there are no new commits")
@ -1,5 +1,5 @@
aiohttp==3.9.5
anthropic>=0.48
anthropic>=0.52.0
#anthropic[vertex]==0.47.1
atlassian-python-api==3.41.4
azure-devops==7.1.0b3
@ -13,7 +13,7 @@ google-cloud-aiplatform==1.38.0
google-generativeai==0.8.3
google-cloud-storage==2.10.0
Jinja2==3.1.2
litellm==1.69.3
litellm==1.70.4
loguru==0.7.2
msrest==0.7.1
openai>=1.55.3
@ -31,6 +31,7 @@ gunicorn==22.0.0
pytest-cov==5.0.0
pydantic==2.8.2
html2text==2024.2.26
giteapy==1.0.8
# Uncomment the following lines to enable the 'similar issue' tool
# pinecone-client
# pinecone-datasets @ git+https://github.com/mrT23/pinecone-datasets.git@main
90  tests/e2e_tests/langchain_ai_handler.py  Normal file
@ -0,0 +1,90 @@
import asyncio
import os
import time

from pr_agent.algo.ai_handlers.langchain_ai_handler import LangChainOpenAIHandler
from pr_agent.config_loader import get_settings


def check_settings():
    print('Checking settings...')
    settings = get_settings()

    # Check OpenAI settings
    if not hasattr(settings, 'openai'):
        print('OpenAI settings not found')
        return False

    if not hasattr(settings.openai, 'key'):
        print('OpenAI API key not found')
        return False

    print('OpenAI API key found')
    return True


async def measure_performance(handler, num_requests=3):
    print(f'\nRunning performance test with {num_requests} requests...')
    start_time = time.time()

    # Create multiple requests
    tasks = [
        handler.chat_completion(
            model='gpt-3.5-turbo',
            system='You are a helpful assistant',
            user=f'Test message {i}',
            temperature=0.2
        ) for i in range(num_requests)
    ]

    # Execute requests concurrently
    responses = await asyncio.gather(*tasks)

    end_time = time.time()
    total_time = end_time - start_time
    avg_time = total_time / num_requests

    print('Performance results:')
    print(f'Total time: {total_time:.2f} seconds')
    print(f'Average time per request: {avg_time:.2f} seconds')
    print(f'Requests per second: {num_requests/total_time:.2f}')

    return responses


async def test():
    print('Starting test...')

    # Check settings first
    if not check_settings():
        print('Please set up your environment variables or configuration file')
        print('Required: OPENAI_API_KEY')
        return

    try:
        handler = LangChainOpenAIHandler()
        print('Handler created')

        # Basic functionality test
        response = await handler.chat_completion(
            model='gpt-3.5-turbo',
            system='You are a helpful assistant',
            user='Hello',
            temperature=0.2,
            img_path='test.jpg'
        )
        print('Response:', response)

        # Performance test
        await measure_performance(handler)

    except Exception as e:
        print('Error:', str(e))
        print('Error type:', type(e))
        print('Error details:', e.__dict__ if hasattr(e, '__dict__') else 'No additional details')


if __name__ == '__main__':
    print('Environment variables:')
    print('OPENAI_API_KEY:', 'Set' if os.getenv('OPENAI_API_KEY') else 'Not set')
    print('OPENAI_API_TYPE:', os.getenv('OPENAI_API_TYPE', 'Not set'))
    print('OPENAI_API_BASE:', os.getenv('OPENAI_API_BASE', 'Not set'))

    asyncio.run(test())
185  tests/e2e_tests/test_gitea_app.py  Normal file
@ -0,0 +1,185 @@
import os
import time

import requests
from datetime import datetime

from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger, setup_logger
from tests.e2e_tests.e2e_utils import (FILE_PATH,
                                       IMPROVE_START_WITH_REGEX_PATTERN,
                                       NEW_FILE_CONTENT, NUM_MINUTES,
                                       PR_HEADER_START_WITH, REVIEW_START_WITH)

log_level = os.environ.get("LOG_LEVEL", "INFO")
setup_logger(log_level)
logger = get_logger()


def test_e2e_run_gitea_app():
    repo_name = 'pr-agent-tests'
    owner = 'codiumai'
    base_branch = "main"
    new_branch = f"gitea_app_e2e_test-{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}"
    get_settings().config.git_provider = "gitea"

    headers = None
    pr_number = None
    gitea_url = None  # initialized so the cleanup block cannot hit a NameError

    try:
        gitea_url = get_settings().get("GITEA.URL", None)
        gitea_token = get_settings().get("GITEA.TOKEN", None)

        if not gitea_url:
            logger.error("GITEA.URL is not set in the configuration")
            logger.info("Please set GITEA.URL in .env file or environment variables")
            assert False, "GITEA.URL is not set in the configuration"

        if not gitea_token:
            logger.error("GITEA.TOKEN is not set in the configuration")
            logger.info("Please set GITEA.TOKEN in .env file or environment variables")
            assert False, "GITEA.TOKEN is not set in the configuration"

        headers = {
            'Authorization': f'token {gitea_token}',
            'Content-Type': 'application/json',
            'Accept': 'application/json'
        }

        logger.info(f"Creating a new branch {new_branch} from {base_branch}")

        response = requests.get(
            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/branches/{base_branch}",
            headers=headers
        )
        response.raise_for_status()
        base_branch_data = response.json()
        base_commit_sha = base_branch_data['commit']['id']

        branch_data = {
            'ref': f"refs/heads/{new_branch}",
            'sha': base_commit_sha
        }
        response = requests.post(
            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/git/refs",
            headers=headers,
            json=branch_data
        )
        response.raise_for_status()

        logger.info(f"Updating file {FILE_PATH} in branch {new_branch}")

        import base64
        file_content_encoded = base64.b64encode(NEW_FILE_CONTENT.encode()).decode()

        try:
            response = requests.get(
                f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/contents/{FILE_PATH}?ref={new_branch}",
                headers=headers
            )
            response.raise_for_status()
            existing_file = response.json()
            file_sha = existing_file.get('sha')

            file_data = {
                'message': 'Update cli_pip.py',
                'content': file_content_encoded,
                'sha': file_sha,
                'branch': new_branch
            }
        except Exception:
            file_data = {
                'message': 'Add cli_pip.py',
                'content': file_content_encoded,
                'branch': new_branch
            }

        response = requests.put(
            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/contents/{FILE_PATH}",
            headers=headers,
            json=file_data
        )
        response.raise_for_status()

        logger.info(f"Creating a pull request from {new_branch} to {base_branch}")
        pr_data = {
            'title': f'Test PR from {new_branch}',
            'body': 'update cli_pip.py',
            'head': new_branch,
            'base': base_branch
        }
        response = requests.post(
            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/pulls",
            headers=headers,
            json=pr_data
        )
        response.raise_for_status()
        pr = response.json()
        pr_number = pr['number']

        for i in range(NUM_MINUTES):
            logger.info("Waiting for the PR to get all the tool results...")
            time.sleep(60)

            response = requests.get(
                f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/issues/{pr_number}/comments",
                headers=headers
            )
            response.raise_for_status()
            comments = response.json()

            if len(comments) >= 5:
                valid_review = False
                for comment in comments:
                    if comment['body'].startswith('## PR Reviewer Guide 🔍'):
                        valid_review = True
                        break
                if valid_review:
                    break
                else:
                    logger.error("REVIEW feedback is invalid")
                    raise Exception("REVIEW feedback is invalid")
            else:
                logger.info(f"Waiting for the PR to get all the tool results. {i + 1} minute(s) passed")
        else:
            assert False, f"After {NUM_MINUTES} minutes, the PR did not get all the tool results"

        logger.info(f"Cleaning up: closing PR and deleting branch {new_branch}")

        close_data = {'state': 'closed'}
        response = requests.patch(
            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/pulls/{pr_number}",
            headers=headers,
            json=close_data
        )
        response.raise_for_status()

        response = requests.delete(
            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/git/refs/heads/{new_branch}",
            headers=headers
        )
        response.raise_for_status()

        logger.info("Succeeded in running e2e test for Gitea app on the PR")
    except Exception as e:
        logger.error(f"Failed to run e2e test for Gitea app: {e}")
        raise
    finally:
        try:
            if headers is None or gitea_url is None:
                return

            if pr_number is not None:
                requests.patch(
                    f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/pulls/{pr_number}",
                    headers=headers,
                    json={'state': 'closed'}
                )

            requests.delete(
                f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/git/refs/heads/{new_branch}",
                headers=headers
            )
        except Exception as cleanup_error:
            logger.error(f"Failed to clean up after test: {cleanup_error}")


if __name__ == '__main__':
    test_e2e_run_gitea_app()
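The Gitea contents API used in the test above expects file bodies base64-encoded in the JSON `content` field. The round-trip, on a made-up snippet standing in for `NEW_FILE_CONTENT`:

```python
import base64

content = "print('hello from cli_pip')\n"  # stand-in for NEW_FILE_CONTENT
encoded = base64.b64encode(content.encode()).decode()  # goes into the JSON 'content' field
decoded = base64.b64decode(encoded).decode()           # what the server stores
```

The `.encode()`/`.decode()` pairs matter: `b64encode` operates on bytes, while the JSON payload needs a plain string.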
@ -1,13 +1,302 @@
|
||||
|
||||
# Generated by CodiumAI
|
||||
|
||||
import pytest
|
||||
|
||||
from unittest.mock import patch, MagicMock
|
||||
from pr_agent.algo.utils import clip_tokens
|
||||
from pr_agent.algo.token_handler import TokenEncoder
|
||||
|
||||
|
||||
class TestClipTokens:
|
||||
def test_clip(self):
|
||||
"""Comprehensive test suite for the clip_tokens function."""
|
||||
|
||||
def test_empty_input_text(self):
|
||||
"""Test that empty input returns empty string."""
|
||||
assert clip_tokens("", 10) == ""
|
||||
assert clip_tokens(None, 10) is None
|
||||
|
||||
def test_text_under_token_limit(self):
|
||||
"""Test that text under the token limit is returned unchanged."""
|
||||
text = "Short text"
|
||||
max_tokens = 100
|
||||
result = clip_tokens(text, max_tokens)
|
||||
assert result == text
|
||||
|
||||
def test_text_exactly_at_token_limit(self):
|
||||
"""Test text that is exactly at the token limit."""
|
||||
text = "This is exactly at the limit"
|
||||
# Mock the token encoder to return exact limit
|
||||
with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
|
||||
mock_tokenizer = MagicMock()
|
||||
mock_tokenizer.encode.return_value = [1] * 10 # Exactly 10 tokens
|
||||
mock_encoder.return_value = mock_tokenizer
|
||||
|
||||
result = clip_tokens(text, 10)
|
||||
assert result == text
|
||||
|
||||
def test_text_over_token_limit_with_three_dots(self):
|
||||
"""Test text over token limit with three dots addition."""
|
||||
text = "This is a longer text that should be clipped when it exceeds the token limit"
|
||||
max_tokens = 5
|
||||
|
||||
with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
|
||||
mock_tokenizer = MagicMock()
|
||||
mock_tokenizer.encode.return_value = [1] * 20 # 20 tokens
|
||||
mock_encoder.return_value = mock_tokenizer
|
||||
|
||||
result = clip_tokens(text, max_tokens)
|
||||
assert result.endswith("\n...(truncated)")
|
||||
assert len(result) < len(text)
|
||||
|
||||
def test_text_over_token_limit_without_three_dots(self):
|
||||
"""Test text over token limit without three dots addition."""
|
||||
text = "This is a longer text that should be clipped"
|
||||
max_tokens = 5
|
||||
|
||||
with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
|
||||
mock_tokenizer = MagicMock()
|
||||
mock_tokenizer.encode.return_value = [1] * 20 # 20 tokens
|
||||
mock_encoder.return_value = mock_tokenizer
|
||||
|
||||
result = clip_tokens(text, max_tokens, add_three_dots=False)
|
||||
assert not result.endswith("\n...(truncated)")
|
||||
assert len(result) < len(text)
|
||||
|
||||
def test_negative_max_tokens(self):
|
||||
"""Test that negative max_tokens returns empty string."""
|
||||
text = "Some text"
|
||||
result = clip_tokens(text, -1)
|
||||
assert result == ""
|
||||
|
||||
result = clip_tokens(text, -100)
|
||||
assert result == ""
|
||||
|
||||
def test_zero_max_tokens(self):
|
||||
"""Test that zero max_tokens returns empty string."""
|
||||
text = "Some text"
|
||||
result = clip_tokens(text, 0)
|
||||
assert result == ""
|
||||
|
||||
def test_delete_last_line_functionality(self):
|
||||
"""Test the delete_last_line parameter functionality."""
|
||||
text = "Line 1\nLine 2\nLine 3\nLine 4"
|
||||
max_tokens = 5
|
||||
|
||||
with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
|
||||
mock_tokenizer = MagicMock()
|
||||
mock_tokenizer.encode.return_value = [1] * 20 # 20 tokens
|
||||
mock_encoder.return_value = mock_tokenizer
|
||||
|
||||
# Without delete_last_line
|
||||
result_normal = clip_tokens(text, max_tokens, delete_last_line=False)
|
||||
|
||||
# With delete_last_line
|
||||
result_deleted = clip_tokens(text, max_tokens, delete_last_line=True)
|
||||
|
||||
# The result with delete_last_line should be shorter or equal
|
||||
assert len(result_deleted) <= len(result_normal)
|
||||
|
||||
def test_pre_computed_num_input_tokens(self):
|
||||
"""Test using pre-computed num_input_tokens parameter."""
|
||||
text = "This is a test text"
|
||||
max_tokens = 10
|
||||
num_input_tokens = 15
|
||||
|
||||
# Should not call the encoder when num_input_tokens is provided
|
||||
with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
|
||||
mock_encoder.return_value = None # Should not be called
|
||||
|
||||
result = clip_tokens(text, max_tokens, num_input_tokens=num_input_tokens)
|
||||
assert result.endswith("\n...(truncated)")
|
||||
mock_encoder.assert_not_called()
|
||||
|
||||
def test_pre_computed_tokens_under_limit(self):
|
||||
"""Test pre-computed tokens under the limit."""
|
||||
text = "Short text"
|
||||
max_tokens = 20
|
||||
num_input_tokens = 5
|
||||
|
||||
with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
|
||||
mock_encoder.return_value = None # Should not be called
|
||||
|
||||
result = clip_tokens(text, max_tokens, num_input_tokens=num_input_tokens)
|
||||
assert result == text
|
||||
mock_encoder.assert_not_called()
|
||||
|
||||
    def test_special_characters_and_unicode(self):
        """Test text with special characters and Unicode content."""
        text = "Special chars: @#$%^&*()_+ áéíóú 中문 🚀 emoji"
        max_tokens = 5

        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
            mock_tokenizer = MagicMock()
            mock_tokenizer.encode.return_value = [1] * 20  # 20 tokens
            mock_encoder.return_value = mock_tokenizer

            result = clip_tokens(text, max_tokens)
            assert isinstance(result, str)
            assert len(result) < len(text)

    def test_multiline_text_handling(self):
        """Test handling of multiline text."""
        text = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5"
        max_tokens = 5

        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
            mock_tokenizer = MagicMock()
            mock_tokenizer.encode.return_value = [1] * 20  # 20 tokens
            mock_encoder.return_value = mock_tokenizer

            result = clip_tokens(text, max_tokens)
            assert isinstance(result, str)

    def test_very_long_text(self):
        """Test with very long text."""
        text = "A" * 10000  # Very long text
        max_tokens = 10

        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
            mock_tokenizer = MagicMock()
            mock_tokenizer.encode.return_value = [1] * 5000  # Many tokens
            mock_encoder.return_value = mock_tokenizer

            result = clip_tokens(text, max_tokens)
            assert len(result) < len(text)
            assert result.endswith("\n...(truncated)")

    def test_encoder_exception_handling(self):
        """Test handling of encoder exceptions."""
        text = "Test text"
        max_tokens = 10

        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
            mock_encoder.side_effect = Exception("Encoder error")

            # Should return original text when encoder fails
            result = clip_tokens(text, max_tokens)
            assert result == text
    def test_zero_division_scenario(self):
        """Test scenario that could lead to division by zero."""
        text = "Test"
        max_tokens = 10

        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
            mock_tokenizer = MagicMock()
            mock_tokenizer.encode.return_value = []  # Empty tokens (could cause division by zero)
            mock_encoder.return_value = mock_tokenizer

            result = clip_tokens(text, max_tokens)
            # Should handle gracefully and return original text
            assert result == text

    def test_various_edge_cases(self):
        """Test various edge cases."""
        # Single character
        assert clip_tokens("A", 1000) == "A"

        # Only whitespace
        text = "   \n  \t  "
        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
            mock_tokenizer = MagicMock()
            mock_tokenizer.encode.return_value = [1] * 10
            mock_encoder.return_value = mock_tokenizer

            result = clip_tokens(text, 5)
            assert isinstance(result, str)

        # Text with only newlines
        text = "\n\n\n\n"
        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
            mock_tokenizer = MagicMock()
            mock_tokenizer.encode.return_value = [1] * 10
            mock_encoder.return_value = mock_tokenizer

            result = clip_tokens(text, 2, delete_last_line=True)
            assert isinstance(result, str)
    def test_parameter_combinations(self):
        """Test different parameter combinations."""
        text = "Multi\nline\ntext\nfor\ntesting"
        max_tokens = 5

        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
            mock_tokenizer = MagicMock()
            mock_tokenizer.encode.return_value = [1] * 20
            mock_encoder.return_value = mock_tokenizer

            # Test all combinations
            combinations = [
                (True, True),    # add_three_dots=True, delete_last_line=True
                (True, False),   # add_three_dots=True, delete_last_line=False
                (False, True),   # add_three_dots=False, delete_last_line=True
                (False, False),  # add_three_dots=False, delete_last_line=False
            ]

            for add_dots, delete_line in combinations:
                result = clip_tokens(text, max_tokens,
                                     add_three_dots=add_dots,
                                     delete_last_line=delete_line)
                assert isinstance(result, str)
                if add_dots and len(result) > 0:
                    assert result.endswith("\n...(truncated)") or result == text
    def test_num_output_chars_zero_scenario(self):
        """Test scenario where num_output_chars becomes zero or negative."""
        text = "Short"
        max_tokens = 1

        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
            mock_tokenizer = MagicMock()
            mock_tokenizer.encode.return_value = [1] * 1000  # Many tokens for short text
            mock_encoder.return_value = mock_tokenizer

            result = clip_tokens(text, max_tokens)
            # When num_output_chars is 0 or negative, should return empty string
            assert result == ""

    def test_logging_on_exception(self):
        """Test that exceptions are properly logged."""
        text = "Test text"
        max_tokens = 10

        # Patch the logger at the module level where it's imported
        with patch('pr_agent.algo.utils.get_logger') as mock_logger:
            mock_log_instance = MagicMock()
            mock_logger.return_value = mock_log_instance

            with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
                mock_encoder.side_effect = Exception("Test exception")

                result = clip_tokens(text, max_tokens)

                # Should log the warning
                mock_log_instance.warning.assert_called_once()
                # Should return original text
                assert result == text
    def test_factor_safety_calculation(self):
        """Test that the 0.9 factor (10% reduction) works correctly."""
        text = "Test text that should be reduced by 10 percent for safety"
        max_tokens = 10

        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
            mock_tokenizer = MagicMock()
            mock_tokenizer.encode.return_value = [1] * 20  # 20 tokens
            mock_encoder.return_value = mock_tokenizer

            result = clip_tokens(text, max_tokens)

            # The result should be shorter due to the 0.9 factor
            # Characters per token = len(text) / 20
            # Expected chars = int(0.9 * (len(text) / 20) * 10)
            expected_chars = int(0.9 * (len(text) / 20) * 10)

            # Result should be around expected_chars length (plus truncation text)
            if result.endswith("\n...(truncated)"):
                actual_content = result[:-len("\n...(truncated)")]
                assert len(actual_content) <= expected_chars + 5  # Some tolerance
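The character-budget arithmetic the test above checks can be worked through in a standalone sketch; the helper name `char_budget` is illustrative and not part of pr-agent:

```python
def char_budget(text_len: int, num_tokens: int, max_tokens: int) -> int:
    # Estimate characters per token, then apply the 10% safety reduction
    chars_per_token = text_len / num_tokens
    return int(0.9 * chars_per_token * max_tokens)

# A 57-character text tokenized into 20 tokens, clipped to 10 tokens:
# int(0.9 * (57 / 20) * 10) == int(25.65) == 25 characters kept
print(char_budget(57, 20, 10))  # 25
```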
    # Test the original basic functionality to ensure backward compatibility
    def test_clip_original_functionality(self):
        """Test original functionality from the existing test."""
        text = "line1\nline2\nline3\nline4\nline5\nline6"
        max_tokens = 25
        result = clip_tokens(text, max_tokens)
@@ -16,4 +305,4 @@ class TestClipTokens:
        max_tokens = 10
        result = clip_tokens(text, max_tokens)
        expected_results = 'line1\nline2\nline3\n\n...(truncated)'
        assert result == expected_results
@@ -1,4 +1,7 @@
# Generated by CodiumAI
import textwrap
from unittest.mock import Mock

from pr_agent.algo.utils import PRReviewHeader, convert_to_markdown_v2
from pr_agent.tools.pr_description import insert_br_after_x_chars
@@ -48,9 +51,174 @@ class TestConvertToMarkdown:
        input_data = {'review': {
            'estimated_effort_to_review_[1-5]': '1, because the changes are minimal and straightforward, focusing on a single functionality addition.\n',
            'relevant_tests': 'No\n', 'possible_issues': 'No\n', 'security_concerns': 'No\n'}}

        expected_output = textwrap.dedent(f"""\
            {PRReviewHeader.REGULAR.value} 🔍

            Here are some key observations to aid the review process:

            <table>
            <tr><td>⏱️ <strong>Estimated effort to review</strong>: 1 🔵⚪⚪⚪⚪</td></tr>
            <tr><td>🧪 <strong>No relevant tests</strong></td></tr>
            <tr><td> <strong>Possible issues</strong>: No
            </td></tr>
            <tr><td>🔒 <strong>No security concerns identified</strong></td></tr>
            </table>
            """)

        assert convert_to_markdown_v2(input_data).strip() == expected_output.strip()
    def test_simple_dictionary_input_without_gfm_supported(self):
        input_data = {'review': {
            'estimated_effort_to_review_[1-5]': '1, because the changes are minimal and straightforward, focusing on a single functionality addition.\n',
            'relevant_tests': 'No\n', 'possible_issues': 'No\n', 'security_concerns': 'No\n'}}

        expected_output = textwrap.dedent("""\
            ## PR Reviewer Guide 🔍

            Here are some key observations to aid the review process:

            ### ⏱️ Estimated effort to review: 1 🔵⚪⚪⚪⚪

            ### 🧪 No relevant tests

            ### Possible issues: No

            ### 🔒 No security concerns identified
            """)

        assert convert_to_markdown_v2(input_data, gfm_supported=False).strip() == expected_output.strip()
    def test_key_issues_to_review(self):
        input_data = {'review': {
            'key_issues_to_review': [
                {
                    'relevant_file': 'src/utils.py',
                    'issue_header': 'Code Smell',
                    'issue_content': 'The function is too long and complex.',
                    'start_line': 30,
                    'end_line': 50,
                }
            ]
        }}
        mock_git_provider = Mock()
        reference_link = 'https://github.com/qodo/pr-agent/pull/1/files#diff-hashvalue-R174'
        mock_git_provider.get_line_link.return_value = reference_link

        expected_output = textwrap.dedent(f"""\
            ## PR Reviewer Guide 🔍

            Here are some key observations to aid the review process:

            <table>
            <tr><td>⚡ <strong>Recommended focus areas for review</strong><br><br>

            <a href='{reference_link}'><strong>Code Smell</strong></a><br>The function is too long and complex.

            </td></tr>
            </table>
            """)

        assert convert_to_markdown_v2(input_data, git_provider=mock_git_provider).strip() == expected_output.strip()
        mock_git_provider.get_line_link.assert_called_with('src/utils.py', 30, 50)
    def test_ticket_compliance(self):
        input_data = {'review': {
            'ticket_compliance_check': [
                {
                    'ticket_url': 'https://example.com/ticket/123',
                    'ticket_requirements': '- Requirement 1\n- Requirement 2\n',
                    'fully_compliant_requirements': '- Requirement 1\n- Requirement 2\n',
                    'not_compliant_requirements': '',
                    'requires_further_human_verification': '',
                }
            ]
        }}

        expected_output = textwrap.dedent("""\
            ## PR Reviewer Guide 🔍

            Here are some key observations to aid the review process:

            <table>
            <tr><td>

            **🎫 Ticket compliance analysis ✅**



            **[123](https://example.com/ticket/123) - Fully compliant**

            Compliant requirements:

            - Requirement 1
            - Requirement 2



            </td></tr>
            </table>
            """)

        assert convert_to_markdown_v2(input_data).strip() == expected_output.strip()
    def test_can_be_split(self):
        input_data = {'review': {
            'can_be_split': [
                {
                    'relevant_files': [
                        'src/file1.py',
                        'src/file2.py'
                    ],
                    'title': 'Refactoring',
                },
                {
                    'relevant_files': [
                        'src/file3.py'
                    ],
                    'title': 'Bug Fix',
                }
            ]
        }}

        expected_output = textwrap.dedent("""\
            ## PR Reviewer Guide 🔍

            Here are some key observations to aid the review process:

            <table>
            <tr><td>🔀 <strong>Multiple PR themes</strong><br><br>

            <details><summary>
            Sub-PR theme: <b>Refactoring</b></summary>

            ___

            Relevant files:

            - src/file1.py
            - src/file2.py
            ___

            </details>

            <details><summary>
            Sub-PR theme: <b>Bug Fix</b></summary>

            ___

            Relevant files:

            - src/file3.py
            ___

            </details>

            </td></tr>
            </table>
            """)

        assert convert_to_markdown_v2(input_data).strip() == expected_output.strip()
tests/unittest/test_fix_json_escape_char.py (new file, 21 lines)
@@ -0,0 +1,21 @@
from pr_agent.algo.utils import fix_json_escape_char


class TestFixJsonEscapeChar:
    def test_valid_json(self):
        """Return unchanged when the input JSON is already valid"""
        text = '{"a": 1, "b": "ok"}'
        expected_output = {"a": 1, "b": "ok"}
        assert fix_json_escape_char(text) == expected_output

    def test_single_control_char(self):
        """Remove a single ASCII control character"""
        text = '{"msg": "hel\x01lo"}'
        expected_output = {"msg": "hel lo"}
        assert fix_json_escape_char(text) == expected_output

    def test_multiple_control_chars(self):
        """Remove multiple control characters recursively"""
        text = '{"x": "A\x02B\x03C"}'
        expected_output = {"x": "A B C"}
        assert fix_json_escape_char(text) == expected_output
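The behavior these tests pin down, replacing ASCII control characters with spaces until the JSON parses, can be sketched as a standalone helper. This is an illustrative reimplementation (assuming the offending character can be located via `json.JSONDecodeError.pos`), not pr-agent's actual code:

```python
import json


def fix_control_chars(text: str):
    """Replace ASCII control characters that break parsing with spaces, then parse."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        # e.pos points at the character the parser choked on
        if 0 <= e.pos < len(text) and ord(text[e.pos]) < 32:
            # Swap the offending character for a space and retry recursively
            return fix_control_chars(text[:e.pos] + ' ' + text[e.pos + 1:])
        raise

print(fix_control_chars('{"msg": "hel\x01lo"}'))  # {'msg': 'hel lo'}
```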
tests/unittest/test_get_max_tokens.py (new file, 67 lines)
@@ -0,0 +1,67 @@
import pytest
from pr_agent.algo.utils import get_max_tokens, MAX_TOKENS
import pr_agent.algo.utils as utils


class TestGetMaxTokens:

    # Test when the model is present in MAX_TOKENS
    def test_model_max_tokens(self, monkeypatch):
        fake_settings = type('', (), {
            'config': type('', (), {
                'custom_model_max_tokens': 0,
                'max_model_tokens': 0
            })()
        })()

        monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)

        model = "gpt-3.5-turbo"
        expected = MAX_TOKENS[model]

        assert get_max_tokens(model) == expected

    # Test when the model is not registered but exists as a custom model
    def test_model_has_custom(self, monkeypatch):
        fake_settings = type('', (), {
            'config': type('', (), {
                'custom_model_max_tokens': 5000,
                'max_model_tokens': 0  # no limit
            })()
        })()

        monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)

        model = "custom-model"
        expected = 5000

        assert get_max_tokens(model) == expected

    def test_model_not_max_tokens_and_not_has_custom(self, monkeypatch):
        fake_settings = type('', (), {
            'config': type('', (), {
                'custom_model_max_tokens': 0,
                'max_model_tokens': 0
            })()
        })()

        monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)

        model = "custom-model"

        with pytest.raises(Exception):
            get_max_tokens(model)

    def test_model_max_tokens_with_limit(self, monkeypatch):
        fake_settings = type('', (), {
            'config': type('', (), {
                'custom_model_max_tokens': 0,
                'max_model_tokens': 10000
            })()
        })()

        monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)

        model = "gpt-3.5-turbo"  # this model's registered limit exceeds 10000
        expected = 10000

        assert get_max_tokens(model) == expected
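The four tests above imply a resolution order for the token limit: registered per-model value, then a custom fallback, then an error, with an optional hard cap applied last. A minimal sketch under that reading (the `MAX_TOKENS` entry and function below are illustrative, not pr-agent's implementation):

```python
MAX_TOKENS = {"gpt-3.5-turbo": 16000}  # illustrative registry entry


def resolve_max_tokens(model: str, custom_model_max_tokens: int = 0,
                       max_model_tokens: int = 0) -> int:
    # 1) Prefer the registered per-model limit
    if model in MAX_TOKENS:
        limit = MAX_TOKENS[model]
    # 2) Fall back to a user-supplied custom limit
    elif custom_model_max_tokens > 0:
        limit = custom_model_max_tokens
    # 3) Otherwise the model is unknown
    else:
        raise ValueError(f"Model {model} is not in MAX_TOKENS "
                         "and no custom_model_max_tokens is set")
    # 4) An explicit max_model_tokens caps whatever was resolved
    if max_model_tokens > 0:
        limit = min(limit, max_model_tokens)
    return limit

print(resolve_max_tokens("gpt-3.5-turbo", max_model_tokens=10000))  # 10000
```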
tests/unittest/test_gitea_provider.py (new file, 126 lines)
@@ -0,0 +1,126 @@
# from unittest.mock import MagicMock, patch
#
# import pytest
#
# from pr_agent.algo.types import EDIT_TYPE
# from pr_agent.git_providers.gitea_provider import GiteaProvider
#
#
# class TestGiteaProvider:
#     """Unit-tests for GiteaProvider following project style (explicit object construction, minimal patching)."""
#
#     def _provider(self):
#         """Create provider instance with patched settings and avoid real HTTP calls."""
#         with patch('pr_agent.git_providers.gitea_provider.get_settings') as mock_get_settings, \
#                 patch('requests.get') as mock_get:
#             settings = MagicMock()
#             settings.get.side_effect = lambda k, d=None: {
#                 'GITEA.URL': 'https://gitea.example.com',
#                 'GITEA.PERSONAL_ACCESS_TOKEN': 'test-token'
#             }.get(k, d)
#             mock_get_settings.return_value = settings
#             # Stub the PR fetch triggered during provider initialization
#             pr_resp = MagicMock()
#             pr_resp.json.return_value = {
#                 'title': 'stub',
#                 'body': 'stub',
#                 'head': {'ref': 'main'},
#                 'user': {'id': 1}
#             }
#             pr_resp.raise_for_status = MagicMock()
#             mock_get.return_value = pr_resp
#             return GiteaProvider('https://gitea.example.com/owner/repo/pulls/123')
#
#     # ---------------- URL parsing ----------------
#     def test_parse_pr_url_valid(self):
#         owner, repo, pr_num = self._provider()._parse_pr_url('https://gitea.example.com/owner/repo/pulls/123')
#         assert (owner, repo, pr_num) == ('owner', 'repo', '123')
#
#     def test_parse_pr_url_invalid(self):
#         with pytest.raises(ValueError):
#             GiteaProvider._parse_pr_url('https://gitea.example.com/owner/repo')
#
#     # ---------------- simple getters ----------------
#     def test_get_files(self):
#         provider = self._provider()
#         mock_resp = MagicMock()
#         mock_resp.json.return_value = [{'filename': 'a.txt'}, {'filename': 'b.txt'}]
#         mock_resp.raise_for_status = MagicMock()
#         with patch('requests.get', return_value=mock_resp) as mock_get:
#             assert provider.get_files() == ['a.txt', 'b.txt']
#             mock_get.assert_called_once()
#
#     def test_get_diff_files(self):
#         provider = self._provider()
#         mock_resp = MagicMock()
#         mock_resp.json.return_value = [
#             {'filename': 'f1', 'previous_filename': 'old_f1', 'status': 'renamed', 'patch': ''},
#             {'filename': 'f2', 'status': 'added', 'patch': ''},
#             {'filename': 'f3', 'status': 'deleted', 'patch': ''},
#             {'filename': 'f4', 'status': 'modified', 'patch': ''}
#         ]
#         mock_resp.raise_for_status = MagicMock()
#         with patch('requests.get', return_value=mock_resp):
#             res = provider.get_diff_files()
#             assert [f.edit_type for f in res] == [EDIT_TYPE.RENAMED, EDIT_TYPE.ADDED, EDIT_TYPE.DELETED,
#                                                  EDIT_TYPE.MODIFIED]
#
#     # ---------------- publishing methods ----------------
#     def test_publish_description(self):
#         provider = self._provider()
#         mock_resp = MagicMock()
#         mock_resp.raise_for_status = MagicMock()
#         with patch('requests.patch', return_value=mock_resp) as mock_patch:
#             provider.publish_description('t', 'b')
#             mock_patch.assert_called_once()
#
#     def test_publish_comment(self):
#         provider = self._provider()
#         mock_resp = MagicMock()
#         mock_resp.raise_for_status = MagicMock()
#         with patch('requests.post', return_value=mock_resp) as mock_post:
#             provider.publish_comment('c')
#             mock_post.assert_called_once()
#
#     def test_publish_inline_comment(self):
#         provider = self._provider()
#         mock_resp = MagicMock()
#         mock_resp.raise_for_status = MagicMock()
#         with patch('requests.post', return_value=mock_resp) as mock_post:
#             provider.publish_inline_comment('body', 'file', '10')
#             mock_post.assert_called_once()
#
#     # ---------------- labels & reactions ----------------
#     def test_get_pr_labels(self):
#         provider = self._provider()
#         mock_resp = MagicMock()
#         mock_resp.raise_for_status = MagicMock()
#         mock_resp.json.return_value = [{'name': 'l1'}]
#         with patch('requests.get', return_value=mock_resp):
#             assert provider.get_pr_labels() == ['l1']
#
#     def test_add_eyes_reaction(self):
#         provider = self._provider()
#         mock_resp = MagicMock()
#         mock_resp.raise_for_status = MagicMock()
#         mock_resp.json.return_value = {'id': 7}
#         with patch('requests.post', return_value=mock_resp):
#             assert provider.add_eyes_reaction(1) == 7
#
#     # ---------------- commit messages & url helpers ----------------
#     def test_get_commit_messages(self):
#         provider = self._provider()
#         mock_resp = MagicMock()
#         mock_resp.raise_for_status = MagicMock()
#         mock_resp.json.return_value = [
#             {'commit': {'message': 'm1'}}, {'commit': {'message': 'm2'}}]
#         with patch('requests.get', return_value=mock_resp):
#             assert provider.get_commit_messages() == ['m1', 'm2']
#
#     def test_git_url_helpers(self):
#         provider = self._provider()
#         issues_url = 'https://gitea.example.com/owner/repo/pulls/3'
#         assert provider.get_git_repo_url(issues_url) == 'https://gitea.example.com/owner/repo.git'
#         prefix, suffix = provider.get_canonical_url_parts('https://gitea.example.com/owner/repo.git', 'dev')
#         assert prefix == 'https://gitea.example.com/owner/repo/src/branch/dev'
#         assert suffix == ''
@@ -79,13 +79,14 @@ class TestSortFilesByMainLanguages:
        files = [
            type('', (object,), {'filename': 'file1.py'})(),
            type('', (object,), {'filename': 'file2.java'})(),
-           type('', (object,), {'filename': 'file3.cpp'})()
+           type('', (object,), {'filename': 'file3.cpp'})(),
+           type('', (object,), {'filename': 'file3.test'})()
        ]
        expected_output = [
            {'language': 'Python', 'files': [files[0]]},
            {'language': 'Java', 'files': [files[1]]},
            {'language': 'C++', 'files': [files[2]]},
-           {'language': 'Other', 'files': []}
+           {'language': 'Other', 'files': [files[3]]}
        ]
        assert sort_files_by_main_languages(languages, files) == expected_output
@@ -32,11 +32,6 @@ age: 35
        expected_output = {'name': 'John Smith', 'age': 35}
        assert try_fix_yaml(review_text) == expected_output

-    # The function removes the last line(s) of the YAML string and successfully parses the YAML string.
-    def test_remove_last_line(self):
-        review_text = "key: value\nextra invalid line\n"
-        expected_output = {"key": "value"}
-        assert try_fix_yaml(review_text) == expected_output

    # The YAML string is empty.
    def test_empty_yaml_fixed(self):
@@ -58,12 +53,12 @@ code_suggestions:
  - relevant_file: |
      src/index2.ts
    label: |
-      enhancment
+      enhancement
```

We can further improve the code by using the `const` keyword instead of `var` in the `src/index.ts` file.
'''
-        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancment'}]}
+        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancement'}]}

        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='label') == expected_output
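The `test_remove_last_line` case above exercises a common fallback: drop trailing lines until the YAML parses. A standalone sketch of that strategy, using PyYAML directly (`salvage_yaml` is an illustrative name, not the project's API):

```python
import yaml


def salvage_yaml(text: str):
    """Drop trailing lines one at a time until the remainder parses as a mapping."""
    lines = text.rstrip().splitlines()
    while lines:
        try:
            data = yaml.safe_load('\n'.join(lines))
            if isinstance(data, dict):
                return data
        except yaml.YAMLError:
            pass
        lines.pop()  # discard the last (possibly invalid) line and retry
    return None

print(salvage_yaml("key: value\nextra invalid line\n"))  # {'key': 'value'}
```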
@@ -81,10 +76,178 @@ code_suggestions:
  - relevant_file: |
      src/index2.ts
    label: |
-      enhancment
+      enhancement
```

We can further improve the code by using the `const` keyword instead of `var` in the `src/index.ts` file.
'''
-        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancment'}]}
+        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancement'}]}
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='label') == expected_output
    def test_with_brackets_yaml_content(self):
        review_text = '''\
{
code_suggestions:
  - relevant_file: |
      src/index.ts
    label: |
      best practice

  - relevant_file: |
      src/index2.ts
    label: |
      enhancement
}
'''
        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancement'}]}
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='label') == expected_output
    def test_tab_indent_yaml(self):
        review_text = '''\
code_suggestions:
  - relevant_file: |
      src/index.ts
    label: |
\tbest practice

  - relevant_file: |
      src/index2.ts
    label: |
      enhancement
'''
        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancement\n'}]}
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='label') == expected_output
    def test_leading_plus_mark_code(self):
        review_text = '''\
code_suggestions:
  - relevant_file: |
      src/index.ts
    label: |
      best practice
    existing_code: |
      + var router = createBrowserRouter([
    improved_code: |
      + const router = createBrowserRouter([
'''
        expected_output = {'code_suggestions': [{
            'relevant_file': 'src/index.ts\n',
            'label': 'best practice\n',
            'existing_code': 'var router = createBrowserRouter([\n',
            'improved_code': 'const router = createBrowserRouter([\n'
        }]}
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='improved_code') == expected_output
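The expectation in `test_leading_plus_mark_code`, where `+ var ...` comes back as `var ...`, suggests a cleanup pass that strips diff-style `+` markers an LLM copied into code lines. A hedged standalone sketch of such a pass (not pr-agent's implementation; note it would also mangle legitimate lines that start with `+`, such as a unary plus):

```python
def strip_plus_markers(code: str) -> str:
    # Remove a leading '+' (diff addition marker) from each code line
    cleaned = []
    for line in code.splitlines(keepends=True):
        if line.lstrip().startswith('+'):
            # Drop the marker and any whitespace that followed it
            cleaned.append(line.lstrip()[1:].lstrip())
        else:
            cleaned.append(line)
    return ''.join(cleaned)

print(strip_plus_markers('+ var router = createBrowserRouter([\n'))
```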
    def test_inconsistent_indentation_in_block_scalar_yaml(self):
        """
        This test case represents a situation where the AI outputs the opening '{' with 5 spaces
        (resulting in an inferred indent level of 5), while the closing '}' is output with only 4 spaces.
        This inconsistency makes it impossible for the YAML parser to automatically determine the correct
        indent level, causing a parsing failure.

        The root cause may be the LLM miscounting spaces or misunderstanding the active block scalar context
        while generating YAML output.
        """

        review_text = '''\
code_suggestions:
- relevant_file: |
    tsconfig.json
  existing_code: |
     {
       "key1": "value1",
       "key2": {
         "subkey": "value"
       }
    }
'''
        expected_json = '''\
{
  "key1": "value1",
  "key2": {
    "subkey": "value"
  }
}
'''
        expected_output = {
            'code_suggestions': [{
                'relevant_file': 'tsconfig.json\n',
                'existing_code': expected_json
            }]
        }
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='existing_code') == expected_output
    def test_inconsistent_and_insufficient_indentation_in_block_scalar_yaml(self):
        """
        This test case reproduces a YAML parsing failure where the block scalar content
        generated by the AI includes inconsistent and insufficient indentation levels.

        The root cause may be the LLM miscounting spaces or misunderstanding the active block scalar context
        while generating YAML output.
        """

        review_text = '''\
code_suggestions:
- relevant_file: |
    tsconfig.json
  existing_code: |
   {
     "key1": "value1",
     "key2": {
       "subkey": "value"
     }
   }
'''
        expected_json = '''\
{
  "key1": "value1",
  "key2": {
    "subkey": "value"
  }
}
'''
        expected_output = {
            'code_suggestions': [{
                'relevant_file': 'tsconfig.json\n',
                'existing_code': expected_json
            }]
        }
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='existing_code') == expected_output
    def test_wrong_indentation_code_block_scalar(self):
        review_text = '''\
code_suggestions:
- relevant_file: |
    a.c
  existing_code: |
  int sum(int a, int b) {
    return a + b;
  }

  int sub(int a, int b) {
    return a - b;
  }
'''
        expected_code_block = '''\
int sum(int a, int b) {
  return a + b;
}

int sub(int a, int b) {
  return a - b;
}
'''
        expected_output = {
            "code_suggestions": [
                {
                    "relevant_file": "a.c\n",
                    "existing_code": expected_code_block
                }
            ]
        }
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='existing_code') == expected_output