Mirror of https://github.com/qodo-ai/pr-agent.git
Synced 2025-07-06 22:00:40 +08:00

Compare commits (of/repo-st… ... es/add_qm_…): 212 commits
[Commit table: 212 entries; only the SHA1 column was captured (af351cada2 … 05ab5f699f), with author and date columns empty.]
LICENSE (797 lines changed)
@@ -1,202 +1,661 @@
-Apache License
-Version 2.0, January 2004
-http://www.apache.org/licenses/
 …
-Copyright [2023] [Codium ltd]

+GNU AFFERO GENERAL PUBLIC LICENSE
+Version 3, 19 November 2007
+Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 …

[The side-by-side diff view interleaved both license texts line by line. In summary: the full Apache License, Version 2.0 text, including the "Copyright [2023] [Codium ltd]" notice and its appendix, is removed, and the GNU Affero General Public License, Version 3 text (661 lines per the hunk header) is added in its place. The captured portion of the added text ends partway through section 6, "Conveying Non-Source Forms".]
|
||||||
|
|
||||||
|
The requirement to provide Installation Information does not include a
|
||||||
|
requirement to continue to provide support service, warranty, or updates
|
||||||
|
for a work that has been modified or installed by the recipient, or for
|
||||||
|
the User Product in which it has been modified or installed. Access to a
|
||||||
|
network may be denied when the modification itself materially and
|
||||||
|
adversely affects the operation of the network or violates the rules and
|
||||||
|
protocols for communication across the network.
|
||||||
|
|
||||||
|
Corresponding Source conveyed, and Installation Information provided,
|
||||||
|
in accord with this section must be in a format that is publicly
|
||||||
|
documented (and with an implementation available to the public in
|
||||||
|
source code form), and must require no special password or key for
|
||||||
|
unpacking, reading or copying.
|
||||||
|
|
||||||
|
7. Additional Terms.
|
||||||
|
|
||||||
|
"Additional permissions" are terms that supplement the terms of this
|
||||||
|
License by making exceptions from one or more of its conditions.
|
||||||
|
Additional permissions that are applicable to the entire Program shall
|
||||||
|
be treated as though they were included in this License, to the extent
|
||||||
|
that they are valid under applicable law. If additional permissions
|
||||||
|
apply only to part of the Program, that part may be used separately
|
||||||
|
under those permissions, but the entire Program remains governed by
|
||||||
|
this License without regard to the additional permissions.
|
||||||
|
|
||||||
|
When you convey a copy of a covered work, you may at your option
|
||||||
|
remove any additional permissions from that copy, or from any part of
|
||||||
|
it. (Additional permissions may be written to require their own
|
||||||
|
removal in certain cases when you modify the work.) You may place
|
||||||
|
additional permissions on material, added by you to a covered work,
|
||||||
|
for which you have or can give appropriate copyright permission.
|
||||||
|
|
||||||
|
Notwithstanding any other provision of this License, for material you
|
||||||
|
add to a covered work, you may (if authorized by the copyright holders of
|
||||||
|
that material) supplement the terms of this License with terms:
|
||||||
|
|
||||||
|
a) Disclaiming warranty or limiting liability differently from the
|
||||||
|
terms of sections 15 and 16 of this License; or
|
||||||
|
|
||||||
|
b) Requiring preservation of specified reasonable legal notices or
|
||||||
|
author attributions in that material or in the Appropriate Legal
|
||||||
|
Notices displayed by works containing it; or
|
||||||
|
|
||||||
|
c) Prohibiting misrepresentation of the origin of that material, or
|
||||||
|
requiring that modified versions of such material be marked in
|
||||||
|
reasonable ways as different from the original version; or
|
||||||
|
|
||||||
|
d) Limiting the use for publicity purposes of names of licensors or
|
||||||
|
authors of the material; or
|
||||||
|
|
||||||
|
e) Declining to grant rights under trademark law for use of some
|
||||||
|
trade names, trademarks, or service marks; or
|
||||||
|
|
||||||
|
f) Requiring indemnification of licensors and authors of that
|
||||||
|
material by anyone who conveys the material (or modified versions of
|
||||||
|
it) with contractual assumptions of liability to the recipient, for
|
||||||
|
any liability that these contractual assumptions directly impose on
|
||||||
|
those licensors and authors.
|
||||||
|
|
||||||
|
All other non-permissive additional terms are considered "further
|
||||||
|
restrictions" within the meaning of section 10. If the Program as you
|
||||||
|
received it, or any part of it, contains a notice stating that it is
|
||||||
|
governed by this License along with a term that is a further
|
||||||
|
restriction, you may remove that term. If a license document contains
|
||||||
|
a further restriction but permits relicensing or conveying under this
|
||||||
|
License, you may add to a covered work material governed by the terms
|
||||||
|
of that license document, provided that the further restriction does
|
||||||
|
not survive such relicensing or conveying.
|
||||||
|
|
||||||
|
If you add terms to a covered work in accord with this section, you
|
||||||
|
must place, in the relevant source files, a statement of the
|
||||||
|
additional terms that apply to those files, or a notice indicating
|
||||||
|
where to find the applicable terms.
|
||||||
|
|
||||||
|
Additional terms, permissive or non-permissive, may be stated in the
|
||||||
|
form of a separately written license, or stated as exceptions;
|
||||||
|
the above requirements apply either way.
|
||||||
|
|
||||||
|
8. Termination.
|
||||||
|
|
||||||
|
You may not propagate or modify a covered work except as expressly
|
||||||
|
provided under this License. Any attempt otherwise to propagate or
|
||||||
|
modify it is void, and will automatically terminate your rights under
|
||||||
|
this License (including any patent licenses granted under the third
|
||||||
|
paragraph of section 11).
|
||||||
|
|
||||||
|
However, if you cease all violation of this License, then your
|
||||||
|
license from a particular copyright holder is reinstated (a)
|
||||||
|
provisionally, unless and until the copyright holder explicitly and
|
||||||
|
finally terminates your license, and (b) permanently, if the copyright
|
||||||
|
holder fails to notify you of the violation by some reasonable means
|
||||||
|
prior to 60 days after the cessation.
|
||||||
|
|
||||||
|
Moreover, your license from a particular copyright holder is
|
||||||
|
reinstated permanently if the copyright holder notifies you of the
|
||||||
|
violation by some reasonable means, this is the first time you have
|
||||||
|
received notice of violation of this License (for any work) from that
|
||||||
|
copyright holder, and you cure the violation prior to 30 days after
|
||||||
|
your receipt of the notice.
|
||||||
|
|
||||||
|
Termination of your rights under this section does not terminate the
|
||||||
|
licenses of parties who have received copies or rights from you under
|
||||||
|
this License. If your rights have been terminated and not permanently
|
||||||
|
reinstated, you do not qualify to receive new licenses for the same
|
||||||
|
material under section 10.
|
||||||
|
|
||||||
|
9. Acceptance Not Required for Having Copies.
|
||||||
|
|
||||||
|
You are not required to accept this License in order to receive or
|
||||||
|
run a copy of the Program. Ancillary propagation of a covered work
|
||||||
|
occurring solely as a consequence of using peer-to-peer transmission
|
||||||
|
to receive a copy likewise does not require acceptance. However,
|
||||||
|
nothing other than this License grants you permission to propagate or
|
||||||
|
modify any covered work. These actions infringe copyright if you do
|
||||||
|
not accept this License. Therefore, by modifying or propagating a
|
||||||
|
covered work, you indicate your acceptance of this License to do so.
|
||||||
|
|
||||||
|
10. Automatic Licensing of Downstream Recipients.
|
||||||
|
|
||||||
|
Each time you convey a covered work, the recipient automatically
|
||||||
|
receives a license from the original licensors, to run, modify and
|
||||||
|
propagate that work, subject to this License. You are not responsible
|
||||||
|
for enforcing compliance by third parties with this License.
|
||||||
|
|
||||||
|
An "entity transaction" is a transaction transferring control of an
|
||||||
|
organization, or substantially all assets of one, or subdividing an
|
||||||
|
organization, or merging organizations. If propagation of a covered
|
||||||
|
work results from an entity transaction, each party to that
|
||||||
|
transaction who receives a copy of the work also receives whatever
|
||||||
|
licenses to the work the party's predecessor in interest had or could
|
||||||
|
give under the previous paragraph, plus a right to possession of the
|
||||||
|
Corresponding Source of the work from the predecessor in interest, if
|
||||||
|
the predecessor has it or can get it with reasonable efforts.
|
||||||
|
|
||||||
|
You may not impose any further restrictions on the exercise of the
|
||||||
|
rights granted or affirmed under this License. For example, you may
|
||||||
|
not impose a license fee, royalty, or other charge for exercise of
|
||||||
|
rights granted under this License, and you may not initiate litigation
|
||||||
|
(including a cross-claim or counterclaim in a lawsuit) alleging that
|
||||||
|
any patent claim is infringed by making, using, selling, offering for
|
||||||
|
sale, or importing the Program or any portion of it.
|
||||||
|
|
||||||
|
11. Patents.
|
||||||
|
|
||||||
|
A "contributor" is a copyright holder who authorizes use under this
|
||||||
|
License of the Program or a work on which the Program is based. The
|
||||||
|
work thus licensed is called the contributor's "contributor version".
|
||||||
|
|
||||||
|
A contributor's "essential patent claims" are all patent claims
|
||||||
|
owned or controlled by the contributor, whether already acquired or
|
||||||
|
hereafter acquired, that would be infringed by some manner, permitted
|
||||||
|
by this License, of making, using, or selling its contributor version,
|
||||||
|
but do not include claims that would be infringed only as a
|
||||||
|
consequence of further modification of the contributor version. For
|
||||||
|
purposes of this definition, "control" includes the right to grant
|
||||||
|
patent sublicenses in a manner consistent with the requirements of
|
||||||
|
this License.
|
||||||
|
|
||||||
|
Each contributor grants you a non-exclusive, worldwide, royalty-free
|
||||||
|
patent license under the contributor's essential patent claims, to
|
||||||
|
make, use, sell, offer for sale, import and otherwise run, modify and
|
||||||
|
propagate the contents of its contributor version.
|
||||||
|
|
||||||
|
In the following three paragraphs, a "patent license" is any express
|
||||||
|
agreement or commitment, however denominated, not to enforce a patent
|
||||||
|
(such as an express permission to practice a patent or covenant not to
|
||||||
|
sue for patent infringement). To "grant" such a patent license to a
|
||||||
|
party means to make such an agreement or commitment not to enforce a
|
||||||
|
patent against the party.
|
||||||
|
|
||||||
|
If you convey a covered work, knowingly relying on a patent license,
|
||||||
|
and the Corresponding Source of the work is not available for anyone
|
||||||
|
to copy, free of charge and under the terms of this License, through a
|
||||||
|
publicly available network server or other readily accessible means,
|
||||||
|
then you must either (1) cause the Corresponding Source to be so
|
||||||
|
available, or (2) arrange to deprive yourself of the benefit of the
|
||||||
|
patent license for this particular work, or (3) arrange, in a manner
|
||||||
|
consistent with the requirements of this License, to extend the patent
|
||||||
|
license to downstream recipients. "Knowingly relying" means you have
|
||||||
|
actual knowledge that, but for the patent license, your conveying the
|
||||||
|
covered work in a country, or your recipient's use of the covered work
|
||||||
|
in a country, would infringe one or more identifiable patents in that
|
||||||
|
country that you have reason to believe are valid.
|
||||||
|
|
||||||
|
If, pursuant to or in connection with a single transaction or
|
||||||
|
arrangement, you convey, or propagate by procuring conveyance of, a
|
||||||
|
covered work, and grant a patent license to some of the parties
|
||||||
|
receiving the covered work authorizing them to use, propagate, modify
|
||||||
|
or convey a specific copy of the covered work, then the patent license
|
||||||
|
you grant is automatically extended to all recipients of the covered
|
||||||
|
work and works based on it.
|
||||||
|
|
||||||
|
A patent license is "discriminatory" if it does not include within
|
||||||
|
the scope of its coverage, prohibits the exercise of, or is
|
||||||
|
conditioned on the non-exercise of one or more of the rights that are
|
||||||
|
specifically granted under this License. You may not convey a covered
|
||||||
|
work if you are a party to an arrangement with a third party that is
|
||||||
|
in the business of distributing software, under which you make payment
|
||||||
|
to the third party based on the extent of your activity of conveying
|
||||||
|
the work, and under which the third party grants, to any of the
|
||||||
|
parties who would receive the covered work from you, a discriminatory
|
||||||
|
patent license (a) in connection with copies of the covered work
|
||||||
|
conveyed by you (or copies made from those copies), or (b) primarily
|
||||||
|
for and in connection with specific products or compilations that
|
||||||
|
contain the covered work, unless you entered into that arrangement,
|
||||||
|
or that patent license was granted, prior to 28 March 2007.
|
||||||
|
|
||||||
|
Nothing in this License shall be construed as excluding or limiting
|
||||||
|
any implied license or other defenses to infringement that may
|
||||||
|
otherwise be available to you under applicable patent law.
|
||||||
|
|
||||||
|
12. No Surrender of Others' Freedom.
|
||||||
|
|
||||||
|
If conditions are imposed on you (whether by court order, agreement or
|
||||||
|
otherwise) that contradict the conditions of this License, they do not
|
||||||
|
excuse you from the conditions of this License. If you cannot convey a
|
||||||
|
covered work so as to satisfy simultaneously your obligations under this
|
||||||
|
License and any other pertinent obligations, then as a consequence you may
|
||||||
|
not convey it at all. For example, if you agree to terms that obligate you
|
||||||
|
to collect a royalty for further conveying from those to whom you convey
|
||||||
|
the Program, the only way you could satisfy both those terms and this
|
||||||
|
License would be to refrain entirely from conveying the Program.
|
||||||
|
|
||||||
|
13. Remote Network Interaction; Use with the GNU General Public License.
|
||||||
|
|
||||||
|
Notwithstanding any other provision of this License, if you modify the
|
||||||
|
Program, your modified version must prominently offer all users
|
||||||
|
interacting with it remotely through a computer network (if your version
|
||||||
|
supports such interaction) an opportunity to receive the Corresponding
|
||||||
|
Source of your version by providing access to the Corresponding Source
|
||||||
|
from a network server at no charge, through some standard or customary
|
||||||
|
means of facilitating copying of software. This Corresponding Source
|
||||||
|
shall include the Corresponding Source for any work covered by version 3
|
||||||
|
of the GNU General Public License that is incorporated pursuant to the
|
||||||
|
following paragraph.
|
||||||
|
|
||||||
|
Notwithstanding any other provision of this License, you have
|
||||||
|
permission to link or combine any covered work with a work licensed
|
||||||
|
under version 3 of the GNU General Public License into a single
|
||||||
|
combined work, and to convey the resulting work. The terms of this
|
||||||
|
License will continue to apply to the part which is the covered work,
|
||||||
|
but the work with which it is combined will remain governed by version
|
||||||
|
3 of the GNU General Public License.
|
||||||
|
|
||||||
|
14. Revised Versions of this License.
|
||||||
|
|
||||||
|
The Free Software Foundation may publish revised and/or new versions of
|
||||||
|
the GNU Affero General Public License from time to time. Such new versions
|
||||||
|
will be similar in spirit to the present version, but may differ in detail to
|
||||||
|
address new problems or concerns.
|
||||||
|
|
||||||
|
Each version is given a distinguishing version number. If the
|
||||||
|
Program specifies that a certain numbered version of the GNU Affero General
|
||||||
|
Public License "or any later version" applies to it, you have the
|
||||||
|
option of following the terms and conditions either of that numbered
|
||||||
|
version or of any later version published by the Free Software
|
||||||
|
Foundation. If the Program does not specify a version number of the
|
||||||
|
GNU Affero General Public License, you may choose any version ever published
|
||||||
|
by the Free Software Foundation.
|
||||||
|
|
||||||
|
If the Program specifies that a proxy can decide which future
|
||||||
|
versions of the GNU Affero General Public License can be used, that proxy's
|
||||||
|
public statement of acceptance of a version permanently authorizes you
|
||||||
|
to choose that version for the Program.
|
||||||
|
|
||||||
|
Later license versions may give you additional or different
|
||||||
|
permissions. However, no additional obligations are imposed on any
|
||||||
|
author or copyright holder as a result of your choosing to follow a
|
||||||
|
later version.
|
||||||
|
|
||||||
|
15. Disclaimer of Warranty.
|
||||||
|
|
||||||
|
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||||
|
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||||
|
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||||
|
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||||
|
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||||
|
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||||
|
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||||
|
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||||
|
|
||||||
|
16. Limitation of Liability.
|
||||||
|
|
||||||
|
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||||
|
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||||
|
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||||
|
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||||
|
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||||
|
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||||
|
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||||
|
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||||
|
SUCH DAMAGES.
|
||||||
|
|
||||||
|
17. Interpretation of Sections 15 and 16.
|
||||||
|
|
||||||
|
If the disclaimer of warranty and limitation of liability provided
|
||||||
|
above cannot be given local legal effect according to their terms,
|
||||||
|
reviewing courts shall apply local law that most closely approximates
|
||||||
|
an absolute waiver of all civil liability in connection with the
|
||||||
|
Program, unless a warranty or assumption of liability accompanies a
|
||||||
|
copy of the Program in return for a fee.
|
||||||
|
|
||||||
|
END OF TERMS AND CONDITIONS
|
||||||
|
|
||||||
|
How to Apply These Terms to Your New Programs
|
||||||
|
|
||||||
|
If you develop a new program, and you want it to be of the greatest
|
||||||
|
possible use to the public, the best way to achieve this is to make it
|
||||||
|
free software which everyone can redistribute and change under these terms.
|
||||||
|
|
||||||
|
To do so, attach the following notices to the program. It is safest
|
||||||
|
to attach them to the start of each source file to most effectively
|
||||||
|
state the exclusion of warranty; and each file should have at least
|
||||||
|
the "copyright" line and a pointer to where the full notice is found.
|
||||||
|
|
||||||
|
<one line to give the program's name and a brief idea of what it does.>
|
||||||
|
Copyright (C) <year> <name of author>
|
||||||
|
|
||||||
|
This program is free software: you can redistribute it and/or modify
|
||||||
|
it under the terms of the GNU Affero General Public License as published
|
||||||
|
by the Free Software Foundation, either version 3 of the License, or
|
||||||
|
(at your option) any later version.
|
||||||
|
|
||||||
|
This program is distributed in the hope that it will be useful,
|
||||||
|
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
GNU Affero General Public License for more details.
|
||||||
|
|
||||||
|
You should have received a copy of the GNU Affero General Public License
|
||||||
|
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
Also add information on how to contact you by electronic and paper mail.
|
||||||
|
|
||||||
|
If your software can interact with users remotely through a computer
|
||||||
|
network, you should also make sure that it provides a way for users to
|
||||||
|
get its source. For example, if your program is a web application, its
|
||||||
|
interface could display a "Source" link that leads users to an archive
|
||||||
|
of the code. There are many ways you could offer source, and different
|
||||||
|
solutions will be better for different programs; see section 13 for the
|
||||||
|
specific requirements.
|
||||||
|
|
||||||
|
You should also get your employer (if you work as a programmer) or school,
|
||||||
|
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||||
|
For more information on this, and how to apply and follow the GNU AGPL, see
|
||||||
|
<https://www.gnu.org/licenses/>.
|
||||||
|
README.md

@@ -29,19 +29,47 @@ PR-Agent aims to help efficiently review and handle pull requests, by providing
 ## Table of Contents

+- [Getting Started](#getting-started)
 - [News and Updates](#news-and-updates)
 - [Overview](#overview)
-- [Example results](#example-results)
-- [Try it now](#try-it-now)
-- [Qodo Merge](https://qodo-merge-docs.qodo.ai/overview/pr_agent_pro/)
-- [How it works](#how-it-works)
-- [Why use PR-Agent?](#why-use-pr-agent)
-- [Data privacy](#data-privacy)
+- [See It in Action](#see-it-in-action)
+- [Try It Now](#try-it-now)
+- [Qodo Merge 💎](#qodo-merge-)
+- [How It Works](#how-it-works)
+- [Why Use PR-Agent?](#why-use-pr-agent)
+- [Data Privacy](#data-privacy)
 - [Contributing](#contributing)
 - [Links](#links)

+## Getting Started
+
+### Try it Instantly
+Test PR-Agent on any public GitHub repository by commenting `@CodiumAI-Agent /improve`
+
+### GitHub Action
+Add automated PR reviews to your repository with a simple workflow file using [GitHub Action setup guide](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action)
+
+#### Other Platforms
+- [GitLab webhook setup](https://qodo-merge-docs.qodo.ai/installation/gitlab/)
+- [BitBucket app installation](https://qodo-merge-docs.qodo.ai/installation/bitbucket/)
+- [Azure DevOps setup](https://qodo-merge-docs.qodo.ai/installation/azure/)
+
+### CLI Usage
+Run PR-Agent locally on your repository via command line: [Local CLI setup guide](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli)
+
+### Discover Qodo Merge 💎
+Zero-setup hosted solution with advanced features and priority support
+- [Intro and Installation guide](https://qodo-merge-docs.qodo.ai/installation/qodo_merge/)
+- [Plans & Pricing](https://www.qodo.ai/pricing/)
+
 ## News and Updates

+## Jun 3, 2025
+
+Qodo Merge now offers a simplified free tier 💎.
+Organizations can use Qodo Merge at no cost, with a [monthly limit](https://qodo-merge-docs.qodo.ai/installation/qodo_merge/#cloud-users) of 75 PR reviews per organization.
+
 ## May 17, 2025

 - v0.29 was [released](https://github.com/qodo-ai/pr-agent/releases)
@ -70,85 +98,58 @@ Read more about it [here](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discus
|
|||||||
|
|
||||||
Supported commands per platform:
|
Supported commands per platform:
|
||||||
|
|
||||||
| | | GitHub | GitLab | Bitbucket | Azure DevOps |
|
| | | GitHub | GitLab | Bitbucket | Azure DevOps | Gitea |
|
||||||
| ----- |---------------------------------------------------------------------------------------------------------|:------:|:------:|:---------:|:------------:|
|
|---------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------|:------:|:------:|:---------:|:------------:|:-----:|
|
||||||
| TOOLS | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ |
|
| [TOOLS](https://qodo-merge-docs.qodo.ai/tools/) | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | ✅ | ✅ |
|
||||||
| | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | ✅ |
|
| | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ | ✅ |
|
||||||
| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ |
|
| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ | ✅ |
|
||||||
| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ |
|
| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ | |
|
||||||
| | ⮑ [Ask on code lines](https://qodo-merge-docs.qodo.ai/tools/ask/#ask-lines) | ✅ | ✅ | | |
|
| | ⮑ [Ask on code lines](https://qodo-merge-docs.qodo.ai/tools/ask/#ask-lines) | ✅ | ✅ | | | |
|
||||||
| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ |
|
| | [Help Docs](https://qodo-merge-docs.qodo.ai/tools/help_docs/?h=auto#auto-approval) | ✅ | ✅ | ✅ | | |
|
||||||
| | [Help Docs](https://qodo-merge-docs.qodo.ai/tools/help_docs/?h=auto#auto-approval) | ✅ | ✅ | ✅ | |
|
| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ | |
|
||||||
| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | ✅ | |
|
| | [Add Documentation](https://qodo-merge-docs.qodo.ai/tools/documentation/) 💎 | ✅ | ✅ | | | |
|
||||||
| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | ✅ | |
|
| | [Analyze](https://qodo-merge-docs.qodo.ai/tools/analyze/) 💎 | ✅ | ✅ | | | |
|
||||||
| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | |
|
| | [Auto-Approve](https://qodo-merge-docs.qodo.ai/tools/improve/?h=auto#auto-approval) 💎 | ✅ | ✅ | ✅ | | |
|
||||||
| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | |
|
| | [CI Feedback](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/) 💎 | ✅ | | | | |
|
||||||
| | [CI Feedback](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/) 💎 | ✅ | | | |
|
| | [Custom Prompt](https://qodo-merge-docs.qodo.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | | |
|
||||||
| | [PR Documentation](https://qodo-merge-docs.qodo.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
|
| | [Generate Custom Labels](https://qodo-merge-docs.qodo.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | | |
|
||||||
| | [Custom Labels](https://qodo-merge-docs.qodo.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | |
|
| | [Generate Tests](https://qodo-merge-docs.qodo.ai/tools/test/) 💎 | ✅ | ✅ | | | |
|
||||||
| | [Analyze](https://qodo-merge-docs.qodo.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
|
| | [Implement](https://qodo-merge-docs.qodo.ai/tools/implement/) 💎 | ✅ | ✅ | ✅ | | |
|
||||||
| | [Similar Code](https://qodo-merge-docs.qodo.ai/tools/similar_code/) 💎 | ✅ | | | |
|
| | [Scan Repo Discussions](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/) 💎 | ✅ | | | | |
|
||||||
| | [Custom Prompt](https://qodo-merge-docs.qodo.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | |
|
| | [Similar Code](https://qodo-merge-docs.qodo.ai/tools/similar_code/) 💎 | ✅ | | | | |
|
||||||
| | [Test](https://qodo-merge-docs.qodo.ai/tools/test/) 💎 | ✅ | ✅ | | |
|
| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | ✅ | | |
|
||||||
| | [Implement](https://qodo-merge-docs.qodo.ai/tools/implement/) 💎 | ✅ | ✅ | ✅ | |
|
| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | ✅ | | |
|
||||||
| | [Scan Repo Discussions](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/) 💎 | ✅ | | | |
|
| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | | |
|
||||||
| | [Repo Statistics](https://qodo-merge-docs.qodo.ai/tools/repo_statistics/) 💎 | ✅ | | | |
|
| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | | |
|
||||||
| | [Auto-Approve](https://qodo-merge-docs.qodo.ai/tools/improve/?h=auto#auto-approval) 💎 | ✅ | ✅ | ✅ | |
|
| | | | | | | |
|
||||||
| | | | | | |
|
| [USAGE](https://qodo-merge-docs.qodo.ai/usage-guide/) | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ | ✅ |
| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ | ✅ |
| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | | |
| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ | ✅ | ✅ | ✅ | |
| | | | | | | |
| [CORE](https://qodo-merge-docs.qodo.ai/core-abilities/) | [Adaptive and token-aware file patch fitting](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ | |
| | [Auto Best Practices 💎](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/) | ✅ | | | | |
| | [Chat on code suggestions](https://qodo-merge-docs.qodo.ai/core-abilities/chat_on_code_suggestions/) | ✅ | ✅ | | | |
| | [Code Validation 💎](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/) | ✅ | ✅ | ✅ | ✅ | |
| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ | |
| | [Fetching ticket context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) | ✅ | ✅ | ✅ | | |
| | [Global and wiki configurations](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | | |
| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | | |
| | [Incremental Update](https://qodo-merge-docs.qodo.ai/core-abilities/incremental_update/) | ✅ | | | | |
| | [Interactivity](https://qodo-merge-docs.qodo.ai/core-abilities/interactivity/) | ✅ | ✅ | | | |
| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ | |
| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ | |
| | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ | |
| | [PR interactive actions](https://www.qodo.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | | |
| | [RAG context enrichment](https://qodo-merge-docs.qodo.ai/core-abilities/rag_context_enrichment/) | ✅ | | ✅ | | |
| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ | |
| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | | | |
- 💎 means this feature is available only in [Qodo Merge](https://www.qodo.ai/pricing/)

[//]: # (- Support for additional git providers is described in [here](./docs/Full_environments.md))

___
‣ **Auto Description ([`/describe`](https://qodo-merge-docs.qodo.ai/tools/describe/))**: Automatically generates the PR description - title, type, summary, code walkthrough and labels.
\
‣ **Auto Review ([`/review`](https://qodo-merge-docs.qodo.ai/tools/review/))**: Adjustable feedback about the PR: possible issues, security concerns, review effort and more.
\
‣ **Code Suggestions ([`/improve`](https://qodo-merge-docs.qodo.ai/tools/improve/))**: Code suggestions for improving the PR.
\
‣ **Question Answering ([`/ask ...`](https://qodo-merge-docs.qodo.ai/tools/ask/))**: Answers free-text questions about the PR.
\
‣ **Update Changelog ([`/update_changelog`](https://qodo-merge-docs.qodo.ai/tools/update_changelog/))**: Automatically updates the CHANGELOG.md file with the PR changes.
\
‣ **Help Docs ([`/help_docs`](https://qodo-merge-docs.qodo.ai/tools/help_docs/))**: Answers a question on any repository by utilizing the given documentation.
\
‣ **Add Documentation 💎 ([`/add_docs`](https://qodo-merge-docs.qodo.ai/tools/documentation/))**: Generates documentation for methods/functions/classes that changed in the PR.
\
‣ **Generate Custom Labels 💎 ([`/generate_labels`](https://qodo-merge-docs.qodo.ai/tools/custom_labels/))**: Generates custom labels for the PR, based on specific guidelines defined by the user.
\
‣ **Analyze 💎 ([`/analyze`](https://qodo-merge-docs.qodo.ai/tools/analyze/))**: Identifies code components that changed in the PR, and lets you interactively generate tests, docs, and code suggestions for each component.
\
‣ **Custom Prompt 💎 ([`/custom_prompt`](https://qodo-merge-docs.qodo.ai/tools/custom_prompt/))**: Automatically generates custom suggestions for improving the PR code, based on specific guidelines defined by the user.
\
‣ **Generate Tests 💎 ([`/test component_name`](https://qodo-merge-docs.qodo.ai/tools/test/))**: Generates unit tests for a selected component, based on the PR code changes.
\
‣ **CI Feedback 💎 ([`/checks ci_job`](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/))**: Automatically generates feedback and analysis for a failed CI job.
\
‣ **Similar Code 💎 ([`/find_similar_component`](https://qodo-merge-docs.qodo.ai/tools/similar_code/))**: Retrieves the most similar code components from inside the organization's codebase, or from open-source code.
\
‣ **Implement 💎 ([`/implement`](https://qodo-merge-docs.qodo.ai/tools/implement/))**: Generates implementation code from review suggestions.

___

## See It in Action

</div>

<h4><a href="https://github.com/Codium-ai/pr-agent/pull/530">/describe</a></h4>
___

</div>

<hr>

## Try It Now

Try the Claude Sonnet powered PR-Agent instantly on _your public GitHub repository_. Just mention `@CodiumAI-Agent` and add the desired command in any PR comment. The agent will generate a response based on your command.
For example, add a comment to any pull request with the following text:
4. **Extra features** - In addition to the benefits listed above, Qodo Merge emphasizes more customization and uses static code analysis, in addition to LLM logic, to improve results.

See [here](https://qodo-merge-docs.qodo.ai/overview/pr_agent_pro/) for a list of features available in Qodo Merge.
## How It Works

The following diagram illustrates PR-Agent tools and their flow:

Check out the [PR Compression strategy](https://qodo-merge-docs.qodo.ai/core-abilities/#pr-compression-strategy) page for more details on how we convert a code diff to a manageable LLM prompt.
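To make the compression idea concrete, here is a minimal, hypothetical sketch - not PR-Agent's actual algorithm, and both the helper names and the 4-characters-per-token heuristic are invented for illustration. File patches are packed into the prompt until a token budget is exhausted, and files that no longer fit are listed by name only, so the model still sees that they changed:

```python
# Minimal sketch of token-aware diff compression (illustrative only;
# PR-Agent's real strategy is more elaborate - see the linked docs).

def rough_token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)

def compress_diff(patches: dict[str, str], budget: int) -> str:
    """Pack file patches into one prompt under a token budget.

    patches: mapping of filename -> unified diff text.
    Smaller patches are added first; files that no longer fit
    are listed by name only.
    """
    included, overflow = [], []
    used = 0
    for name, patch in sorted(patches.items(), key=lambda kv: len(kv[1])):
        cost = rough_token_count(patch)
        if used + cost <= budget:
            included.append(f"## File: {name}\n{patch}")
            used += cost
        else:
            overflow.append(name)
    prompt = "\n\n".join(included)
    if overflow:
        prompt += "\n\nAdditional modified files (diff omitted): " + ", ".join(overflow)
    return prompt
```

A real implementation would use the model's own tokenizer and smarter, language-aware prioritization, but the budget-packing structure is the same.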
## Why Use PR-Agent?

A reasonable question that can be asked is: `"Why use PR-Agent? What makes it stand out from existing tools?"`

Here are some advantages of PR-Agent:

- We emphasize **real-life practical usage**. Each tool (review, improve, ask, ...) has a single LLM call, no more. We feel that this is critical for realistic team usage - obtaining an answer quickly (~30 seconds) and affordably.
- Our [PR Compression strategy](https://qodo-merge-docs.qodo.ai/core-abilities/#pr-compression-strategy) is a core ability that enables us to effectively tackle both short and long PRs.
- Our JSON prompting strategy enables us to have **modular, customizable tools**. For example, the `/review` tool categories can be controlled via the [configuration](pr_agent/settings/configuration.toml) file. Adding additional categories is easy and accessible.
- We support **multiple git providers** (GitHub, GitLab, BitBucket), **multiple ways** to use the tool (CLI, GitHub Action, GitHub App, Docker, ...), and **multiple models** (GPT, Claude, Deepseek, ...)
## Data Privacy

### Self-hosted PR-Agent
To contribute to the project, get started by reading our Contributing Guide.

## Links

- Discord community: https://discord.com/invite/SgSxuQ65GF
- Qodo site: https://www.qodo.ai/
- Blog: https://www.qodo.ai/blog/
- Troubleshooting: https://www.qodo.ai/blog/technical-faq-and-troubleshooting/
ADD pr_agent pr_agent
CMD ["python", "pr_agent/servers/azuredevops_server_webhook.py"]

FROM base AS gitea_app
ADD pr_agent pr_agent
CMD ["python", "-m", "gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-c", "pr_agent/servers/gunicorn_config.py", "pr_agent.servers.gitea_app:app"]

FROM base AS test
ADD requirements-dev.txt .
RUN pip install --no-cache-dir -r requirements-dev.txt && rm requirements-dev.txt
The extension is powered by top code models like Claude 3.7 Sonnet and o4-mini. All the extension's features are free to use on public repositories.

For private repositories, you will need to install [Qodo Merge](https://github.com/apps/qodo-merge-pro){:target="_blank"} in addition to the extension.
For a demonstration of how to install Qodo Merge and use it with the Chrome extension, please refer to this [tutorial video](https://codium.ai/images/pr_agent/private_repos.mp4){:target="_blank"}.

<img src="https://codium.ai/images/pr_agent/PR-AgentChat.gif" width="768">
docs/docs/core-abilities/chat_on_code_suggestions.md (new file, 55 lines)
# Chat on code suggestions 💎

`Supported Git Platforms: GitHub, GitLab`

## Overview

Qodo Merge implements an orchestrator agent that enables interactive code discussions, listening and responding to comments without requiring explicit tool calls.
The orchestrator intelligently analyzes your responses to determine whether you want to implement a suggestion, ask a question, or request help, then delegates to the appropriate specialized tool.

To minimize unnecessary notifications and maintain focused discussions, the orchestrator agent will only respond to comments made directly within the inline code suggestion discussions it has created (`/improve`) or within discussions initiated by the `/implement` command.

## Getting Started

### Setup

Enable interactive code discussions by adding the following to your configuration file (the default is `true`):

```toml
[pr_code_suggestions]
enable_chat_in_code_suggestions = true
```

### Activation

#### `/improve`

To obtain dynamic responses, the following steps are required:

1. Run the `/improve` command (mostly automatic)
2. Check the `/improve` recommendation checkboxes (_Apply this suggestion_) to have Qodo Merge generate a new inline code suggestion discussion
3. The orchestrator agent will then automatically listen and reply to comments within the discussion without requiring additional commands

#### `/implement`

To obtain dynamic responses, the following steps are required:

1. Select code lines in the PR diff and run the `/implement` command
2. Wait for Qodo Merge to generate a new inline code suggestion
3. The orchestrator agent will then automatically listen and reply to comments within the discussion without requiring additional commands

## Explore the available interaction patterns

!!! tip "Tip: Direct the agent with keywords"
    Use "implement" or "apply" for code generation. Use "explain", "why", or "how" for information and help.

=== "Asking for Details"

    {width=512}

=== "Implementing Suggestions"

    {width=512}

=== "Providing Additional Help"

    {width=512}
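The keyword guidance in the tip above can be pictured as a tiny intent router. This is a deliberately simplified, hypothetical sketch - the real orchestrator agent classifies intent with an LLM rather than keyword matching, and all names here are invented:

```python
# Simplified sketch of intent routing for comment replies.
# The actual orchestrator agent uses an LLM to classify intent;
# keyword matching is used here only for illustration.

IMPLEMENT_KEYWORDS = ("implement", "apply")
EXPLAIN_KEYWORDS = ("explain", "why", "how")

def route_comment(comment: str) -> str:
    """Return which specialized behavior a reply should be delegated to."""
    text = comment.lower()
    if any(word in text for word in IMPLEMENT_KEYWORDS):
        return "code_generation"   # e.g. delegate to /implement
    if any(word in text for word in EXPLAIN_KEYWORDS):
        return "explanation"       # answer with details / reasoning
    return "general_help"          # fall back to generic assistance
```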
**Ticket systems supported:**

- [GitHub](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#github-issues-integration)
- [Jira (💎)](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#jira-integration)
- [Linear (💎)](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#linear-integration)

**Ticket data fetched:**

The recommended way to authenticate with Jira Cloud is to install the Qodo Merge app.

Installation steps:

1. Go to the [Qodo Merge integrations page](https://app.qodo.ai/qodo-merge/integrations)
2. Click the Connect **Jira Cloud** button to connect the Jira Cloud app
3. Click the `accept` button.<br>
   {width=384}
4. After installing the app, you will be redirected to the Qodo Merge registration page, where you will see a success message.<br>
   {width=384}
5. Qodo Merge will now be able to fetch Jira ticket context for your PRs.

**2) Email/Token Authentication**
The following steps will help you check if basic auth is working correctly.

2. Run the following Python script (after replacing the placeholders with your actual values):

???- example "Script to validate basic auth"

    ```python
    from jira import JIRA
    ```
The following steps will help you check if the token is working correctly.

2. Run the following Python script (after replacing the placeholders with your actual values):

???- example "Script to validate PAT token"

    ```python
    from jira import JIRA
    # ...
    print(f"Error fetching JIRA ticket details: {e}")
    ```
### Multi-JIRA Server Configuration 💎

Qodo Merge supports connecting to multiple JIRA servers using different authentication methods.

=== "Email/Token (Basic Auth)"

    Configure multiple servers using Email/Token authentication:

    - `jira_servers`: List of JIRA server URLs
    - `jira_api_token`: List of API tokens (for Cloud) or passwords (for Data Center)
    - `jira_api_email`: List of emails (for Cloud) or usernames (for Data Center)
    - `jira_base_url`: Default server for ticket IDs like `PROJ-123`. Each repository can set its own `jira_base_url` (in a local config file) to choose which server to use by default.

    **Example Configuration:**

    ```toml
    [jira]
    # Server URLs
    jira_servers = ["https://company.atlassian.net", "https://datacenter.jira.com"]

    # API tokens/passwords
    jira_api_token = ["cloud_api_token_here", "datacenter_password"]

    # Emails/usernames (both required)
    jira_api_email = ["user@company.com", "datacenter_username"]

    # Default server for ticket IDs
    jira_base_url = "https://company.atlassian.net"
    ```

=== "PAT Auth"

    Configure multiple servers using Personal Access Token authentication:

    - `jira_servers`: List of JIRA server URLs
    - `jira_api_token`: List of PAT tokens
    - `jira_api_email`: Not needed (can be omitted or left empty)
    - `jira_base_url`: Default server for ticket IDs like `PROJ-123`. Each repository can set its own `jira_base_url` (in a local config file) to choose which server to use by default.

    **Example Configuration:**

    ```toml
    [jira]
    # Server URLs
    jira_servers = ["https://server1.jira.com", "https://server2.jira.com"]

    # PAT tokens only
    jira_api_token = ["pat_token_1", "pat_token_2"]

    # Default server for ticket IDs
    jira_base_url = "https://server1.jira.com"
    ```

    **Mixed Authentication (Email/Token + PAT):**

    ```toml
    [jira]
    jira_servers = ["https://company.atlassian.net", "https://server.jira.com"]
    jira_api_token = ["cloud_api_token", "server_pat_token"]
    jira_api_email = ["user@company.com", ""] # Empty for PAT
    ```

=== "Jira Cloud App"

    For Jira Cloud instances using App Authentication:

    1. Install the Qodo Merge app on each JIRA Cloud instance you want to connect to
    2. Set the default server for ticket ID resolution:

    ```toml
    [jira]
    jira_base_url = "https://primary-team.atlassian.net"
    ```

    Full URLs (e.g., `https://other-team.atlassian.net/browse/TASK-456`) will automatically use the correct connected instance.
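Conceptually, the parallel lists are matched up per server. The following hypothetical sketch (not Qodo Merge's actual code; the function and variable names are invented) shows how a ticket reference might be resolved to a server and credential: a full URL selects the server it starts with, while a bare ID falls back to `jira_base_url`:

```python
# Sketch: resolve which configured JIRA server (and credential) to use
# for a ticket reference. Hypothetical logic, for illustration only.

def resolve_server(ticket_ref: str, servers: list[str],
                   tokens: list[str], base_url: str) -> tuple[str, str]:
    """Return (server_url, api_token) for a full ticket URL or a bare ID."""
    if ticket_ref.startswith("http"):
        # Full URLs pick the server whose base URL they start with.
        for server, token in zip(servers, tokens):
            if ticket_ref.startswith(server):
                return server, token
        raise ValueError(f"No configured server matches {ticket_ref}")
    # Bare IDs like PROJ-123 use the configured default server.
    idx = servers.index(base_url)
    return base_url, tokens[idx]
```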
### How to link a PR to a Jira ticket

To integrate with Jira, you can link your PR to a ticket using either of these methods:

Name your branch with the ticket ID as a prefix (e.g., `ISSUE-123-feature-description`):

```toml
[jira]
jira_base_url = "https://<JIRA_ORG>.atlassian.net"
```
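The branch-prefix convention above can be approximated with a simple pattern match. This is an illustrative sketch only - the actual detection logic in Qodo Merge may differ:

```python
import re
from typing import Optional

# Sketch: detect a Jira-style ticket ID (e.g. ISSUE-123) in a branch name.
# Illustrative only - real detection logic may differ.

TICKET_RE = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

def ticket_from_branch(branch: str) -> Optional[str]:
    """Return the first ticket ID found in a branch name, if any."""
    match = TICKET_RE.search(branch)
    return match.group(1) if match else None
```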
## Linear Integration 💎

### Linear App Authentication

The recommended way to authenticate with Linear is to connect the Linear app through the Qodo Merge portal.

Installation steps:

1. Go to the [Qodo Merge integrations page](https://app.qodo.ai/qodo-merge/integrations)
2. Navigate to the **Integrations** tab
3. Click on the **Linear** button to connect the Linear app
4. Follow the authentication flow to authorize Qodo Merge to access your Linear workspace
5. Once connected, Qodo Merge will be able to fetch Linear ticket context for your PRs

### How to link a PR to a Linear ticket

Qodo Merge will automatically detect Linear tickets using either of these methods:

**Method 1: Description Reference**

Include a ticket reference in your PR description using either:

- The complete Linear ticket URL: `https://linear.app/[ORG_ID]/issue/[TICKET_ID]`
- The shortened ticket ID: `[TICKET_ID]` (e.g., `ABC-123`) - requires `linear_base_url` configuration (see below)

**Method 2: Branch Name Detection**

Name your branch with the ticket ID as a prefix (e.g., `ABC-123-feature-description` or `feature/ABC-123/feature-description`).

!!! note "Linear Base URL"
    For shortened ticket IDs or branch detection (method 2), you must configure the Linear base URL in your configuration file under the `[linear]` section:

    ```toml
    [linear]
    linear_base_url = "https://linear.app/[ORG_ID]"
    ```

    Replace `[ORG_ID]` with your Linear organization identifier.
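Both reference forms can be normalized to a ticket ID with a small parser. This is a hypothetical sketch, assuming only the URL shape documented above:

```python
import re
from typing import Optional

# Sketch: extract a Linear ticket ID from either a full ticket URL or a
# bare shortened ID. Illustrative only - not the actual implementation.

URL_RE = re.compile(r"https://linear\.app/[^/]+/issue/([A-Z][A-Z0-9]*-\d+)")
ID_RE = re.compile(r"^[A-Z][A-Z0-9]*-\d+$")

def linear_ticket_id(reference: str) -> Optional[str]:
    """Return the ticket ID (e.g. ABC-123) from a URL or a bare ID."""
    match = URL_RE.search(reference)
    if match:
        return match.group(1)
    return reference if ID_RE.match(reference) else None
```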
docs/docs/core-abilities/incremental_update.md (new file, 33 lines)
# Incremental Update 💎

`Supported Git Platforms: GitHub`

## Overview

The Incremental Update feature helps users focus on feedback for their newest changes, making large PRs more manageable.

### How it works

=== "Update Option on Subsequent Commits"

    {width=512}

=== "Generation of Incremental Update"

    {width=512}

___

Whenever new commits are pushed following a recent code suggestions report for this PR, an Update button appears (as seen above).

Once the user clicks the button:

- The `improve` tool identifies the new changes (the "delta")
- Provides suggestions on these recent changes
- Combines these suggestions with the overall PR feedback, prioritizing delta-related comments
- Marks delta-related comments with a textual indication followed by an asterisk (*) linking to this page, so they can easily be identified

### Benefits for Developers

- Focus on what matters: see feedback on the newest code first
- Clearer organization: comments on recent changes are clearly marked
- Better workflow: address feedback more systematically, starting with recent changes
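The combination step above can be sketched as a simple priority merge. This is illustrative pseudologic only - not Qodo Merge's actual implementation, and the marker text is invented:

```python
# Sketch: merge overall PR suggestions with delta (newest-commit) ones,
# putting delta-related comments first and marking them with an asterisk.
# Illustrative only - not the actual implementation.

def combine_suggestions(overall: list[str], delta: list[str]) -> list[str]:
    marked_delta = [f"{text} (new code)*" for text in delta]
    return marked_delta + overall  # delta-related feedback is prioritized
```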
Qodo Merge utilizes a variety of core abilities to provide a comprehensive and efficient code review experience. These abilities include:

- [Auto best practices](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/)
- [Chat on code suggestions](https://qodo-merge-docs.qodo.ai/core-abilities/chat_on_code_suggestions/)
- [Code validation](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/)
- [Compression strategy](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/)
- [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/)
- [Fetching ticket context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/)
- [Impact evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/)
- [Incremental Update](https://qodo-merge-docs.qodo.ai/core-abilities/incremental_update/)
- [Interactivity](https://qodo-merge-docs.qodo.ai/core-abilities/interactivity/)
- [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/)
- [RAG context enrichment](https://qodo-merge-docs.qodo.ai/core-abilities/rag_context_enrichment/)
In order to enable the RAG feature, add the following lines to your configuration file:

```toml
enable_rag=true
```

???+ example "RAG Arguments Options"

    <table>
    <tr>
PR-Agent and Qodo Merge offer extensive pull request functionalities across various git providers:

| | | GitHub | GitLab | Bitbucket | Azure DevOps | Gitea |
| ----- |---------------------------------------------------------------------------------------------------------------------|:------:|:------:|:---------:|:------------:|:-----:|
| [TOOLS](https://qodo-merge-docs.qodo.ai/tools/) | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ | |
| | ⮑ [Ask on code lines](https://qodo-merge-docs.qodo.ai/tools/ask/#ask-lines) | ✅ | ✅ | | | |
| | [Help Docs](https://qodo-merge-docs.qodo.ai/tools/help_docs/) | ✅ | ✅ | ✅ | | |
| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ | |
| | [Add Documentation](https://qodo-merge-docs.qodo.ai/tools/documentation/) 💎 | ✅ | ✅ | | | |
| | [Analyze](https://qodo-merge-docs.qodo.ai/tools/analyze/) 💎 | ✅ | ✅ | | | |
| | [Auto-Approve](https://qodo-merge-docs.qodo.ai/tools/improve/?h=auto#auto-approval) 💎 | ✅ | ✅ | ✅ | | |
| | [CI Feedback](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/) 💎 | ✅ | | | | |
|
| | [Custom Prompt](https://qodo-merge-docs.qodo.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | | |
|
||||||
| | [PR Documentation](https://qodo-merge-docs.qodo.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
|
| | [Generate Custom Labels](https://qodo-merge-docs.qodo.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | | |
|
||||||
| | [Custom Labels](https://qodo-merge-docs.qodo.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | |
|
| | [Generate Tests](https://qodo-merge-docs.qodo.ai/tools/test/) 💎 | ✅ | ✅ | | | |
|
||||||
| | [Analyze](https://qodo-merge-docs.qodo.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
|
| | [Implement](https://qodo-merge-docs.qodo.ai/tools/implement/) 💎 | ✅ | ✅ | ✅ | | |
|
||||||
| | [Similar Code](https://qodo-merge-docs.qodo.ai/tools/similar_code/) 💎 | ✅ | | | |
|
| | [Scan Repo Discussions](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/) 💎 | ✅ | | | | |
|
||||||
| | [Custom Prompt](https://qodo-merge-docs.qodo.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | |
|
| | [Similar Code](https://qodo-merge-docs.qodo.ai/tools/similar_code/) 💎 | ✅ | | | | |
|
||||||
| | [Test](https://qodo-merge-docs.qodo.ai/tools/test/) 💎 | ✅ | ✅ | | |
|
| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | ✅ | | |
|
||||||
| | [Implement](https://qodo-merge-docs.qodo.ai/tools/implement/) 💎 | ✅ | ✅ | ✅ | |
|
| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | ✅ | | |
|
||||||
| | [Scan Repo Discussions](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/) 💎 | ✅ | | | |
|
| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | | |
|
||||||
| | [Repo Statistics](https://qodo-merge-docs.qodo.ai/tools/repo_statistics/) 💎 | ✅ | | | |
|
| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | | |
|
||||||
| | [Auto-Approve](https://qodo-merge-docs.qodo.ai/tools/improve/?h=auto#auto-approval) 💎 | ✅ | ✅ | ✅ | |
|
| | | | | | | |
|
||||||
| | | | | | |
|
| [USAGE](https://qodo-merge-docs.qodo.ai/usage-guide/) | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ | ✅ |
|
||||||
| USAGE | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ |
|
| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ | ✅ |
|
||||||
| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ |
|
| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | | |
|
||||||
| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | |
|
| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ | ✅ | ✅ | ✅ | |
|
||||||
| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ | ✅ | ✅ | ✅ |
|
| | | | | | | |
|
||||||
| | | | | | |
|
| [CORE](https://qodo-merge-docs.qodo.ai/core-abilities/) | [Adaptive and token-aware file patch fitting](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ | |
|
||||||
| CORE | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ |
|
| | [Auto Best Practices 💎](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/) | ✅ | | | | |
|
||||||
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ |
|
| | [Chat on code suggestions](https://qodo-merge-docs.qodo.ai/core-abilities/chat_on_code_suggestions/) | ✅ | ✅ | | | |
|
||||||
| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ |
|
| | [Code Validation 💎](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/) | ✅ | ✅ | ✅ | ✅ | |
|
||||||
| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ |
|
| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ | |
|
||||||
| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ |
|
| | [Fetching ticket context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) | ✅ | ✅ | ✅ | | |
|
||||||
| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ |
|
| | [Global and wiki configurations](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | | |
|
||||||
| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | | |
|
| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | | |
|
||||||
| | [Global and wiki configurations](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | |
|
| | [Incremental Update 💎](https://qodo-merge-docs.qodo.ai/core-abilities/incremental_update/) | ✅ | | | | |
|
||||||
| | [PR interactive actions](https://www.qodo.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | |
|
| | [Interactivity](https://qodo-merge-docs.qodo.ai/core-abilities/interactivity/) | ✅ | ✅ | | | |
|
||||||
| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | |
|
| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ | |
|
||||||
| | [Code Validation 💎](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/) | ✅ | ✅ | ✅ | ✅ |
|
| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ | |
|
||||||
| | [Auto Best Practices 💎](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/) | ✅ | | | |
|
| | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ | |
|
||||||
|
| | [PR interactive actions](https://www.qodo.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | | |
|
||||||
|
| | [RAG context enrichment](https://qodo-merge-docs.qodo.ai/core-abilities/rag_context_enrichment/) | ✅ | | ✅ | | |
|
||||||
|
| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ | |
|
||||||
|
| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | | | |
|
||||||
!!! note "💎 means Qodo Merge only"
|
!!! note "💎 means Qodo Merge only"
|
||||||
All along the documentation, 💎 marks a feature available only in [Qodo Merge](https://www.codium.ai/pricing/){:target="_blank"}, and not in the open-source version.
|
All along the documentation, 💎 marks a feature available only in [Qodo Merge](https://www.codium.ai/pricing/){:target="_blank"}, and not in the open-source version.
|
||||||
|
|
||||||
|
docs/docs/installation/gitea.md (new file)
@@ -0,0 +1,46 @@
## Run a Gitea webhook server

1. In Gitea, create a new user and give it the "Reporter" role ("Developer" if using the Pro version of the agent) for the intended group or project.

2. For the user from step 1, generate a `personal_access_token` with `api` access.

3. Generate a random secret for your app, and save it for later (`webhook_secret`). For example, you can use:

    ```bash
    WEBHOOK_SECRET=$(python -c "import secrets; print(secrets.token_hex(10))")
    ```

4. Clone this repository:

    ```bash
    git clone https://github.com/qodo-ai/pr-agent.git
    ```

5. Prepare variables and secrets. Skip this step if you plan on setting these as environment variables when running the agent:

    1. In the configuration file/variables:
        - Set `config.git_provider` to "gitea"

    2. In the secrets file/variables:
        - Set your AI model key in the respective section
        - In the [Gitea] section, set `personal_access_token` (with the token from step 2) and `webhook_secret` (with the secret from step 3)

6. Build a Docker image for the app and optionally push it to a Docker repository. We'll use Dockerhub as an example:

    ```bash
    docker build -f docker/Dockerfile -t codiumai/pr-agent:gitea_app --target gitea_app .
    docker push codiumai/pr-agent:gitea_app # Push to your Docker repository
    ```

7. Set the environment variables; the method depends on your docker runtime. Skip this step if you included your secrets/configuration directly in the Docker image.

    ```bash
    CONFIG__GIT_PROVIDER=gitea
    GITEA__PERSONAL_ACCESS_TOKEN=<personal_access_token>
    GITEA__WEBHOOK_SECRET=<webhook_secret>
    GITEA__URL=https://gitea.com # Or self host
    OPENAI__KEY=<your_openai_api_key>
    ```

8. Create a webhook in your Gitea project. Set the URL to `http[s]://<PR_AGENT_HOSTNAME>/api/v1/gitea_webhooks`, the secret token to the generated secret from step 3, and enable the triggers `push`, `comments` and `pull request events`.

9. Test your installation by opening a pull request or commenting on a pull request using one of PR Agent's commands.
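Gitea includes an HMAC-SHA256 signature of each delivery's body in the `X-Gitea-Signature` header, computed with the webhook secret from step 3. A minimal sketch of the check a webhook server performs before acting on a request (the helper below is illustrative, not PR-Agent's actual handler):

```python
import hashlib
import hmac


def verify_gitea_signature(payload: bytes, webhook_secret: str, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 hex digest of the raw request body and
    compare it, in constant time, against the X-Gitea-Signature header."""
    expected = hmac.new(webhook_secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Requests whose signature does not match the configured `webhook_secret` should be rejected before any command is executed.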
@@ -193,9 +193,8 @@ For example: `GITHUB.WEBHOOK_SECRET` --> `GITHUB__WEBHOOK_SECRET`

3. Push image to ECR

    ```shell
    docker tag codiumai/pr-agent:serverless <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:serverless
    docker push <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:serverless
    ```

4. Create a lambda function that uses the uploaded image. Set the lambda timeout to be at least 3m.
@@ -204,6 +203,28 @@ For example: `GITHUB.WEBHOOK_SECRET` --> `GITHUB__WEBHOOK_SECRET`

7. Go back to steps 8-9 of [Method 5](#run-as-a-github-app) with the function url as your Webhook URL.
The Webhook URL would look like `https://<LAMBDA_FUNCTION_URL>/api/v1/github_webhooks`

### Using AWS Secrets Manager

For production Lambda deployments, use AWS Secrets Manager instead of environment variables:

1. Create a secret in AWS Secrets Manager with JSON format like this:

    ```json
    {
        "openai.key": "sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "github.webhook_secret": "your-webhook-secret-from-step-2",
        "github.private_key": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA...\n-----END RSA PRIVATE KEY-----"
    }
    ```

2. Add the IAM permission `secretsmanager:GetSecretValue` to your Lambda execution role
3. Set these environment variables in your Lambda:

    ```bash
    AWS_SECRETS_MANAGER__SECRET_ARN=arn:aws:secretsmanager:us-east-1:123456789012:secret:pr-agent-secrets-AbCdEf
    CONFIG__SECRET_PROVIDER=aws_secrets_manager
    ```
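The dotted keys in the secret's JSON mirror PR-Agent's `section.key` configuration layout. As a rough sketch of how such a payload maps onto configuration sections (`parse_secret_payload` is a hypothetical illustration, not PR-Agent's actual secret loader):

```python
import json


def parse_secret_payload(secret_string: str) -> dict:
    """Split dotted keys like 'github.webhook_secret' into nested
    {section: {key: value}} mappings, mirroring the TOML config layout."""
    nested: dict = {}
    for dotted_key, value in json.loads(secret_string).items():
        section, _, key = dotted_key.partition(".")
        nested.setdefault(section, {})[key] = value
    return nested
```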
---

## AWS CodeCommit Setup

@@ -9,6 +9,7 @@ There are several ways to use self-hosted PR-Agent:

- [GitLab integration](./gitlab.md)
- [BitBucket integration](./bitbucket.md)
- [Azure DevOps integration](./azure.md)
- [Gitea integration](./gitea.md)

## Qodo Merge 💎
@@ -1,7 +1,7 @@

To run PR-Agent locally, you first need to acquire two keys:

1. An OpenAI key from [here](https://platform.openai.com/api-keys){:target="_blank"}, with access to GPT-4 and o4-mini (or a key for other [language models](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/), if you prefer).
2. A personal access token from your Git platform (GitHub, GitLab, BitBucket, Gitea) with repo scope. A GitHub token, for example, can be issued from [here](https://github.com/settings/tokens){:target="_blank"}

## Using Docker image

@@ -40,6 +40,19 @@ To invoke a tool (for example `review`), you can run PR-Agent directly from the Docker image:

docker run --rm -it -e CONFIG.GIT_PROVIDER=bitbucket -e OPENAI.KEY=$OPENAI_API_KEY -e BITBUCKET.BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN codiumai/pr-agent:latest --pr_url=<pr_url> review
```

- For Gitea:

    ```bash
    docker run --rm -it -e OPENAI.KEY=<your key> -e CONFIG.GIT_PROVIDER=gitea -e GITEA.PERSONAL_ACCESS_TOKEN=<your token> codiumai/pr-agent:latest --pr_url <pr_url> review
    ```

    If you have a dedicated Gitea instance, you also need to pass the custom URL as a variable:

    ```bash
    -e GITEA.URL=<your gitea instance url>
    ```

For other git providers, update `CONFIG.GIT_PROVIDER` accordingly and check the [`pr_agent/settings/.secrets_template.toml`](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/.secrets_template.toml) file for the expected environment variable names and values.

### Utilizing environment variables
@@ -1,20 +1,20 @@

Qodo Merge is a versatile application compatible with GitHub, GitLab, and BitBucket, hosted by QodoAI.
See [here](https://qodo-merge-docs.qodo.ai/overview/pr_agent_pro/) for more details about the benefits of using Qodo Merge.

## Usage and Licensing

### Cloud Users

Non-paying users will enjoy feedback on up to 75 PRs per git organization per month. Above this limit, PRs will not receive feedback until a new month begins.

For unlimited access, user licenses (seats) are required. Each user requires an individual seat license.
After purchasing seats, the team owner can assign them to specific users through the management portal.

With an assigned seat, users can seamlessly deploy the application across any of their code repositories in a git organization, and receive feedback on all their PRs.

### Enterprise Account

For companies that require an Enterprise account, please [contact](https://www.qodo.ai/contact/#pricing) us to initiate a trial period and to discuss pricing and licensing options.

## Install Qodo Merge for GitHub
@@ -95,4 +95,4 @@ Open a new merge request or add a MR comment with one of Qodo Merge's commands

### GitLab Server

For [limited free usage](https://qodo-merge-docs.qodo.ai/installation/qodo_merge/#cloud-users) on a private GitLab Server, the same [installation steps](#gitlab-cloud) as for GitLab Cloud apply. For unlimited usage, you will need to [contact](https://www.qodo.ai/contact/#pricing) Qodo about moving to an Enterprise account.
@@ -1,7 +1,11 @@

### Overview

[Qodo Merge](https://www.codium.ai/pricing/){:target="_blank"} is a hosted version of the open-source [PR-Agent](https://github.com/Codium-ai/pr-agent){:target="_blank"}.
It is designed for companies and teams that require additional features and capabilities.

Free users receive a monthly quota of 75 PR reviews per git organization, while unlimited usage requires a paid subscription. See [details](https://qodo-merge-docs.qodo.ai/installation/qodo_merge/#cloud-users).

Qodo Merge provides the following benefits:

1. **Fully managed** - We take care of everything for you - hosting, models, regular updates, and more. Installation is as simple as signing up and adding the Qodo Merge app to your GitHub\GitLab\BitBucket repo.

@@ -45,7 +49,7 @@ Here are additional tools that are available only for Qodo Merge users:

### Supported languages

Qodo Merge leverages the world's leading code models, such as Claude 4 Sonnet, o4-mini and Gemini-2.5-Pro.
As a result, its primary tools such as `describe`, `review`, and `improve`, as well as the PR-chat feature, support virtually all programming languages.

For specialized commands that require static code analysis, Qodo Merge offers support for specific languages. For more details about features that require static code analysis, please refer to the [documentation](https://qodo-merge-docs.qodo.ai/tools/analyze/#overview).
@@ -1,22 +1,23 @@

# Recent Updates and Future Roadmap

`Page last updated: 2025-06-01`

This page summarizes recent enhancements to Qodo Merge (last three months).

It also outlines our development roadmap for the upcoming three months. Please note that the roadmap is subject to change, and features may be adjusted, added, or reprioritized.

=== "Recent Updates"

    - **Simplified Free Tier**: Qodo Merge now offers a simplified free tier with a monthly limit of 75 PR reviews per organization, replacing the previous two-week trial. ([Learn more](https://qodo-merge-docs.qodo.ai/installation/qodo_merge/#cloud-users))
    - **CLI Endpoint**: A new Qodo Merge endpoint that accepts lists of before/after code changes, executes Qodo Merge commands, and returns the results. Currently available for enterprise customers. Contact [Qodo](https://www.qodo.ai/contact/) for more information.
    - **Linear tickets support**: Qodo Merge now supports Linear tickets. ([Learn more](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#linear-integration))
    - **Smart Update**: Upon PR updates, Qodo Merge will offer tailored code suggestions, addressing both the entire PR and the specific incremental changes since the last feedback. ([Learn more](https://qodo-merge-docs.qodo.ai/core-abilities/incremental_update/))
    - **Qodo Merge Pull Request Benchmark** - evaluating the performance of LLMs in analyzing pull request code ([Learn more](https://qodo-merge-docs.qodo.ai/pr_benchmark/))
    - **Chat on Suggestions**: Users can now chat with code suggestions ([Learn more](https://qodo-merge-docs.qodo.ai/tools/improve/#chat-on-code-suggestions))
    - **Scan Repo Discussions Tool**: A new tool that analyzes past code discussions to generate a `best_practices.md` file, distilling key insights and recommendations. ([Learn more](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/))

=== "Future Roadmap"

    - **Best Practices Hierarchy**: Introducing support for structured best practices, such as for folders in monorepos or a unified best practice file for a group of repositories.
    - **Enhanced `review` tool**: Enhancing the `review` tool to validate compliance across multiple categories, including security, tickets, and custom best practices.
    - **Smarter context retrieval**: Leverage AST and LSP analysis to gather relevant context from across the entire repository.
    - **Enhanced portal experience**: Improved user experience in the Qodo Merge portal with new options and capabilities.
@@ -56,9 +56,24 @@ Everything below this marker is treated as previously auto-generated content and

{width=512}

### Sequence Diagram Support

When the `enable_pr_diagram` option is enabled in your configuration, the `/describe` tool will include a `Mermaid` sequence diagram in the PR description.

This diagram represents interactions between components/functions based on the diff content.

### How to enable

In your configuration:

```toml
[pr_description]
enable_pr_diagram = true
```

## Configuration options

???+ example "Possible configurations"

<table>
<tr>

@@ -109,6 +124,10 @@ Everything below this marker is treated as previously auto-generated content and

<td><b>enable_help_text</b></td>
<td>If set to true, the tool will display a help text in the comment. Default is false.</td>
</tr>
<tr>
<td><b>enable_pr_diagram</b></td>
<td>If set to true, the tool will generate a horizontal Mermaid flowchart summarizing the main pull request changes. This field remains empty if not applicable. Default is false.</td>
</tr>
</table>

## Inline file summary 💎
@@ -26,6 +26,26 @@ You can state a name of a specific component in the PR to get documentation only

/add_docs component_name
```

## Manual triggering

Comment `/add_docs` on a PR to invoke it manually.

## Automatic triggering

To automatically run the `add_docs` tool when a pull request is opened, define in a [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/):

```toml
[github_app]
pr_commands = [
    "/add_docs",
    ...
]
```

The `pr_commands` list defines commands that run automatically when a PR is opened.
Since this is under the [github_app] section, it only applies when using the Qodo Merge GitHub App in GitHub environments.

## Configuration options

- `docs_style`: The exact style of the documentation (for Python docstrings). You can choose between: `google`, `numpy`, `sphinx`, `restructuredtext`, `plain`. Default is `sphinx`.
@@ -7,50 +7,50 @@ It leverages LLM technology to transform PR comments and review suggestions into

## Usage Scenarios

=== "For Reviewers"

    Reviewers can request code changes by:

    1. Selecting the code block to be modified.
    2. Adding a comment with the syntax:

    ```
    /implement <code-change-description>
    ```

=== "For PR Authors"

    PR authors can implement suggested changes by replying to a review comment using either:

    1. Add specific implementation details as described above

    ```
    /implement <code-change-description>
    ```

    2. Use the original review comment as instructions

    ```
    /implement
    ```

=== "For Referencing Comments"

    You can reference and implement changes from any comment by:

    ```
    /implement <link-to-review-comment>
    ```

    Note that the implementation will occur within the review discussion thread.

## Configuration options

- Use `/implement` to implement code changes within and based on the review discussion.
- Use `/implement <code-change-description>` inside a review discussion to implement specific instructions.
@@ -144,84 +144,216 @@ Use triple quotes to write multi-line instructions. Use bullet points or numbers

> `💎 feature. Platforms supported: GitHub, GitLab, Bitbucket`

Qodo Merge supports both simple and hierarchical best practices configurations to provide guidance to the AI model for generating relevant code suggestions.

???- tip "Writing effective best practices files"

    The following guidelines apply to all best practices files:

    - Write clearly and concisely
    - Include brief code examples when helpful, with before/after patterns
    - Focus on project-specific guidelines that will result in relevant suggestions you actually want to get
    - Keep each file relatively short, under 800 lines, since:
        - AI models may not process very long documents effectively
        - Long files tend to contain generic guidelines already known to AI
    - Use a pattern-based structure rather than simple bullet points for better clarity
???- tip "Example of a best practices file"

    Pattern 1: Add proper error handling with try-except blocks around external function calls.

    Example code before:

    ```python
    # Some code that might raise an exception
    return process_pr_data(data)
    ```

    Example code after:

    ```python
    try:
        # Some code that might raise an exception
        return process_pr_data(data)
    except Exception as e:
        logger.exception("Failed to process request", extra={"error": e})
    ```

    Pattern 2: Add defensive null/empty checks before accessing object properties or performing operations on potentially null variables to prevent runtime errors.

    Example code before:

    ```python
    def get_pr_code(pr_data):
        if "changed_code" in pr_data:
            return pr_data.get("changed_code", "")
        return ""
    ```

    Example code after:

    ```python
    def get_pr_code(pr_data):
        if pr_data is None:
            return ""
        if "changed_code" in pr_data:
            return pr_data.get("changed_code", "")
        return ""
    ```
#### Local best practices

For basic usage, create a `best_practices.md` file in your repository's root directory containing a list of best practices, coding standards, and guidelines specific to your repository.

The AI model will use this `best_practices.md` file as a reference, and in case the PR code violates any of the guidelines, it will create additional suggestions, with a dedicated label: `Organization best practice`.
#### Global hierarchical best practices

For organizations managing multiple repositories with different requirements, Qodo Merge supports a hierarchical best practices system using a dedicated global configuration repository.

**Supported scenarios:**

1. **Standalone repositories**: Individual repositories can have their own specific best practices tailored to their unique requirements
2. **Groups of repositories**: Repositories can be mapped to shared group-level best practices for consistent standards across similar projects
3. **Monorepos with subprojects**: Large monorepos can have both repository-level and subproject-level best practices, with automatic path-based matching

#### Setting up global hierarchical best practices

1\. Create a new repository named `pr-agent-settings` in your organization/workspace.
2\. Build the folder hierarchy in your `pr-agent-settings` repository, for example:

```bash
pr-agent-settings/
├── metadata.yaml                # Maps repos/folders to best practice paths
└── codebase_standards/          # Root for all best practice definitions
    ├── global/                  # Global rules, inherited widely
    │   └── best_practices.md
    ├── groups/                  # For groups of repositories
    │   ├── frontend_repos/
    │   │   └── best_practices.md
    │   ├── backend_repos/
    │   │   └── best_practices.md
    │   └── ...
    ├── qodo-merge/              # For standalone repositories
    │   └── best_practices.md
    ├── qodo-monorepo/           # For monorepo-specific rules
    │   ├── best_practices.md    # Root-level monorepo rules
    │   ├── qodo-github/         # Subproject best practices
    │   │   └── best_practices.md
    │   └── qodo-gitlab/         # Another subproject
    │       └── best_practices.md
    └── ...                      # More repositories
```
3\. Define the metadata file `metadata.yaml` that maps your repositories to their relevant best practices paths, for example:

```yaml
# Standalone repos
qodo-merge:
  best_practices_paths:
    - "qodo-merge"

# Group-associated repos
repo_b:
  best_practices_paths:
    - "groups/backend_repos"

# Multi-group repos
repo_c:
  best_practices_paths:
    - "groups/frontend_repos"
    - "groups/backend_repos"

# Monorepo with subprojects
qodo-monorepo:
  best_practices_paths:
    - "qodo-monorepo"
  monorepo_subprojects:
    qodo-github:
      best_practices_paths:
        - "qodo-monorepo/qodo-github"
    qodo-gitlab:
      best_practices_paths:
        - "qodo-monorepo/qodo-gitlab"
```

4\. Set the following configuration in your global configuration file:

```toml
[best_practices]
enable_global_best_practices = true
```
???- info "Best practices priority and fallback behavior"

    When global best practices are enabled, Qodo Merge follows this priority order:

    1\. **Primary**: Global hierarchical best practices from the `pr-agent-settings` repository:

    1.1 If the repository is mapped in `metadata.yaml`, it uses the specified paths

    1.2 For monorepos, it automatically collects best practices matching PR file paths

    1.3 If no mapping exists, it falls back to the global best practices

    2\. **Fallback**: Local repository `best_practices.md` file:

    2.1 Used when global best practices are not found or configured

    2.2 Acts as a safety net for repositories not yet configured in the global system

    2.3 Local best practices are completely ignored when global best practices are successfully loaded

???- info "Edge cases and behavior"

    - **Missing paths**: If specified paths in `metadata.yaml` don't exist in the file system, those paths are skipped
    - **Monorepo subproject matching**: For monorepos, Qodo Merge automatically matches PR file paths against subproject paths to apply relevant best practices
    - **Multiple group inheritance**: Repositories can inherit from multiple groups, and all applicable best practices are combined
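The mapping and fallback rules described above can be sketched in a few lines of Python. This is an illustrative model of the documented behavior only, not Qodo Merge's actual implementation; the `metadata` dictionary simply mirrors the `metadata.yaml` example, and the `"global"` entry stands in for the global `best_practices.md` fallback:

```python
def resolve_best_practices_paths(repo, pr_file_paths, metadata):
    """Return the best-practices paths that apply to one PR."""
    entry = metadata.get(repo)
    if entry is None:
        # No mapping in metadata.yaml -> fall back to the global rules (1.3)
        return ["global"]
    paths = list(entry.get("best_practices_paths", []))
    # For monorepos, collect subproject rules matching PR file paths (1.2)
    for subproject in entry.get("monorepo_subprojects", {}).values():
        for bp_path in subproject.get("best_practices_paths", []):
            subdir = bp_path.split("/", 1)[-1]
            if any(f.startswith(subdir + "/") for f in pr_file_paths):
                paths.append(bp_path)
    return paths


# Mirrors the metadata.yaml example above
metadata = {
    "repo_b": {"best_practices_paths": ["groups/backend_repos"]},
    "qodo-monorepo": {
        "best_practices_paths": ["qodo-monorepo"],
        "monorepo_subprojects": {
            "qodo-github": {"best_practices_paths": ["qodo-monorepo/qodo-github"]},
            "qodo-gitlab": {"best_practices_paths": ["qodo-monorepo/qodo-gitlab"]},
        },
    },
}
```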
[//]: # (#### Best practices for multiple languages)

[//]: # (For a git organization working with multiple programming languages, you can maintain a centralized global `best_practices.md` file containing language-specific guidelines.)

[//]: # (When reviewing pull requests, Qodo Merge automatically identifies the programming language and applies the relevant best practices from this file.)

[//]: # (To do this, structure your `best_practices.md` file using the following format:)

[//]: # (```)

[//]: # (# [Python])

[//]: # (...)

[//]: # (# [Java])

[//]: # (...)

[//]: # (# [JavaScript])

[//]: # (...)

[//]: # (```)
???- info "Dedicated label for best practices suggestions"

    Best practice suggestions are labeled as `Organization best practice` by default.
    To customize this label, modify it in your configuration file:

    ```toml
    [best_practices]
    organization_name = "..."
    ```

    And the label will be: `{organization_name} best practice`.

#### Example results
### Auto best practices
@@ -288,45 +420,6 @@ We advise users to apply critical analysis and judgment when implementing the pr

In addition to mistakes (which may happen, but are rare), sometimes the presented code modification may serve more as an _illustrative example_ than a directly applicable solution.
In such cases, we recommend prioritizing the suggestion's detailed description, using the diff snippet primarily as a supporting reference.
### Chat on code suggestions

> `💎 feature` Platforms supported: GitHub, GitLab

Qodo Merge implements an orchestrator agent that enables interactive code discussions, listening and responding to comments without requiring explicit tool calls.
The orchestrator intelligently analyzes your responses to determine whether you want to implement a suggestion, ask a question, or request help, then delegates to the appropriate specialized tool.

#### Setup and Activation

Enable interactive code discussions by adding the following to your configuration file (default is `true`):

```toml
[pr_code_suggestions]
enable_chat_in_code_suggestions = true
```

!!! info "Activating Dynamic Responses"

    To obtain dynamic responses, the following steps are required:

    1. Run the `/improve` command (mostly automatic)
    2. Tick the `/improve` recommendation checkboxes (_Apply this suggestion_) to have Qodo Merge generate a new inline code suggestion discussion
    3. The orchestrator agent will then automatically listen and reply to comments within the discussion without requiring additional commands

#### Explore the available interaction patterns

!!! tip "Tip: Direct the agent with keywords"

    Use "implement" or "apply" for code generation. Use "explain", "why", or "how" for information and help.

=== "Asking for Details"

=== "Implementing Suggestions"

=== "Providing Additional Help"
### Dual publishing mode

Our recommended approach for presenting code suggestions is through a [table](https://qodo-merge-docs.qodo.ai/tools/improve/#overview) (`--pr_code_suggestions.commitable_code_suggestions=false`).
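For example, keeping the recommended table presentation corresponds to the following configuration file entry (a minimal sketch of the flag named above; the section name is assumed from the CLI form of the flag):

```toml
[pr_code_suggestions]
commitable_code_suggestions = false
```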
@@ -435,7 +528,7 @@ To enable auto-approval based on specific criteria, first, you need to enable th

```toml
enable_auto_approval = true
```

There are several criteria that can be set for auto-approval:

- **Review effort score**

@@ -457,7 +550,19 @@ enable_auto_approval = true

```toml
auto_approve_for_no_suggestions = true
```

When no [code suggestions](https://www.qodo.ai/images/pr_agent/code_suggestions_as_comment_closed.png) were found for the PR, the PR will be auto-approved.

___

- **Ticket Compliance**

```toml
[config]
enable_auto_approval = true
ensure_ticket_compliance = true # Default is false
```

If `ensure_ticket_compliance` is set to `true`, auto-approval will be disabled if a ticket is linked to the PR and the ticket is not compliant (e.g., the `review` tool did not mark the PR as fully compliant with the ticket). This ensures that PRs are only auto-approved if their associated tickets are properly resolved.
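Taken together, the documented criteria amount to a gate like the following sketch. This is illustrative only; the parameter names are simplified stand-ins for the real configuration flags and review outputs:

```python
def should_auto_approve(
    enable_auto_approval,
    review_effort,
    max_review_effort,
    num_code_suggestions,
    auto_approve_for_no_suggestions,
    ticket_compliant,
    ensure_ticket_compliance,
):
    """Model of the documented auto-approval gates for a single PR."""
    if not enable_auto_approval:
        return False
    # Review effort score gate
    if review_effort > max_review_effort:
        return False
    # auto_approve_for_no_suggestions gate
    if auto_approve_for_no_suggestions and num_code_suggestions > 0:
        return False
    # ensure_ticket_compliance gate
    if ensure_ticket_compliance and not ticket_compliant:
        return False
    return True
```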
### How many code suggestions are generated?
@@ -481,7 +586,7 @@ Note: Chunking is primarily relevant for large PRs. For most PRs (up to 600 line

## Configuration options

???+ example "General options"

    <table>
    <tr>

@@ -541,7 +646,7 @@ Note: Chunking is primarily relevant for large PRs. For most PRs (up to 600 line

    </tr>
    </table>

???+ example "Params for number of suggestions and AI calls"

    <table>
    <tr>
@@ -8,18 +8,18 @@ Here is a list of Qodo Merge tools, each with a dedicated page that explains how

| **[PR Review (`/review`)](./review.md)** | Adjustable feedback about the PR, possible issues, security concerns, review effort and more |
| **[Code Suggestions (`/improve`)](./improve.md)** | Code suggestions for improving the PR |
| **[Question Answering (`/ask ...`)](./ask.md)** | Answering free-text questions about the PR, or on specific code lines |
| **[Help (`/help`)](./help.md)** | Provides a list of all the available tools. Also enables triggering them interactively (💎) |
| **[Help Docs (`/help_docs`)](./help_docs.md)** | Answer a free-text question based on a git documentation folder. |
| **[Update Changelog (`/update_changelog`)](./update_changelog.md)** | Automatically updating the CHANGELOG.md file with the PR changes |
| **💎 [Add Documentation (`/add_docs`)](./documentation.md)** | Generates documentation for methods/functions/classes that changed in the PR |
| **💎 [Analyze (`/analyze`)](./analyze.md)** | Identify code components that changed in the PR, and enables to interactively generate tests, docs, and code suggestions for each component |
| **💎 [CI Feedback (`/checks ci_job`)](./ci_feedback.md)** | Automatically generates feedback and analysis for a failed CI job |
| **💎 [Custom Prompt (`/custom_prompt`)](./custom_prompt.md)** | Automatically generates custom suggestions for improving the PR code, based on specific guidelines defined by the user |
| **💎 [Generate Custom Labels (`/generate_labels`)](./custom_labels.md)** | Generates custom labels for the PR, based on specific guidelines defined by the user |
| **💎 [Generate Tests (`/test`)](./test.md)** | Automatically generates unit tests for a selected component, based on the PR code changes |
| **💎 [Implement (`/implement`)](./implement.md)** | Generates implementation code from review suggestions |
| **💎 [Improve Component (`/improve_component component_name`)](./improve_component.md)** | Generates code suggestions for a specific code component that changed in the PR |
| **💎 [Scan Repo Discussions (`/scan_repo_discussions`)](./scan_repo_discussions.md)** | Generates `best_practices.md` file based on previous discussions in the repository |
| **💎 [Similar Code (`/similar_code`)](./similar_code.md)** | Retrieves the most similar code components from inside the organization's codebase, or from open-source code. |

Note that the tools marked with 💎 are available only for Qodo Merge users.
@@ -1,44 +0,0 @@

`Platforms supported: GitHub`

## Overview

The `repo_statistics` tool analyzes statistics from merged pull requests over the 12 months prior to Qodo Merge installation.
It calculates key metrics that help teams establish a baseline of their PR workflow efficiency.

!!! note "Active repositories are needed"

    The tool is designed to work with real-life repositories, as it relies on actual discussions to generate meaningful insights.
    At least 30 merged PRs are required to generate meaningful statistical data.

### Metrics Analyzed

- **Time to merge:** The median and average time it takes for PRs to be merged after opening
- **Time to first comment:** The median and average time it takes to get the first comment on a PR

### Usage

The tool can be invoked manually by commenting on any PR:

```
/repo_statistics
```

In response, the bot will comment with the statistical data.
Note that the scan can take several minutes to complete, since up to 100 PRs are scanned.

!!! info "Automatic trigger"

    Upon adding the Qodo Merge bot to a repository, the tool will automatically scan the last 365 days of PRs and send the results to MixPanel, if enabled.

## Example usage

MixPanel optional presentation:

### Configuration options

- Use `/repo_statistics --repo_statistics.days_back=X` to specify the number of days back to scan for discussions. The default is 365 days.
- Use `/repo_statistics --repo_statistics.minimal_number_of_prs=X` to specify the minimum number of merged PRs needed to generate the statistics. The default is 30 PRs.
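As a sketch of how the two metrics are defined, median and average time-to-merge can be computed directly from PR open/merge timestamps. This is illustrative only; the tool computes these statistics itself:

```python
from datetime import datetime
from statistics import mean, median


def merge_time_stats(prs):
    """Compute (median, average) time-to-merge in hours from
    (opened_at, merged_at) ISO-8601 timestamp pairs."""
    hours = [
        (datetime.fromisoformat(m) - datetime.fromisoformat(o)).total_seconds() / 3600
        for o, m in prs
    ]
    return median(hours), mean(hours)


prs = [
    ("2024-01-01T00:00:00", "2024-01-01T12:00:00"),  # 12 h
    ("2024-01-02T00:00:00", "2024-01-03T00:00:00"),  # 24 h
    ("2024-01-04T00:00:00", "2024-01-04T06:00:00"),  # 6 h
]
```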
@@ -51,7 +51,7 @@ extra_instructions = "..."

## Configuration options

???+ example "General options"

    <table>
    <tr>

@@ -70,9 +70,13 @@ extra_instructions = "..."

    <td><b>enable_help_text</b></td>
    <td>If set to true, the tool will display a help text in the comment. Default is true.</td>
    </tr>
    <tr>
    <td><b>num_max_findings</b></td>
    <td>The maximum number of findings to return. Default is 3.</td>
    </tr>
    </table>

???+ example "Enable/disable specific sub-sections"

    <table>
    <tr>

@@ -101,7 +105,7 @@ extra_instructions = "..."

    </tr>
    </table>

???+ example "Adding PR labels"

    You can enable/disable the `review` tool to add specific labels to the PR:
@@ -112,13 +116,15 @@ extra_instructions = "..."

    </tr>
    <tr>
    <td><b>enable_review_labels_effort</b></td>
    <td>If set to true, the tool will publish a 'Review effort x/5' label (1–5 scale). Default is true.</td>
    </tr>
    </table>
## Usage Tips

### General guidelines

!!! tip ""

    The `review` tool provides a collection of configurable feedbacks about a PR.
    It is recommended to review the [Configuration options](#configuration-options) section, and choose the relevant options for your use case.

@@ -128,7 +134,9 @@ extra_instructions = "..."

    On the other hand, if you find one of the enabled features to be irrelevant for your use case, disable it. No default configuration can fit all use cases.

### Automation

!!! tip ""

    When you first install the Qodo Merge app, the [default mode](../usage-guide/automations_and_usage.md#github-app-automatic-tools-when-a-new-pr-is-opened) for the `review` tool is:

    ```
    pr_commands = ["/review", ...]
    ```

@@ -136,16 +144,30 @@ extra_instructions = "..."

    Meaning the `review` tool will run automatically on every PR, without any additional configurations.
    Edit this field to enable/disable the tool, or to change the configurations used.
!!! tip "Possible labels from the review tool"
|
### Auto-generated PR labels by the Review Tool
|
||||||
|
|
||||||
The `review` tool can auto-generate two specific types of labels for a PR:
|
!!! tip ""
|
||||||
|
|
||||||
- a `possible security issue` label that detects if a possible [security issue](https://github.com/Codium-ai/pr-agent/blob/tr/user_description/pr_agent/settings/pr_reviewer_prompts.toml#L136) exists in the PR code (`enable_review_labels_security` flag)
|
The `review` can tool automatically add labels to your Pull Requests:
|
||||||
- a `Review effort [1-5]: x` label, where x is the estimated effort to review the PR (`enable_review_labels_effort` flag)
|
|
||||||
|
|
||||||
Both modes are useful, and we recommended to enable them.
|
- **`possible security issue`**: This label is applied if the tool detects a potential [security vulnerability](https://github.com/qodo-ai/pr-agent/blob/main/pr_agent/settings/pr_reviewer_prompts.toml#L103) in the PR's code. This feedback is controlled by the 'enable_review_labels_security' flag (default is true).
|
||||||
|
- **`review effort [x/5]`**: This label estimates the [effort](https://github.com/qodo-ai/pr-agent/blob/main/pr_agent/settings/pr_reviewer_prompts.toml#L90) required to review the PR on a relative scale of 1 to 5, where 'x' represents the assessed effort. This feedback is controlled by the 'enable_review_labels_effort' flag (default is true).
|
||||||
|
- **`ticket compliance`**: Adds a label indicating code compliance level ("Fully compliant" | "PR Code Verified" | "Partially compliant" | "Not compliant") to any GitHub/Jira/Linea ticket linked in the PR. Controlled by the 'require_ticket_labels' flag (default: false). If 'require_no_ticket_labels' is also enabled, PRs without ticket links will receive a "No ticket found" label.
|
||||||
|
|
### Blocking PRs from merging based on the generated labels

!!! tip ""

    You can configure a CI/CD Action to prevent merging PRs with specific labels. For example, implement a dedicated [GitHub Action](https://medium.com/sequra-tech/quick-tip-block-pull-request-merge-using-labels-6cc326936221).

    This approach helps ensure PRs with potential security issues or ticket compliance problems will not be merged without further review.

    Since AI may make mistakes or lack complete context, use this feature judiciously. For flexibility, users with appropriate permissions can remove generated labels when necessary. When a label is removed, this action will be automatically documented in the PR discussion, clearly indicating it was a deliberate override by an authorized user to allow the merge.

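The gating logic such a CI Action runs can be quite small. A minimal sketch (the label names mirror those the review tool generates, but the script itself is a hypothetical illustration, not part of Qodo Merge):

```python
# Minimal sketch of a merge-gate check. The label names mirror those the
# review tool can generate; the script itself is a hypothetical example.
BLOCKING_LABELS = {"possible security issue", "Not compliant"}

def has_blocking_labels(pr_labels, blocking=frozenset(BLOCKING_LABELS)):
    """Return True if any of the PR's labels should block the merge."""
    return any(label in blocking for label in pr_labels)

if __name__ == "__main__":
    labels = ["enhancement", "review effort [3/5]"]
    print("blocked" if has_blocking_labels(labels) else "ok")
```

In a real pipeline, the CI step would exit non-zero when a blocking label is found, which the branch-protection rule then treats as a failed check.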
### Extra instructions

!!! tip ""

    Extra instructions are important.
    The `review` tool can be configured with extra instructions, which can be used to guide the model to feedback tailored to the needs of your project.

    Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

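For example, a configuration of this shape can be used (the instruction texts below are illustrative placeholders):

```toml
[pr_reviewer]
# illustrative placeholder instructions; replace with your project's needs
extra_instructions = """\
- Emphasize security implications of changes to authentication or input handling.
- Flag any usage of deprecated internal APIs.
"""
```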
!!! tip "Code suggestions"

    The `review` tool previously included a legacy feature for providing code suggestions (controlled by `--pr_reviewer.num_code_suggestion`). This functionality has been deprecated and replaced by the [`improve`](./improve.md) tool, which offers higher quality and more actionable code suggestions.

And to ignore Python files in all PRs using a `regex` pattern, set in a configuration file:

```
[ignore]
regex = ['.*\.py$']
```

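As a quick sanity check (an illustration only, not part of Qodo Merge), the pattern above can be exercised directly:

```python
import re

# The same pattern as in the [ignore] configuration above.
pattern = re.compile(r'.*\.py$')

paths = ["src/app.py", "README.md", "tests/test_app.py", "setup.cfg"]
ignored = [p for p in paths if pattern.match(p)]
print(ignored)  # the two Python files match and would be ignored
```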
Here, `ignore_pr_authors` is a list of usernames that you want to ignore.

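For instance, assuming the `[config]` section as in the default configuration file (the second bot name below is an illustrative placeholder):

```toml
[config]
# the second entry is an illustrative placeholder
ignore_pr_authors = ["my-special-bot-user", "some-other-bot"]
```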
!!! note
    There is one specific case where bots will receive an automatic response - when they generated a PR with a _failed test_. In that case, the [`ci_feedback`](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/) tool will be invoked.

This is useful for debugging or experimenting with different tools.

3. **git provider**: The [git_provider](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L5) field in a configuration file determines the GIT provider that will be used by Qodo Merge. Currently, the following providers are supported:
`github` **(default)**, `gitlab`, `bitbucket`, `azure`, `codecommit`, `local`, `gitea`, and `gerrit`.

### CLI Health Check

### Gitea Webhook

After setting up a Gitea webhook, to control which commands will run automatically when a new MR is opened, you can set the `pr_commands` parameter in the configuration file, similar to the GitHub App:

```toml
[gitea]
pr_commands = [
    "/describe",
    "/review",
    "/improve",
]
```

The different tools and sub-tools used by Qodo Merge are adjustable via a Git configuration file.

There are three main ways to set persistent configurations:

1. [Wiki](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#wiki-configuration-file) configuration page 💎
2. [Local](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#local-configuration-file) configuration file
3. [Global](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#global-configuration-file) configuration file 💎

In terms of precedence, wiki configurations will override local configurations, and local configurations will override global configurations.

For a list of all possible configurations, see the [configuration options](https://github.com/qodo-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml/) page.
In addition to general configuration options, each tool has its own configurations. For example, the `review` tool will use parameters from the [pr_reviewer](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L16) section in the configuration file.

!!! tip "Tip1: Edit only what you need"
    Your configuration file should be minimal, and edit only the relevant values. Don't copy the entire configuration options, since it can lead to legacy problems when something changes.

!!! tip "Tip2: Show relevant configurations"
    If you set `config.output_relevant_configurations` to True, each tool will also output in a collapsible section its relevant configurations. This can be useful for debugging, or getting to know the configurations better.

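For example, this setting can be enabled in any configuration file:

```toml
[config]
output_relevant_configurations = true
```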
## Wiki configuration file 💎

1. Create a new project with both the name and key: PR_AGENT_SETTINGS.
2. Inside the PR_AGENT_SETTINGS project, create a repository named pr-agent-settings.
3. In this repository, add a `.pr_agent.toml` configuration file—structured similarly to the global configuration file described above.
4. Optionally, you can add organizational-level [global best practices](https://qodo-merge-docs.qodo.ai/tools/improve/#global-hierarchical-best-practices).

Repositories across your entire Bitbucket organization will inherit the configuration from this file.

- [GitHub App](./automations_and_usage.md#github-app)
- [GitHub Action](./automations_and_usage.md#github-action)
- [GitLab Webhook](./automations_and_usage.md#gitlab-webhook)
- [Gitea Webhook](./automations_and_usage.md#gitea-webhook)
- [BitBucket App](./automations_and_usage.md#bitbucket-app)
- [Azure DevOps Provider](./automations_and_usage.md#azure-devops-provider)
- [Managing Mail Notifications](./mail_notifications.md)
- [Patch Extra Lines](./additional_configurations.md#patch-extra-lines)
- [FAQ](https://qodo-merge-docs.qodo.ai/faq/)
- [Qodo Merge Models](./qodo_merge_models)
- [Qodo Merge Endpoints](./qm_endpoints)

# Overview

By default, Qodo Merge processes webhooks that respond to events or comments (for example, a PR is opened), posting its responses directly on the PR page.

Qodo Merge now features two CLI endpoints that let you invoke its tools and receive responses directly (both as formatted markdown and as raw JSON), rather than having them posted to the PR page:

- **Pull Request Endpoint** - Accepts a GitHub PR URL, along with the desired tool to invoke (**note**: only available on-premises, or single tenant).
- **Diff Endpoint** - Git-agnostic option that accepts a comparison of two states, either as a list of “before” and “after” files’ contents, or as a unified diff file, along with the desired tool to invoke.

# Setup

## Enabling desired endpoints (for on-prem deployment)

:bulb: Add the following to your helm chart/secrets file:

Pull Request Endpoint:

```toml
[qm_pull_request_endpoint]
enabled = true
```

Diff Endpoint:

```toml
[qm_diff_endpoint]
enabled = true
```

**Important:** These endpoints can only be enabled through the pod's main secret file, **not** through standard configuration files.

## Access Key

The endpoints require the user to provide an access key in each invocation. Choose one of the following options to retrieve such a key.

### Option 1: Endpoint Key (On Premise / Single Tenant only)

Define an endpoint key in the helm chart of your pod configuration:

```toml
[qm_pull_request_endpoint]
enabled = true
endpoint_key = "your-secure-key-here"
```

```toml
[qm_diff_endpoint]
enabled = true
endpoint_key = "your-secure-key-here"
```

### Option 2: API Key for Cloud users (Diff Endpoint only)

Generate a long-lived API key by authenticating the user. We offer two different methods to achieve this:

### - Shell script

Download and run the following script: [gen_api_key.sh](https://github.com/qodo-ai/pr-agent/blob/5dfd696c2b1f43e1d620fe17b9dc10c25c2304f9/pr_agent/scripts/qm_endpoint_auth/gen_api_key.sh)

### - npx

1. Install node
2. Run: `npx @qodo/gen login`

Regardless of which method is used, follow the instructions in the opened browser page. Once logged in successfully via the website, the script will return the generated API key:

```
✅ Authentication successful! API key saved.
📋 Your API key: ...
```

**Note:** Each login generates a new API key, making any previous ones **obsolete**.

# Available Tools

Both endpoints support the following Qodo Merge tools:

[**Improve**](https://qodo-merge-docs.qodo.ai/tools/improve/) | [**Review**](https://qodo-merge-docs.qodo.ai/tools/review/) | [**Describe**](https://qodo-merge-docs.qodo.ai/tools/describe/) | [**Ask**](https://qodo-merge-docs.qodo.ai/tools/ask/) | [**Add Docs**](https://qodo-merge-docs.qodo.ai/tools/documentation/) | [**Analyze**](https://qodo-merge-docs.qodo.ai/tools/analyze/) | [**Config**](https://qodo-merge-docs.qodo.ai/tools/config/) | [**Generate Labels**](https://qodo-merge-docs.qodo.ai/tools/custom_labels/) | [**Improve Component**](https://qodo-merge-docs.qodo.ai/tools/improve_component/) | [**Test**](https://qodo-merge-docs.qodo.ai/tools/test/) | [**Custom Prompt**](https://qodo-merge-docs.qodo.ai/tools/custom_prompt/)

# How to Run

For all endpoints, the access key must be supplied as the value of the `X-API-Key` request header.

## Pull Request Endpoint

**URL:** `/api/v1/qm_pull_request`

### Request Format

```json
{
  "pr_url": "https://github.com/owner/repo/pull/123",
  "command": "<COMMAND> ARG_1 ARG_2 ..."
}
```

### Usage Examples

### cURL

```bash
curl -X POST "<your-server>/api/v1/qm_pull_request" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: <your-key>" \
  -d '{
    "pr_url": "https://github.com/owner/repo/pull/123",
    "command": "improve"
  }'
```

### Python

```python
import requests
import json

def call_qm_pull_request(pr_url: str, command: str, endpoint_key: str):
    url = "<your-server>/api/v1/qm_pull_request"

    payload = {
        "pr_url": pr_url,
        "command": command
    }

    response = requests.post(
        url=url,
        headers={"Content-Type": "application/json", "X-API-Key": endpoint_key},
        data=json.dumps(payload)
    )

    if response.status_code == 200:
        result = response.json()
        response_str = result.get("response_str")  # Formatted response
        raw_data = result.get("raw_data")  # Metadata and suggestions
        return response_str, raw_data
    else:
        print(f"Error: {response.status_code} - {response.text}")
        return None, None
```

## Diff Endpoint

**URL:** `/api/v1/qm_diff`

### Request Format

With before and after files’ contents:

```json
{
  "command": "<COMMAND> ARG_1 ARG_2 ...",
  "diff_files": {
    "<FILE_PATH>": ["<BEFORE_CONTENT>", "<AFTER_CONTENT>"],
    "...": ["...", "..."]
  }
}
```

Alternatively, with a unified diff:

```json
{
  "command": "<COMMAND> ARG_1 ARG_2 ...",
  "diff": "<UNIFIED_DIFF_CONTENT>"
}
```

### Example Payloads

**Using before and after per file (recommended):**

```json
{
  "command": "improve_component hello",
  "diff_files": {
    "src/main.py": [
      "def hello():\n    print('Hello')",
      "def hello():\n    print('Hello World')\n    return 'success'"
    ]
  }
}
```

**Using a unified diff:**

```json
{
  "command": "improve",
  "diff": "diff --git a/src/main.py b/src/main.py\nindex 123..456 100644\n--- a/src/main.py\n+++ b/src/main.py\n@@ -1,2 +1,3 @@\n def hello():\n-    print('Hello')\n+    print('Hello World')\n+    return 'success'"
}
```

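The `diff_files` form of the payload can also be assembled programmatically; a minimal sketch (the helper name and file contents are illustrative):

```python
import json

def build_diff_files_payload(command, files):
    """files maps a file path to a (before_content, after_content) pair."""
    return {
        "command": command,
        "diff_files": {path: [before, after]
                       for path, (before, after) in files.items()},
    }

payload = build_diff_files_payload(
    "improve",
    {"src/main.py": ("def hello():\n    print('Hello')",
                     "def hello():\n    print('Hello World')")},
)
print(json.dumps(payload)[:40])
```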
### Usage Examples

### cURL

```bash
curl -X POST "<your-server>/api/v1/qm_diff" \
  -H "X-API-Key: <YOUR_KEY>" \
  -H "Content-Type: application/json" \
  -d @your_request.json
```

### Python

```python
import requests
import json

def call_qm_diff(api_key: str, payload: dict):
    url = "<your-server>/api/v1/qm_diff"

    response = requests.post(
        url=url,
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        data=json.dumps(payload)
    )

    if response.status_code == 200:
        result = response.json()
        markdown_result = result.get("response_str")  # Formatted markdown
        raw_data = result.get("raw_data")  # Metadata and suggestions
        return markdown_result, raw_data
    else:
        print(f"Error: {response.status_code} - {response.text}")
        return None, None
```

# Response Format

Both endpoints return an identical JSON structure:

```json
{
  "response_str": "## PR Code Suggestions ✨\n\n<table>...",
  "raw_data": {
    <FIELD>: <VALUE>
  }
}
```

- **`response_str`** - Formatted markdown for display
- **`raw_data`** - Structured data with detailed suggestions and metadata, if applicable

# Complete Workflow Examples

### Pull Request Endpoint

Given the following “/improve” request:

```json
{
  "command": "improve",
  "pr_url": "https://github.com/qodo-ai/pr-agent/pull/1831"
}
```

the following response is received:

````json
{"response_str":"## PR Code Suggestions ✨\n\n<table><thead><tr><td><strong>Category
</strong></td><td align=left><strong>Suggestion
</strong></td><td align=center>
<strong>Impact</strong></td></tr><tbody><tr><td rowspan=1>Learned<br>best practice</td>
\n<td>\n\n\n\n<details><summary>Improve documentation clarity</summary>\n\n___\n
\n\n**The documentation parameter description contains a grammatical issue.
The <br>sentence \"This field remains empty if not applicable\" is unclear in context
and <br>should be clarified to better explain what happens when the feature is not
<br>applicable.**\n\n[docs/docs/tools/describe.md [128-129]]
(https://github.com/qodo-ai/pr-agent/pull/1831/files#diff-960aad71fec9617804a02c904da37db217b6ba8a48fec3ac8bda286511d534ebR128-R129)
\n\n```diff\n <td><b>enable_pr_diagram</b></td>\n-<td>If set to true, the tool
will generate a horizontal Mermaid flowchart summarizing the main pull request
changes. This field remains empty if not applicable. Default is false.</td>\n
+<td>If set to true, the tool will generate a horizontal Mermaid flowchart
summarizing the main pull request changes. No diagram will be generated if
changes cannot be effectively visualized. Default is false.</td>\n```\n\n
- [ ] **Apply / Chat** <!-- /improve --apply_suggestion=0 -->\n\n<details>
<summary>Suggestion importance[1-10]: 6</summary>\n\n__\n\nWhy: \nRelevant
best practice - Fix grammatical errors and typos in user-facing documentation
to maintain professionalism and clarity.\n\n</details></details></td><td
align=center>Low\n\n</td></tr>\n<tr><td align=\"center\" colspan=\"2\">\n\n
- [ ] More <!-- /improve --more_suggestions=true -->\n\n</td><td></td></tr>
</tbody></table>","raw_data":{"code_suggestions":[{"relevant_file":
"docs/docs/tools/describe.md\n","language":"markdown\n","relevant_best_practice":
"Fix grammatical errors and typos in user-facing documentation to maintain
professionalism and clarity.\n","existing_code":"<td><b>enable_pr_diagram</b>
</td>\n<td>If set to true, the tool will generate a horizontal Mermaid flowchart
summarizing the main pull request changes. This field remains empty if not applicable.
Default is false.</td>\n","suggestion_content":"The documentation parameter description
contains a grammatical issue. The sentence \"This field remains empty if not applicable\"
is unclear in context and should be clarified to better explain what happens when the
feature is not applicable.\n","improved_code":"<td><b>enable_pr_diagram</b></td>
\n<td>If set to true, the tool will generate a horizontal Mermaid flowchart summarizing
the main pull request changes. No diagram will be generated if changes cannot be effectively
visualized. Default is false.</td>\n","one_sentence_summary":"Improve documentation clarity\n",
"score":6,"score_why":"\nRelevant best practice - Fix grammatical errors and typos in
user-facing documentation to maintain professionalism and clarity.","label":"Learned best practice",
"relevant_lines_start":128,"relevant_lines_end":129,"enable_apply":true}]}}
````

If authentication fails because the endpoint was not enabled in the helm chart, the server responds with:

```
HTTP/1.1 400 Bad Request
date: Tue, 03 Jun 2025 09:40:21 GMT
server: uvicorn
content-length: 3486
content-type: application/json

{"detail":{"error":"QM Pull Request endpoint is not enabled"}}
```

### Diff Endpoint

Given the following “/improve” request’s payload:

[improve_example_short.json](https://codium.ai/images/pr_agent/improve_example_short.json)

the following response is received:

```json
{"response_str":"## PR Code Suggestions ✨\n\n<table><thead><tr><td><strong>Category</strong></td><td align=left><strong>Suggestion
</strong></td><td align=center><strong>Impact</strong></td></tr><tbody><tr><td rowspan=1>Possible issue</td>\n<td>\n\n\n\n<details>
<summary>Fix invalid repository URL</summary>\n\n___\n\n\n**The <code>base_branch</code> is set to <code>None</code> but then used
in the <code>repo_url</code> string <br>interpolation, which will cause a runtime error. Also, the repository URL format <br>is incorrect
as it includes the branch in the middle of the organization/repo <br>path.**\n\n[tests/e2e_tests/test_github_app.py [1]]
(file://tests/e2e_tests/test_github_app.py#L1-1)\n\ndiff\\n-base_branch = None\\n+base_branch = \\"main\\" # or any base branch you want\\n
new_branch = f\\"github_app_e2e_test-{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}-where-am-I\\"\\n-repo_url =
f'Codium-ai/{base_branch}/pr-agent-tests'\\n+repo_url = 'Codium-ai/pr-agent-tests'\\n\n<details><summary>Suggestion importance[1-10]: 9</summary>
\n\n__\n\nWhy: The suggestion correctly identifies a critical runtime bug where base_branch = None is used in string interpolation,
which would produce an invalid repository URL Codium-ai/None/pr-agent-tests. This would cause the test to fail at runtime.\n\n\n</details></details>
</td><td align=center>High\n\n</td></tr></tbody></table>",

"raw_data":{"code_suggestions":[{"relevant_file":"tests/e2e_tests/test_github_app.py\n",
"language":"python\n","existing_code":"base_branch = None\nnew_branch = f\"github_app_e2e_test-{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}
-where-am-I\"\nrepo_url = f'Codium-ai/{base_branch}/pr-agent-tests'\n","suggestion_content":"The base_branch is set to None but then used in the
repo_url string interpolation, which will cause a runtime error. Also, the repository URL format is incorrect as it includes the branch in the middle
of the organization/repo path.\n","improved_code":"base_branch = \"main\" # or any base branch you want\nnew_branch = f\"github_app_e2e_test-
{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}-where-am-I\"\nrepo_url = 'Codium-ai/pr-agent-tests'\n","one_sentence_summary":"Fix invalid repository
URL\n","label":"possible issue","score":9,"score_why":"The suggestion correctly identifies a critical runtime bug where base_branch = None is used in
string interpolation, which would produce an invalid repository URL Codium-ai/None/pr-agent-tests. This would cause the test to fail at runtime.\n",
"relevant_lines_start":1,"relevant_lines_end":1,"enable_apply":false}]}}
```

If authentication fails:

```
HTTP/1.1 400 Bad Request
date: Tue, 03 Jun 2025 08:45:36 GMT
server: uvicorn
content-length: 43
content-type: application/json

{"detail":{"error":"Invalid API key"}}
```

# Appendix: Endpoints Comparison Table

| **Feature** | **Pull Request Endpoint** | **Diff Endpoint** |
| --- | --- | --- |
| **Input** | GitHub PR URL | File diffs / Unified diff |
| **Git Provider** | GitHub only | N/A |
| **Deployment** | On-premise/Single Tenant | All deployments |
| **Authentication** | Endpoint key only | Endpoint key or API key |

  - Additional Configurations: 'usage-guide/additional_configurations.md'
  - Frequently Asked Questions: 'faq/index.md'
  - 💎 Qodo Merge Models: 'usage-guide/qodo_merge_models.md'
  - 💎 Qodo Merge Endpoints: 'usage-guide/qm_endpoints.md'
- Tools:
  - 'tools/index.md'
  - Describe: 'tools/describe.md'
  - Review: 'tools/review.md'
  - Improve: 'tools/improve.md'
  - Ask: 'tools/ask.md'
  - Help: 'tools/help.md'
  - Help Docs: 'tools/help_docs.md'
  - Update Changelog: 'tools/update_changelog.md'
  - 💎 Add Documentation: 'tools/documentation.md'
  - 💎 Analyze: 'tools/analyze.md'
  - 💎 CI Feedback: 'tools/ci_feedback.md'
  - 💎 Custom Prompt: 'tools/custom_prompt.md'
  - 💎 Generate Labels: 'tools/custom_labels.md'
  - 💎 Generate Tests: 'tools/test.md'
  - 💎 Implement: 'tools/implement.md'
  - 💎 Improve Components: 'tools/improve_component.md'
  - 💎 Scan Repo Discussions: 'tools/scan_repo_discussions.md'
  - 💎 Similar Code: 'tools/similar_code.md'
  - 💎 Repo Statistics: 'tools/repo_statistics.md'
- Core Abilities:
  - 'core-abilities/index.md'
  - Auto best practices: 'core-abilities/auto_best_practices.md'
  - Chat on code suggestions: 'core-abilities/chat_on_code_suggestions.md'
  - Code validation: 'core-abilities/code_validation.md'
  - Compression strategy: 'core-abilities/compression_strategy.md'
  - Dynamic context: 'core-abilities/dynamic_context.md'
  - Fetching ticket context: 'core-abilities/fetching_ticket_context.md'
  - Impact evaluation: 'core-abilities/impact_evaluation.md'
  - Incremental Update: 'core-abilities/incremental_update.md'
  - Interactivity: 'core-abilities/interactivity.md'
  - Local and global metadata: 'core-abilities/metadata.md'
  - RAG context enrichment: 'core-abilities/rag_context_enrichment.md'

@@ -53,43 +53,59 @@ MAX_TOKENS = {
     'vertex_ai/claude-3-5-haiku@20241022': 100000,
     'vertex_ai/claude-3-sonnet@20240229': 100000,
     'vertex_ai/claude-3-opus@20240229': 100000,
+    'vertex_ai/claude-opus-4@20250514': 200000,
     'vertex_ai/claude-3-5-sonnet@20240620': 100000,
     'vertex_ai/claude-3-5-sonnet-v2@20241022': 100000,
     'vertex_ai/claude-3-7-sonnet@20250219': 200000,
+    'vertex_ai/claude-sonnet-4@20250514': 200000,
     'vertex_ai/gemini-1.5-pro': 1048576,
     'vertex_ai/gemini-2.5-pro-preview-03-25': 1048576,
     'vertex_ai/gemini-2.5-pro-preview-05-06': 1048576,
+    'vertex_ai/gemini-2.5-pro-preview-06-05': 1048576,
     'vertex_ai/gemini-1.5-flash': 1048576,
     'vertex_ai/gemini-2.0-flash': 1048576,
     'vertex_ai/gemini-2.5-flash-preview-04-17': 1048576,
+    'vertex_ai/gemini-2.5-flash-preview-05-20': 1048576,
     'vertex_ai/gemma2': 8200,
     'gemini/gemini-1.5-pro': 1048576,
     'gemini/gemini-1.5-flash': 1048576,
     'gemini/gemini-2.0-flash': 1048576,
+    'gemini/gemini-2.5-flash-preview-04-17': 1048576,
+    'gemini/gemini-2.5-flash-preview-05-20': 1048576,
     'gemini/gemini-2.5-pro-preview-03-25': 1048576,
     'gemini/gemini-2.5-pro-preview-05-06': 1048576,
+    'gemini/gemini-2.5-pro-preview-06-05': 1048576,
     'codechat-bison': 6144,
     'codechat-bison-32k': 32000,
     'anthropic.claude-instant-v1': 100000,
     'anthropic.claude-v1': 100000,
     'anthropic.claude-v2': 100000,
     'anthropic/claude-3-opus-20240229': 100000,
+    'anthropic/claude-opus-4-20250514': 200000,
     'anthropic/claude-3-5-sonnet-20240620': 100000,
     'anthropic/claude-3-5-sonnet-20241022': 100000,
     'anthropic/claude-3-7-sonnet-20250219': 200000,
+    'anthropic/claude-sonnet-4-20250514': 200000,
     'claude-3-7-sonnet-20250219': 200000,
     'anthropic/claude-3-5-haiku-20241022': 100000,
     'bedrock/anthropic.claude-instant-v1': 100000,
     'bedrock/anthropic.claude-v2': 100000,
     'bedrock/anthropic.claude-v2:1': 100000,
     'bedrock/anthropic.claude-3-sonnet-20240229-v1:0': 100000,
+    'bedrock/anthropic.claude-opus-4-20250514-v1:0': 200000,
     'bedrock/anthropic.claude-3-haiku-20240307-v1:0': 100000,
     'bedrock/anthropic.claude-3-5-haiku-20241022-v1:0': 100000,
     'bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0': 100000,
     'bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0': 100000,
     'bedrock/anthropic.claude-3-7-sonnet-20250219-v1:0': 200000,
+    'bedrock/anthropic.claude-sonnet-4-20250514-v1:0': 200000,
+    "bedrock/us.anthropic.claude-opus-4-20250514-v1:0": 200000,
     "bedrock/us.anthropic.claude-3-5-sonnet-20241022-v2:0": 100000,
     "bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0": 200000,
+    "bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
+    "bedrock/apac.anthropic.claude-3-5-sonnet-20241022-v2:0": 100000,
+    "bedrock/apac.anthropic.claude-3-7-sonnet-20250219-v1:0": 200000,
+    "bedrock/apac.anthropic.claude-sonnet-4-20250514-v1:0": 200000,
     'claude-3-5-sonnet': 100000,
     'groq/meta-llama/llama-4-scout-17b-16e-instruct': 131072,
     'groq/meta-llama/llama-4-maverick-17b-128e-instruct': 131072,
@@ -102,9 +118,13 @@ MAX_TOKENS = {
     'xai/grok-2': 131072,
     'xai/grok-2-1212': 131072,
     'xai/grok-2-latest': 131072,
+    'xai/grok-3': 131072,
     'xai/grok-3-beta': 131072,
+    'xai/grok-3-fast': 131072,
     'xai/grok-3-fast-beta': 131072,
+    'xai/grok-3-mini': 131072,
     'xai/grok-3-mini-beta': 131072,
+    'xai/grok-3-mini-fast': 131072,
     'xai/grok-3-mini-fast-beta': 131072,
     'ollama/llama3': 4096,
     'watsonx/meta-llama/llama-3-8b-instruct': 4096,
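The MAX_TOKENS entries above pair each provider-prefixed model name with its context window. A minimal sketch of how such a registry is typically consulted follows; the `lookup_max_tokens` helper and the fallback value are illustrative assumptions, not pr-agent's actual API.

```python
# Hypothetical sketch of consulting a token-limit registry like MAX_TOKENS.
# A few entries copied from the table above; the fallback default is assumed.
MAX_TOKENS = {
    'vertex_ai/claude-sonnet-4@20250514': 200000,
    'gemini/gemini-2.5-pro-preview-06-05': 1048576,
    'xai/grok-3': 131072,
}

DEFAULT_LIMIT = 8192  # assumed conservative fallback for unknown models

def lookup_max_tokens(model: str) -> int:
    # Exact match first; otherwise fall back to a safe default.
    return MAX_TOKENS.get(model, DEFAULT_LIMIT)
```

Keeping the full provider prefix (`vertex_ai/`, `gemini/`, `bedrock/us.`) in the key lets the same base model carry different limits per serving platform.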
@@ -1,13 +1,17 @@
+_LANGCHAIN_INSTALLED = False
+
 try:
     from langchain_core.messages import HumanMessage, SystemMessage
     from langchain_openai import AzureChatOpenAI, ChatOpenAI
+    _LANGCHAIN_INSTALLED = True
 except: # we don't enforce langchain as a dependency, so if it's not installed, just move on
     pass

 import functools

-from openai import APIError, RateLimitError, Timeout
-from retry import retry
+import openai
+from tenacity import retry, retry_if_exception_type, retry_if_not_exception_type, stop_after_attempt
+from langchain_core.runnables import Runnable

 from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
 from pr_agent.config_loader import get_settings
@@ -18,17 +22,14 @@ OPENAI_RETRIES = 5

 class LangChainOpenAIHandler(BaseAiHandler):
     def __init__(self):
-        # Initialize OpenAIHandler specific attributes here
+        if not _LANGCHAIN_INSTALLED:
+            error_msg = "LangChain is not installed. Please install it with `pip install langchain`."
+            get_logger().error(error_msg)
+            raise ImportError(error_msg)
+
         super().__init__()
         self.azure = get_settings().get("OPENAI.API_TYPE", "").lower() == "azure"

-        # Create a default unused chat object to trigger early validation
-        self._create_chat(self.deployment_id)
-
-    def chat(self, messages: list, model: str, temperature: float):
-        chat = self._create_chat(self.deployment_id)
-        return chat.invoke(input=messages, model=model, temperature=temperature)
-
     @property
     def deployment_id(self):
         """
@@ -36,26 +37,10 @@ class LangChainOpenAIHandler(BaseAiHandler):
         """
         return get_settings().get("OPENAI.DEPLOYMENT_ID", None)

-    @retry(exceptions=(APIError, Timeout, AttributeError, RateLimitError),
-           tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
-    async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
-        try:
-            messages = [SystemMessage(content=system), HumanMessage(content=user)]
-
-            # get a chat completion from the formatted messages
-            resp = self.chat(messages, model=model, temperature=temperature)
-            finish_reason = "completed"
-            return resp.content, finish_reason
-
-        except (Exception) as e:
-            get_logger().error("Unknown error during OpenAI inference: ", e)
-            raise e
-
-    def _create_chat(self, deployment_id=None):
+    async def _create_chat_async(self, deployment_id=None):
         try:
             if self.azure:
-                # using a partial function so we can set the deployment_id later to support fallback_deployments
-                # but still need to access the other settings now so we can raise a proper exception if they're missing
+                # Using Azure OpenAI service
                 return AzureChatOpenAI(
                     openai_api_key=get_settings().openai.key,
                     openai_api_version=get_settings().openai.api_version,
@@ -63,14 +48,64 @@ class LangChainOpenAIHandler(BaseAiHandler):
                     azure_endpoint=get_settings().openai.api_base,
                 )
             else:
-                # for llms that compatible with openai, should use custom api base
+                # Using standard OpenAI or other LLM services
                 openai_api_base = get_settings().get("OPENAI.API_BASE", None)
                 if openai_api_base is None or len(openai_api_base) == 0:
                     return ChatOpenAI(openai_api_key=get_settings().openai.key)
                 else:
-                    return ChatOpenAI(openai_api_key=get_settings().openai.key, openai_api_base=openai_api_base)
+                    return ChatOpenAI(
+                        openai_api_key=get_settings().openai.key,
+                        openai_api_base=openai_api_base
+                    )
         except AttributeError as e:
-            if getattr(e, "name"):
-                raise ValueError(f"OpenAI {e.name} is required") from e
-            else:
-                raise e
+            # Handle configuration errors
+            error_msg = f"OpenAI {e.name} is required" if getattr(e, "name") else str(e)
+            get_logger().error(error_msg)
+            raise ValueError(error_msg) from e
+
+    @retry(
+        retry=retry_if_exception_type(openai.APIError) & retry_if_not_exception_type(openai.RateLimitError),
+        stop=stop_after_attempt(OPENAI_RETRIES),
+    )
+    async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2, img_path: str = None):
+        if img_path:
+            get_logger().warning(f"Image path is not supported for LangChainOpenAIHandler. Ignoring image path: {img_path}")
+        try:
+            messages = [SystemMessage(content=system), HumanMessage(content=user)]
+            llm = await self._create_chat_async(deployment_id=self.deployment_id)
+
+            if not isinstance(llm, Runnable):
+                error_message = (
+                    f"The Langchain LLM object ({type(llm)}) does not implement the Runnable interface. "
+                    f"Please update your Langchain library to the latest version or "
+                    f"check your LLM configuration to support async calls. "
+                    f"PR-Agent is designed to utilize Langchain's async capabilities."
+                )
+                get_logger().error(error_message)
+                raise NotImplementedError(error_message)
+
+            # Handle parameters based on LLM type
+            if isinstance(llm, (ChatOpenAI, AzureChatOpenAI)):
+                # OpenAI models support all parameters
+                resp = await llm.ainvoke(
+                    input=messages,
+                    model=model,
+                    temperature=temperature
+                )
+            else:
+                # Other LLMs (like Gemini) only support input parameter
+                get_logger().info(f"Using simplified ainvoke for {type(llm)}")
+                resp = await llm.ainvoke(input=messages)
+
+            finish_reason = "completed"
+            return resp.content, finish_reason
+
+        except openai.RateLimitError as e:
+            get_logger().error(f"Rate limit error during LLM inference: {e}")
+            raise
+        except openai.APIError as e:
+            get_logger().warning(f"Error during LLM inference: {e}")
+            raise
+        except Exception as e:
+            get_logger().warning(f"Unknown error during LLM inference: {e}")
+            raise openai.APIError from e
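The hunks above replace the `retry` package with tenacity and combine `retry_if_exception_type(openai.APIError) & retry_if_not_exception_type(openai.RateLimitError)`: retry transient API errors but never rate limits. The predicate logic can be sketched in plain Python; stand-in exception classes are used below so the example needs no openai install (in openai's real hierarchy `RateLimitError` subclasses `APIError`, which is why the "not RateLimitError" clause is needed at all).

```python
# Stand-in exception hierarchy mirroring openai's (RateLimitError IS an APIError).
class APIError(Exception): ...
class RateLimitError(APIError): ...

def should_retry(exc: BaseException) -> bool:
    # Equivalent of the combined tenacity predicate:
    # retry on APIError, but never on RateLimitError.
    return isinstance(exc, APIError) and not isinstance(exc, RateLimitError)

def call_with_retries(fn, attempts: int = 5):
    # Hand-rolled loop standing in for tenacity's stop_after_attempt(attempts).
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:
            if attempt == attempts or not should_retry(exc):
                raise
```

Tenacity expresses the same policy declaratively; composing predicates with `&` avoids listing every retryable subclass by hand, as the old tuple-based decorator did.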
@@ -3,7 +3,7 @@ import litellm
 import openai
 import requests
 from litellm import acompletion
-from tenacity import retry, retry_if_exception_type, stop_after_attempt
+from tenacity import retry, retry_if_exception_type, retry_if_not_exception_type, stop_after_attempt

 from pr_agent.algo import CLAUDE_EXTENDED_THINKING_MODELS, NO_SUPPORT_TEMPERATURE_MODELS, SUPPORT_REASONING_EFFORT_MODELS, USER_MESSAGE_ONLY_MODELS
 from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
@@ -274,8 +274,8 @@ class LiteLLMAIHandler(BaseAiHandler):
         return get_settings().get("OPENAI.DEPLOYMENT_ID", None)

     @retry(
-        retry=retry_if_exception_type((openai.APIError, openai.APIConnectionError, openai.APITimeoutError)), # No retry on RateLimitError
-        stop=stop_after_attempt(OPENAI_RETRIES)
+        retry=retry_if_exception_type(openai.APIError) & retry_if_not_exception_type(openai.RateLimitError),
+        stop=stop_after_attempt(OPENAI_RETRIES),
     )
     async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2, img_path: str = None):
         try:
@@ -371,13 +371,13 @@ class LiteLLMAIHandler(BaseAiHandler):
                 get_logger().info(f"\nUser prompt:\n{user}")

             response = await acompletion(**kwargs)
-        except (openai.APIError, openai.APITimeoutError) as e:
-            get_logger().warning(f"Error during LLM inference: {e}")
-            raise
-        except (openai.RateLimitError) as e:
+        except openai.RateLimitError as e:
             get_logger().error(f"Rate limit error during LLM inference: {e}")
             raise
-        except (Exception) as e:
+        except openai.APIError as e:
+            get_logger().warning(f"Error during LLM inference: {e}")
+            raise
+        except Exception as e:
             get_logger().warning(f"Unknown error during LLM inference: {e}")
             raise openai.APIError from e
         if response is None or len(response["choices"]) == 0:
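The litellm hunk above reorders the except clauses so `RateLimitError` is caught before `APIError`. Because `RateLimitError` subclasses `APIError` in openai's hierarchy, the more specific clause must come first or it would never be reached. A small stand-in sketch (no real openai import; class names only mirror the library's):

```python
# Stand-in classes mirroring openai's hierarchy for illustration.
class APIError(Exception): ...
class RateLimitError(APIError): ...

def classify(exc: Exception) -> str:
    # Except clauses are tried top to bottom; most specific first.
    try:
        raise exc
    except RateLimitError:
        return "rate_limit"
    except APIError:
        return "api_error"
    except Exception:
        return "unknown"
```

If the `APIError` clause came first, a `RateLimitError` would match it and the rate-limit branch would be dead code, which is exactly the ordering bug the clause order above avoids.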
@@ -1,8 +1,8 @@
 from os import environ
 from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
 import openai
-from openai import APIError, AsyncOpenAI, RateLimitError, Timeout
-from retry import retry
+from openai import AsyncOpenAI
+from tenacity import retry, retry_if_exception_type, retry_if_not_exception_type, stop_after_attempt

 from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
 from pr_agent.config_loader import get_settings
@@ -38,10 +38,14 @@ class OpenAIHandler(BaseAiHandler):
         """
         return get_settings().get("OPENAI.DEPLOYMENT_ID", None)

-    @retry(exceptions=(APIError, Timeout, AttributeError, RateLimitError),
-           tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
-    async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
+    @retry(
+        retry=retry_if_exception_type(openai.APIError) & retry_if_not_exception_type(openai.RateLimitError),
+        stop=stop_after_attempt(OPENAI_RETRIES),
+    )
+    async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2, img_path: str = None):
         try:
+            if img_path:
+                get_logger().warning(f"Image path is not supported for OpenAIHandler. Ignoring image path: {img_path}")
             get_logger().info("System: ", system)
             get_logger().info("User: ", user)
             messages = [{"role": "system", "content": system}, {"role": "user", "content": user}]
@@ -57,12 +61,12 @@ class OpenAIHandler(BaseAiHandler):
             get_logger().info("AI response", response=resp, messages=messages, finish_reason=finish_reason,
                               model=model, usage=usage)
             return resp, finish_reason
-        except (APIError, Timeout) as e:
-            get_logger().error("Error during OpenAI inference: ", e)
-            raise
-        except (RateLimitError) as e:
-            get_logger().error("Rate limit error during OpenAI inference: ", e)
+        except openai.RateLimitError as e:
+            get_logger().error(f"Rate limit error during LLM inference: {e}")
             raise
-        except (Exception) as e:
-            get_logger().error("Unknown error during OpenAI inference: ", e)
+        except openai.APIError as e:
+            get_logger().warning(f"Error during LLM inference: {e}")
             raise
+        except Exception as e:
+            get_logger().warning(f"Unknown error during LLM inference: {e}")
+            raise openai.APIError from e
@@ -58,6 +58,9 @@ def filter_ignored(files, platform = 'github'):
             files = files_o
         elif platform == 'azure':
            files = [f for f in files if not r.match(f)]
+        elif platform == 'gitea':
+            files = [f for f in files if not r.match(f.get("filename", ""))]
+

     except Exception as e:
         print(f"Could not filter file list: {e}")
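The hunk above adds a Gitea branch to the ignore filter: Azure hands the function plain path strings, while Gitea file entries are dicts, hence `f.get("filename", "")`. A self-contained sketch of that per-platform dispatch (the regex and sample data are illustrative only):

```python
import re

def filter_ignored(files, platform, regex_pattern=r".*\.lock$"):
    # Drop files matching the ignore pattern; each platform shapes
    # its file entries differently (strings vs. dicts).
    r = re.compile(regex_pattern)
    if platform == 'azure':
        return [f for f in files if not r.match(f)]
    elif platform == 'gitea':
        return [f for f in files if not r.match(f.get("filename", ""))]
    return files
```

The `.get("filename", "")` default keeps the comprehension safe when a Gitea entry lacks the key, rather than raising mid-filter.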
@@ -1,4 +1,6 @@
 from threading import Lock
+from math import ceil
+import re

 from jinja2 import Environment, StrictUndefined
 from tiktoken import encoding_for_model, get_encoding
@@ -7,6 +9,16 @@ from pr_agent.config_loader import get_settings
 from pr_agent.log import get_logger


+class ModelTypeValidator:
+    @staticmethod
+    def is_openai_model(model_name: str) -> bool:
+        return 'gpt' in model_name or re.match(r"^o[1-9](-mini|-preview)?$", model_name)
+
+    @staticmethod
+    def is_anthropic_model(model_name: str) -> bool:
+        return 'claude' in model_name
+
+
 class TokenEncoder:
     _encoder_instance = None
     _model = None
@@ -40,6 +52,10 @@ class TokenHandler:
     method.
     """

+    # Constants
+    CLAUDE_MODEL = "claude-3-7-sonnet-20250219"
+    CLAUDE_MAX_CONTENT_SIZE = 9_000_000  # Maximum allowed content size (9MB) for Claude API
+
     def __init__(self, pr=None, vars: dict = {}, system="", user=""):
         """
         Initializes the TokenHandler object.
@@ -51,6 +67,7 @@ class TokenHandler:
         - user: The user string.
         """
         self.encoder = TokenEncoder.get_token_encoder()
+
         if pr is not None:
             self.prompt_tokens = self._get_system_user_tokens(pr, self.encoder, vars, system, user)

@@ -79,22 +96,22 @@ class TokenHandler:
             get_logger().error(f"Error in _get_system_user_tokens: {e}")
             return 0

-    def calc_claude_tokens(self, patch):
+    def _calc_claude_tokens(self, patch: str) -> int:
         try:
             import anthropic
             from pr_agent.algo import MAX_TOKENS

             client = anthropic.Anthropic(api_key=get_settings(use_context=False).get('anthropic.key'))
-            MaxTokens = MAX_TOKENS[get_settings().config.model]
+            max_tokens = MAX_TOKENS[get_settings().config.model]

-            # Check if the content size is too large (9MB limit)
-            if len(patch.encode('utf-8')) > 9_000_000:
+            if len(patch.encode('utf-8')) > self.CLAUDE_MAX_CONTENT_SIZE:
                 get_logger().warning(
                     "Content too large for Anthropic token counting API, falling back to local tokenizer"
                 )
-                return MaxTokens
+                return max_tokens

             response = client.messages.count_tokens(
-                model="claude-3-7-sonnet-20250219",
+                model=self.CLAUDE_MODEL,
                 system="system",
                 messages=[{
                     "role": "user",
@@ -104,42 +121,51 @@ class TokenHandler:
             return response.input_tokens

         except Exception as e:
-            get_logger().error( f"Error in Anthropic token counting: {e}")
-            return MaxTokens
+            get_logger().error(f"Error in Anthropic token counting: {e}")
+            return max_tokens

-    def estimate_token_count_for_non_anth_claude_models(self, model, default_encoder_estimate):
-        from math import ceil
-        import re
-
-        model_is_from_o_series = re.match(r"^o[1-9](-mini|-preview)?$", model)
-        if ('gpt' in get_settings().config.model.lower() or model_is_from_o_series) and get_settings(use_context=False).get('openai.key'):
-            return default_encoder_estimate
-        #else: Model is not an OpenAI one - therefore, cannot provide an accurate token count and instead, return a higher number as best effort.
-
-        elbow_factor = 1 + get_settings().get('config.model_token_count_estimate_factor', 0)
-        get_logger().warning(f"{model}'s expected token count cannot be accurately estimated. Using {elbow_factor} of encoder output as best effort estimate")
-        return ceil(elbow_factor * default_encoder_estimate)
+    def _apply_estimation_factor(self, model_name: str, default_estimate: int) -> int:
+        factor = 1 + get_settings().get('config.model_token_count_estimate_factor', 0)
+        get_logger().warning(f"{model_name}'s token count cannot be accurately estimated. Using factor of {factor}")
+        return ceil(factor * default_estimate)

-    def count_tokens(self, patch: str, force_accurate=False) -> int:
+    def _get_token_count_by_model_type(self, patch: str, default_estimate: int) -> int:
+        """
+        Get token count based on model type.
+
+        Args:
+            patch: The text to count tokens for.
+            default_estimate: The default token count estimate.
+
+        Returns:
+            int: The calculated token count.
+        """
+        model_name = get_settings().config.model.lower()
+
+        if ModelTypeValidator.is_openai_model(model_name) and get_settings(use_context=False).get('openai.key'):
+            return default_estimate
+
+        if ModelTypeValidator.is_anthropic_model(model_name) and get_settings(use_context=False).get('anthropic.key'):
+            return self._calc_claude_tokens(patch)
+
+        return self._apply_estimation_factor(model_name, default_estimate)
+
+    def count_tokens(self, patch: str, force_accurate: bool = False) -> int:
         """
         Counts the number of tokens in a given patch string.

         Args:
         - patch: The patch string.
+        - force_accurate: If True, uses a more precise calculation method.

         Returns:
         The number of tokens in the patch string.
         """
         encoder_estimate = len(self.encoder.encode(patch, disallowed_special=()))

-        #If an estimate is enough (for example, in cases where the maximal allowed tokens is way below the known limits), return it.
+        # If an estimate is enough (for example, in cases where the maximal allowed tokens is way below the known limits), return it.
         if not force_accurate:
             return encoder_estimate

-        #else, force_accurate==True: User requested providing an accurate estimation:
-        model = get_settings().config.model.lower()
-        if 'claude' in model and get_settings(use_context=False).get('anthropic.key'):
-            return self.calc_claude_tokens(patch)
-
-        #else: Non Anthropic provided model:
-        return self.estimate_token_count_for_non_anth_claude_models(model, encoder_estimate)
+        return self._get_token_count_by_model_type(patch, encoder_estimate)
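The `_apply_estimation_factor` path above is the fallback when neither an OpenAI nor an Anthropic tokenizer applies: the local encoder estimate is inflated by a configurable factor as a best-effort upper bound. A minimal sketch of that arithmetic (the default factor value here is illustrative, not pr-agent's configured default):

```python
from math import ceil

def apply_estimation_factor(default_estimate: int, estimate_factor: float = 0.5) -> int:
    # Inflate the local encoder estimate; ceil keeps the bound conservative
    # by always rounding the padded count up to a whole token.
    factor = 1 + estimate_factor
    return ceil(factor * default_estimate)
```

Over-estimating here is the safe direction: a too-high count may trim a prompt unnecessarily, while a too-low count risks exceeding the model's real context window.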
@ -945,12 +945,66 @@ def clip_tokens(text: str, max_tokens: int, add_three_dots=True, num_input_token
|
|||||||
"""
|
"""
|
||||||
Clip the number of tokens in a string to a maximum number of tokens.
|
Clip the number of tokens in a string to a maximum number of tokens.
|
||||||
|
|
||||||
|
This function limits text to a specified token count by calculating the approximate
|
||||||
|
character-to-token ratio and truncating the text accordingly. A safety factor of 0.9
|
||||||
|
(10% reduction) is applied to ensure the result stays within the token limit.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
text (str): The string to clip.
|
        text (str): The string to clip. If empty or None, returns the input unchanged.
        max_tokens (int): The maximum number of tokens allowed in the string.
            If negative, returns an empty string.
        add_three_dots (bool, optional): Whether to add "\n...(truncated)" at the end
            of the clipped text to indicate truncation.
            Defaults to True.
        num_input_tokens (int, optional): Pre-computed number of tokens in the input text.
            If provided, skips token encoding step for efficiency.
            If None, tokens will be counted using TokenEncoder.
            Defaults to None.
        delete_last_line (bool, optional): Whether to remove the last line from the
            clipped content before adding truncation indicator.
            Useful for ensuring clean breaks at line boundaries.
            Defaults to False.

    Returns:
        str: The clipped string. Returns original text if:
            - Text is empty/None
            - Token count is within limit
            - An error occurs during processing

            Returns empty string if max_tokens <= 0.

    Examples:
        Basic usage:
            >>> text = "This is a sample text that might be too long"
            >>> result = clip_tokens(text, max_tokens=10)
            >>> print(result)
            This is a sample...
            (truncated)

        Without truncation indicator:
            >>> result = clip_tokens(text, max_tokens=10, add_three_dots=False)
            >>> print(result)
            This is a sample

        With pre-computed token count:
            >>> result = clip_tokens(text, max_tokens=5, num_input_tokens=15)
            >>> print(result)
            This...
            (truncated)

        With line deletion:
            >>> multiline_text = "Line 1\nLine 2\nLine 3"
            >>> result = clip_tokens(multiline_text, max_tokens=3, delete_last_line=True)
            >>> print(result)
            Line 1
            Line 2
            ...
            (truncated)

    Notes:
        The function uses a safety factor of 0.9 (10% reduction) to ensure the
        result stays within the token limit, as character-to-token ratios can vary.
        If token encoding fails, the original text is returned with a warning logged.
    """
    if not text:
        return text
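The character-budget strategy the Notes section describes can be sketched in a few lines. This is an illustrative sketch only, not pr-agent's implementation: `clip_by_char_budget` is a hypothetical name, and real `clip_tokens` also handles `add_three_dots`, `delete_last_line`, and tokenizer errors.

```python
def clip_by_char_budget(text: str, max_tokens: int, num_input_tokens: int) -> str:
    # Hypothetical sketch: derive a character budget from the token budget
    # using the observed chars-per-token ratio, then apply the 0.9 safety
    # factor (10% reduction) described in the Notes above.
    if not text:
        return text
    if max_tokens <= 0:
        return ""
    if num_input_tokens <= max_tokens:
        return text  # already within the limit
    chars_per_token = len(text) / num_input_tokens
    num_output_chars = int(chars_per_token * max_tokens * 0.9)  # safety margin
    return text[:num_output_chars] + "\n...(truncated)"
```

The safety factor matters because the character-to-token ratio measured over the whole string need not hold for its prefix; shaving 10% off the budget makes overshooting the token limit unlikely.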
@ -81,3 +81,62 @@ def _find_pyproject() -> Optional[Path]:
 pyproject_path = _find_pyproject()
 if pyproject_path is not None:
     get_settings().load_file(pyproject_path, env=f'tool.{PR_AGENT_TOML_KEY}')
+
+
+def apply_secrets_manager_config():
+    """
+    Retrieve configuration from AWS Secrets Manager and override existing settings
+    """
+    try:
+        # Dynamic imports to avoid circular dependency (secret_providers imports config_loader)
+        from pr_agent.secret_providers import get_secret_provider
+        from pr_agent.log import get_logger
+
+        secret_provider = get_secret_provider()
+        if not secret_provider:
+            return
+
+        if (hasattr(secret_provider, 'get_all_secrets') and
+                get_settings().get("CONFIG.SECRET_PROVIDER") == 'aws_secrets_manager'):
+            try:
+                secrets = secret_provider.get_all_secrets()
+                if secrets:
+                    apply_secrets_to_config(secrets)
+                    get_logger().info("Applied AWS Secrets Manager configuration")
+            except Exception as e:
+                get_logger().error(f"Failed to apply AWS Secrets Manager config: {e}")
+    except Exception as e:
+        try:
+            from pr_agent.log import get_logger
+            get_logger().debug(f"Secret provider not configured: {e}")
+        except:
+            # Fail completely silently if log module is not available
+            pass
+
+
+def apply_secrets_to_config(secrets: dict):
+    """
+    Apply secret dictionary to configuration
+    """
+    try:
+        # Dynamic import to avoid potential circular dependency
+        from pr_agent.log import get_logger
+    except:
+        def get_logger():
+            class DummyLogger:
+                def debug(self, msg): pass
+            return DummyLogger()
+
+    for key, value in secrets.items():
+        if '.' in key:  # nested key like "openai.key"
+            parts = key.split('.')
+            if len(parts) == 2:
+                section, setting = parts
+                section_upper = section.upper()
+                setting_upper = setting.upper()
+
+                # Set only when no existing value (prioritize environment variables)
+                current_value = get_settings().get(f"{section_upper}.{setting_upper}")
+                if current_value is None or current_value == "":
+                    get_settings().set(f"{section_upper}.{setting_upper}", value)
+                    get_logger().debug(f"Set {section}.{setting} from AWS Secrets Manager")
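The nested-key convention in `apply_secrets_to_config` — a two-part secret name like `"openai.key"` becoming the settings path `"OPENAI.KEY"` — can be isolated as a small pure function. This helper is hypothetical, for illustration only; pr-agent does the mapping inline as shown above.

```python
def secret_key_to_setting_path(key: str):
    # Hypothetical helper mirroring the nested-key handling above:
    # "openai.key" -> "OPENAI.KEY". Flat keys and deeper nesting are
    # not mapped (returns None), matching the len(parts) == 2 check.
    parts = key.split('.')
    if len(parts) != 2:
        return None
    section, setting = parts
    return f"{section.upper()}.{setting.upper()}"
```

Keeping the mapping pure makes the override rule (secrets fill gaps, environment variables win) easy to unit-test separately from the settings store.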
@ -8,9 +8,11 @@ from pr_agent.git_providers.bitbucket_server_provider import \
 from pr_agent.git_providers.codecommit_provider import CodeCommitProvider
 from pr_agent.git_providers.gerrit_provider import GerritProvider
 from pr_agent.git_providers.git_provider import GitProvider
+from pr_agent.git_providers.gitea_provider import GiteaProvider
 from pr_agent.git_providers.github_provider import GithubProvider
 from pr_agent.git_providers.gitlab_provider import GitLabProvider
 from pr_agent.git_providers.local_git_provider import LocalGitProvider
+from pr_agent.git_providers.gitea_provider import GiteaProvider

 _GIT_PROVIDERS = {
     'github': GithubProvider,
@ -21,6 +23,7 @@ _GIT_PROVIDERS = {
     'codecommit': CodeCommitProvider,
     'local': LocalGitProvider,
     'gerrit': GerritProvider,
+    'gitea': GiteaProvider
 }
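The `_GIT_PROVIDERS` dictionary above is a registry: looking up a provider name yields the class to instantiate. A minimal sketch of that dictionary-dispatch pattern follows; the classes and the `get_provider_class` helper here are illustrative stand-ins, not pr-agent's actual factory code.

```python
class GithubProvider:
    pass


class GiteaProvider:
    pass


# Registry mapping a configured provider name to its implementation class.
_GIT_PROVIDERS = {
    'github': GithubProvider,
    'gitea': GiteaProvider,
}


def get_provider_class(name: str):
    # Dictionary dispatch: adding a provider is a one-line registry entry,
    # no if/elif chain to extend.
    try:
        return _GIT_PROVIDERS[name]
    except KeyError:
        raise ValueError(f"Unknown git provider: {name}")
```

This is why the PR's change to support Gitea only needs an import plus one registry entry.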
@ -618,7 +618,7 @@ class AzureDevopsProvider(GitProvider):
             return pr_id
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                get_logger().info(f"Failed to get pr id, error: {e}")
+                get_logger().info(f"Failed to get PR id, error: {e}")
             return ""

     def publish_file_comments(self, file_comments: list) -> bool:
pr_agent/git_providers/gitea_provider.py (new file, 992 lines)
@ -0,0 +1,992 @@
import hashlib
import json
from typing import Any, Dict, List, Optional, Set, Tuple
from urllib.parse import urlparse

import giteapy
from giteapy.rest import ApiException

from pr_agent.algo.file_filter import filter_ignored
from pr_agent.algo.language_handler import is_valid_file
from pr_agent.algo.types import EDIT_TYPE
from pr_agent.algo.utils import (clip_tokens,
                                 find_line_number_of_relevant_line_in_file)
from pr_agent.config_loader import get_settings
from pr_agent.git_providers.git_provider import (MAX_FILES_ALLOWED_FULL,
                                                 FilePatchInfo, GitProvider,
                                                 IncrementalPR)
from pr_agent.log import get_logger


class GiteaProvider(GitProvider):
    def __init__(self, url: Optional[str] = None):
        super().__init__()
        self.logger = get_logger()

        if not url:
            self.logger.error("PR URL not provided.")
            raise ValueError("PR URL not provided.")

        self.base_url = get_settings().get("GITEA.URL", "https://gitea.com").rstrip("/")
        self.pr_url = ""
        self.issue_url = ""

        gitea_access_token = get_settings().get("GITEA.PERSONAL_ACCESS_TOKEN", None)
        if not gitea_access_token:
            self.logger.error("Gitea access token not found in settings.")
            raise ValueError("Gitea access token not found in settings.")

        self.repo_settings = get_settings().get("GITEA.REPO_SETTING", None)
        configuration = giteapy.Configuration()
        configuration.host = "{}/api/v1".format(self.base_url)
        configuration.api_key['Authorization'] = f'token {gitea_access_token}'

        client = giteapy.ApiClient(configuration)
        self.repo_api = RepoApi(client)
        self.owner = None
        self.repo = None
        self.pr_number = None
        self.issue_number = None
        self.max_comment_chars = 65000
        self.enabled_pr = False
        self.enabled_issue = False
        self.temp_comments = []
        self.pr = None
        self.git_files = []
        self.file_contents = {}
        self.file_diffs = {}
        self.sha = None
        self.diff_files = []
        self.incremental = IncrementalPR(False)
        self.comments_list = []
        self.unreviewed_files_set = dict()

        if "pulls" in url:
            self.pr_url = url
            self.__set_repo_and_owner_from_pr()
            self.enabled_pr = True
            self.pr = self.repo_api.get_pull_request(
                owner=self.owner,
                repo=self.repo,
                pr_number=self.pr_number
            )
            self.git_files = self.repo_api.get_change_file_pull_request(
                owner=self.owner,
                repo=self.repo,
                pr_number=self.pr_number
            )
            # Optional ignore with user custom
            self.git_files = filter_ignored(self.git_files, platform="gitea")

            self.sha = self.pr.head.sha if self.pr.head.sha else ""
            self.__add_file_content()
            self.__add_file_diff()
            self.pr_commits = self.repo_api.list_all_commits(
                owner=self.owner,
                repo=self.repo
            )
            self.last_commit = self.pr_commits[-1]
            self.base_sha = self.pr.base.sha if self.pr.base.sha else ""
            self.base_ref = self.pr.base.ref if self.pr.base.ref else ""
        elif "issues" in url:
            self.issue_url = url
            self.__set_repo_and_owner_from_issue()
            self.enabled_issue = True
        else:
            self.pr_commits = None
    def __add_file_content(self):
        for file in self.git_files:
            file_path = file.get("filename")
            # Ignore file from default settings
            if not is_valid_file(file_path):
                continue

            if file_path and self.sha:
                try:
                    content = self.repo_api.get_file_content(
                        owner=self.owner,
                        repo=self.repo,
                        commit_sha=self.sha,
                        filepath=file_path
                    )
                    self.file_contents[file_path] = content
                except ApiException as e:
                    self.logger.error(f"Error getting file content for {file_path}: {str(e)}")
                    self.file_contents[file_path] = ""

    def __add_file_diff(self):
        try:
            diff_contents = self.repo_api.get_pull_request_diff(
                owner=self.owner,
                repo=self.repo,
                pr_number=self.pr_number
            )

            lines = diff_contents.splitlines()
            current_file = None
            current_patch = []
            file_patches = {}
            for line in lines:
                if line.startswith('diff --git'):
                    if current_file and current_patch:
                        file_patches[current_file] = '\n'.join(current_patch)
                        current_patch = []
                    current_file = line.split(' b/')[-1]
                elif line.startswith('@@'):
                    current_patch = [line]
                elif current_patch:
                    current_patch.append(line)

            if current_file and current_patch:
                file_patches[current_file] = '\n'.join(current_patch)

            self.file_diffs = file_patches
        except Exception as e:
            self.logger.error(f"Error getting diff content: {str(e)}")
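The parsing loop in `__add_file_diff` can be sketched as a standalone function: group hunk lines under the file named on each `diff --git a/... b/...` header, starting a patch only once an `@@` hunk header is seen (so `index`/`---`/`+++` header lines are dropped). The function name here is illustrative; the logic mirrors the method above.

```python
def split_diff_by_file(diff_text: str) -> dict:
    # Standalone sketch of the loop in __add_file_diff above.
    file_patches = {}
    current_file = None
    current_patch = []
    for line in diff_text.splitlines():
        if line.startswith('diff --git'):
            # Flush the previous file's hunks before switching files.
            if current_file and current_patch:
                file_patches[current_file] = '\n'.join(current_patch)
            current_patch = []
            current_file = line.split(' b/')[-1]
        elif line.startswith('@@'):
            current_patch = [line]          # a hunk starts here
        elif current_patch:
            current_patch.append(line)      # body lines of the current hunk
    if current_file and current_patch:
        file_patches[current_file] = '\n'.join(current_patch)
    return file_patches
```

Note that lines between `diff --git` and the first `@@` are skipped because `current_patch` is empty at that point.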
    def _parse_pr_url(self, pr_url: str) -> Tuple[str, str, int]:
        parsed_url = urlparse(pr_url)

        if parsed_url.path.startswith('/api/v1'):
            parsed_url = urlparse(pr_url.replace("/api/v1", ""))

        path_parts = parsed_url.path.strip('/').split('/')
        if len(path_parts) < 4 or path_parts[2] != 'pulls':
            raise ValueError("The provided URL does not appear to be a Gitea PR URL")

        try:
            pr_number = int(path_parts[3])
        except ValueError as e:
            raise ValueError("Unable to convert PR number to integer") from e

        owner = path_parts[0]
        repo = path_parts[1]

        return owner, repo, pr_number

    def _parse_issue_url(self, issue_url: str) -> Tuple[str, str, int]:
        parsed_url = urlparse(issue_url)

        if parsed_url.path.startswith('/api/v1'):
            parsed_url = urlparse(issue_url.replace("/api/v1", ""))

        path_parts = parsed_url.path.strip('/').split('/')
        if len(path_parts) < 4 or path_parts[2] != 'issues':
            raise ValueError("The provided URL does not appear to be a Gitea issue URL")

        try:
            issue_number = int(path_parts[3])
        except ValueError as e:
            raise ValueError("Unable to convert issue number to integer") from e

        owner = path_parts[0]
        repo = path_parts[1]

        return owner, repo, issue_number
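The URL-parsing methods above boil down to: strip any `/api/v1` prefix, split the path, and expect `owner/repo/pulls/<n>` (or `issues/<n>`). A self-contained sketch, with an illustrative function name:

```python
from urllib.parse import urlparse


def parse_gitea_pr_url(pr_url: str):
    # Sketch of _parse_pr_url above: extract (owner, repo, number) from a
    # Gitea PR URL such as https://gitea.com/owner/repo/pulls/42,
    # tolerating API-style URLs that embed /api/v1.
    parsed = urlparse(pr_url.replace("/api/v1", ""))
    path_parts = parsed.path.strip('/').split('/')
    if len(path_parts) < 4 or path_parts[2] != 'pulls':
        raise ValueError("The provided URL does not appear to be a Gitea PR URL")
    return path_parts[0], path_parts[1], int(path_parts[3])
```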
    def __set_repo_and_owner_from_pr(self):
        """Extract owner and repo from the PR URL"""
        try:
            owner, repo, pr_number = self._parse_pr_url(self.pr_url)
            self.owner = owner
            self.repo = repo
            self.pr_number = pr_number
            self.logger.info(f"Owner: {self.owner}, Repo: {self.repo}, PR Number: {self.pr_number}")
        except ValueError as e:
            self.logger.error(f"Error parsing PR URL: {str(e)}")
        except Exception as e:
            self.logger.error(f"Unexpected error: {str(e)}")

    def __set_repo_and_owner_from_issue(self):
        """Extract owner and repo from the issue URL"""
        try:
            owner, repo, issue_number = self._parse_issue_url(self.issue_url)
            self.owner = owner
            self.repo = repo
            self.issue_number = issue_number
            self.logger.info(f"Owner: {self.owner}, Repo: {self.repo}, Issue Number: {self.issue_number}")
        except ValueError as e:
            self.logger.error(f"Error parsing issue URL: {str(e)}")
        except Exception as e:
            self.logger.error(f"Unexpected error: {str(e)}")

    def get_pr_url(self) -> str:
        return self.pr_url

    def get_issue_url(self) -> str:
        return self.issue_url

    def publish_comment(self, comment: str, is_temporary: bool = False) -> None:
        """Publish a comment to the pull request"""
        if is_temporary and not get_settings().config.publish_output_progress:
            get_logger().debug("Skipping publish_comment for temporary comment")
            return None

        if self.enabled_issue:
            index = self.issue_number
        elif self.enabled_pr:
            index = self.pr_number
        else:
            self.logger.error("Neither PR nor issue URL provided.")
            return None

        comment = self.limit_output_characters(comment, self.max_comment_chars)
        response = self.repo_api.create_comment(
            owner=self.owner,
            repo=self.repo,
            index=index,
            comment=comment
        )

        if not response:
            self.logger.error("Failed to publish comment")
            return None

        if is_temporary:
            self.temp_comments.append(comment)

        comment_obj = {
            "is_temporary": is_temporary,
            "comment": comment,
            "comment_id": response[0].id if isinstance(response, tuple) else response.id
        }
        self.comments_list.append(comment_obj)
        self.logger.info("Comment published")
        return comment_obj

    def edit_comment(self, comment, body: str):
        body = self.limit_output_characters(body, self.max_comment_chars)
        try:
            self.repo_api.edit_comment(
                owner=self.owner,
                repo=self.repo,
                comment_id=comment.get("comment_id") if isinstance(comment, dict) else comment.id,
                comment=body
            )
        except ApiException as e:
            self.logger.error(f"Error editing comment: {e}")
            return None
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return None
    def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str, original_suggestion=None):
        """Publish an inline comment on a specific line"""
        body = self.limit_output_characters(body, self.max_comment_chars)
        position, absolute_position = find_line_number_of_relevant_line_in_file(self.diff_files,
                                                                                relevant_file.strip('`'),
                                                                                relevant_line_in_file)
        if position == -1:
            get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
            subject_type = "FILE"
        else:
            subject_type = "LINE"

        path = relevant_file.strip()
        payload = dict(body=body, path=path, old_position=position, new_position=absolute_position) if subject_type == "LINE" else {}
        self.publish_inline_comments([payload])

    def publish_inline_comments(self, comments: List[Dict[str, Any]], body: str = "Inline comment") -> None:
        response = self.repo_api.create_inline_comment(
            owner=self.owner,
            repo=self.repo,
            pr_number=self.pr_number if self.enabled_pr else self.issue_number,
            body=body,
            commit_id=self.last_commit.sha if self.last_commit else "",
            comments=comments
        )

        if not response:
            self.logger.error("Failed to publish inline comment")
            return None

        self.logger.info("Inline comment published")

    def publish_code_suggestions(self, suggestions: List[Dict[str, Any]]):
        """Publish code suggestions"""
        for suggestion in suggestions:
            body = suggestion.get("body", "")
            if not body:
                self.logger.error("No body provided for the suggestion")
                continue

            path = suggestion.get("relevant_file", "")
            new_position = suggestion.get("relevant_lines_start", 0)
            old_position = suggestion.get("relevant_lines_start", 0) if "original_suggestion" not in suggestion else suggestion["original_suggestion"].get("relevant_lines_start", 0)
            title_body = suggestion["original_suggestion"].get("suggestion_content", "") if "original_suggestion" in suggestion else ""
            payload = dict(body=body, path=path, old_position=old_position, new_position=new_position)
            if title_body:
                title_body = f"**Suggestion:** {title_body}"
                self.publish_inline_comments([payload], title_body)
            else:
                self.publish_inline_comments([payload])
    def add_eyes_reaction(self, issue_comment_id: int, disable_eyes: bool = False) -> Optional[int]:
        """Add eyes reaction to a comment"""
        try:
            if disable_eyes:
                return None

            comments = self.repo_api.list_all_comments(
                owner=self.owner,
                repo=self.repo,
                index=self.pr_number if self.enabled_pr else self.issue_number
            )

            comment_ids = [comment.id for comment in comments]
            if issue_comment_id not in comment_ids:
                self.logger.error(f"Comment ID {issue_comment_id} not found. Available IDs: {comment_ids}")
                return None

            response = self.repo_api.add_reaction_comment(
                owner=self.owner,
                repo=self.repo,
                comment_id=issue_comment_id,
                reaction="eyes"
            )

            if not response:
                self.logger.error("Failed to add eyes reaction")
                return None

            return response[0].id if isinstance(response, tuple) else response.id

        except ApiException as e:
            self.logger.error(f"Error adding eyes reaction: {e}")
            return None
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return None

    def remove_reaction(self, comment_id: int) -> None:
        """Remove reaction from a comment"""
        try:
            response = self.repo_api.remove_reaction_comment(
                owner=self.owner,
                repo=self.repo,
                comment_id=comment_id
            )
            if not response:
                self.logger.error("Failed to remove reaction")
        except ApiException as e:
            self.logger.error(f"Error removing reaction: {e}")
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
    def get_commit_messages(self) -> str:
        """Get commit messages for the PR"""
        max_tokens = get_settings().get("CONFIG.MAX_COMMITS_TOKENS", None)
        pr_commits = self.repo_api.get_pr_commits(
            owner=self.owner,
            repo=self.repo,
            pr_number=self.pr_number
        )

        if not pr_commits:
            self.logger.error("Failed to get commit messages")
            return ""

        try:
            commit_messages = [commit["commit"]["message"] for commit in pr_commits if commit]

            if not commit_messages:
                self.logger.error("No commit messages found")
                return ""

            commit_message = "".join(commit_messages)
            if max_tokens:
                commit_message = clip_tokens(commit_message, max_tokens)

            return commit_message
        except Exception as e:
            self.logger.error(f"Error processing commit messages: {str(e)}")
            return ""

    def _get_file_content_from_base(self, filename: str) -> str:
        return self.repo_api.get_file_content(
            owner=self.owner,
            repo=self.repo,
            commit_sha=self.base_sha,
            filepath=filename
        )

    def _get_file_content_from_latest_commit(self, filename: str) -> str:
        return self.repo_api.get_file_content(
            owner=self.owner,
            repo=self.repo,
            commit_sha=self.last_commit.sha,
            filepath=filename
        )
    def get_diff_files(self) -> List[FilePatchInfo]:
        """Get files that were modified in the PR"""
        if self.diff_files:
            return self.diff_files

        invalid_files_names = []
        counter_valid = 0
        diff_files = []
        for file in self.git_files:
            filename = file.get("filename")
            if not filename:
                continue

            if not is_valid_file(filename):
                invalid_files_names.append(filename)
                continue

            counter_valid += 1
            avoid_load = False
            patch = self.file_diffs.get(filename, "")
            head_file = ""
            base_file = ""

            if counter_valid >= MAX_FILES_ALLOWED_FULL and patch and not self.incremental.is_incremental:
                avoid_load = True
                if counter_valid == MAX_FILES_ALLOWED_FULL:
                    self.logger.info("Too many files in PR, will avoid loading full content for rest of files")

            if avoid_load:
                head_file = ""
            else:
                # Get file content from this pr
                head_file = self.file_contents.get(filename, "")

            if self.incremental.is_incremental and self.unreviewed_files_set:
                base_file = self._get_file_content_from_latest_commit(filename)
                self.unreviewed_files_set[filename] = patch
            else:
                if avoid_load:
                    base_file = ""
                else:
                    base_file = self._get_file_content_from_base(filename)

            num_plus_lines = file.get("additions", 0)
            num_minus_lines = file.get("deletions", 0)
            status = file.get("status", "")

            if status == 'added':
                edit_type = EDIT_TYPE.ADDED
            elif status == 'removed' or status == 'deleted':
                edit_type = EDIT_TYPE.DELETED
            elif status == 'renamed':
                edit_type = EDIT_TYPE.RENAMED
            elif status == 'modified' or status == 'changed':
                edit_type = EDIT_TYPE.MODIFIED
            else:
                self.logger.error(f"Unknown edit type: {status}")
                edit_type = EDIT_TYPE.UNKNOWN

            file_patch_info = FilePatchInfo(
                base_file=base_file,
                head_file=head_file,
                patch=patch,
                filename=filename,
                num_minus_lines=num_minus_lines,
                num_plus_lines=num_plus_lines,
                edit_type=edit_type
            )
            diff_files.append(file_patch_info)

        if invalid_files_names:
            self.logger.info(f"Filtered out files with invalid extensions: {invalid_files_names}")

        self.diff_files = diff_files
        return diff_files
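The `status` → `EDIT_TYPE` if/elif chain in `get_diff_files` is a fixed mapping, so it can also be written table-driven. A sketch with an illustrative `EditType` enum standing in for `pr_agent.algo.types.EDIT_TYPE`:

```python
from enum import Enum


class EditType(Enum):
    # Illustrative stand-in for pr_agent.algo.types.EDIT_TYPE
    ADDED = 1
    DELETED = 2
    RENAMED = 3
    MODIFIED = 4
    UNKNOWN = 5


def status_to_edit_type(status: str) -> EditType:
    # Table-driven equivalent of the if/elif chain above; Gitea reports
    # both 'removed'/'deleted' and 'modified'/'changed' variants.
    mapping = {
        'added': EditType.ADDED,
        'removed': EditType.DELETED,
        'deleted': EditType.DELETED,
        'renamed': EditType.RENAMED,
        'modified': EditType.MODIFIED,
        'changed': EditType.MODIFIED,
    }
    return mapping.get(status, EditType.UNKNOWN)
```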
    def get_line_link(self, relevant_file, relevant_line_start, relevant_line_end=None) -> str:
        if relevant_line_start == -1:
            link = f"{self.base_url}/{self.owner}/{self.repo}/src/branch/{self.get_pr_branch()}/{relevant_file}"
        elif relevant_line_end:
            link = f"{self.base_url}/{self.owner}/{self.repo}/src/branch/{self.get_pr_branch()}/{relevant_file}#L{relevant_line_start}-L{relevant_line_end}"
        else:
            link = f"{self.base_url}/{self.owner}/{self.repo}/src/branch/{self.get_pr_branch()}/{relevant_file}#L{relevant_line_start}"

        self.logger.info(f"Generated link: {link}")
        return link
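The permalink format built in `get_line_link` (Gitea's `…/src/branch/<branch>/<path>#L<start>-L<end>` scheme) can be factored into a pure helper. The function and parameter names below are illustrative, not pr-agent's API:

```python
def gitea_line_link(base_url: str, owner: str, repo: str, branch: str,
                    path: str, line_start: int, line_end: int = None) -> str:
    # Sketch of the link format used in get_line_link above:
    # -1 means "link to the file", otherwise anchor a line or a line range.
    link = f"{base_url}/{owner}/{repo}/src/branch/{branch}/{path}"
    if line_start == -1:
        return link
    if line_end:
        return f"{link}#L{line_start}-L{line_end}"
    return f"{link}#L{line_start}"
```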
    def get_files(self) -> List[Dict[str, Any]]:
        """Get all files in the PR"""
        return [file.get("filename", "") for file in self.git_files]

    def get_num_of_files(self) -> int:
        """Get number of files changed in the PR"""
        return len(self.git_files)

    def get_issue_comments(self) -> List[Dict[str, Any]]:
        """Get all comments in the PR"""
        index = self.issue_number if self.enabled_issue else self.pr_number
        comments = self.repo_api.list_all_comments(
            owner=self.owner,
            repo=self.repo,
            index=index
        )
        if not comments:
            self.logger.error("Failed to get comments")
            return []

        return comments

    def get_languages(self) -> Set[str]:
        """Get programming languages used in the repository"""
        languages = self.repo_api.get_languages(
            owner=self.owner,
            repo=self.repo
        )

        return languages

    def get_pr_branch(self) -> str:
        """Get the branch name of the PR"""
        if not self.pr:
            self.logger.error("Failed to get PR branch")
            return ""

        if not self.pr.head:
            self.logger.error("PR head not found")
            return ""

        return self.pr.head.ref if self.pr.head.ref else ""

    def get_pr_description_full(self) -> str:
        """Get full PR description with metadata"""
        if not self.pr:
            self.logger.error("Failed to get PR description")
            return ""

        return self.pr.body if self.pr.body else ""

    def get_pr_labels(self, update=False) -> List[str]:
        """Get labels assigned to the PR"""
        if not update:
            if not self.pr.labels:
                self.logger.error("Failed to get PR labels")
                return []
            return [label.name for label in self.pr.labels]

        labels = self.repo_api.get_issue_labels(
            owner=self.owner,
            repo=self.repo,
            issue_number=self.pr_number
        )
        if not labels:
            self.logger.error("Failed to get PR labels")
            return []

        return [label.name for label in labels]

    def get_repo_settings(self) -> str:
        """Get repository settings"""
        if not self.repo_settings:
            self.logger.error("Repository settings not found")
            return ""

        response = self.repo_api.get_file_content(
            owner=self.owner,
            repo=self.repo,
            commit_sha=self.sha,
            filepath=self.repo_settings
        )
        if not response:
            self.logger.error("Failed to get repository settings")
            return ""

        return response

    def get_user_id(self) -> str:
        """Get the ID of the authenticated user"""
        return f"{self.pr.user.id}" if self.pr else ""

    def is_supported(self, capability) -> bool:
        """Check if the provider is supported"""
        return True

    def publish_description(self, pr_title: str, pr_body: str) -> None:
        """Publish PR description"""
        response = self.repo_api.edit_pull_request(
            owner=self.owner,
            repo=self.repo,
            pr_number=self.pr_number if self.enabled_pr else self.issue_number,
            title=pr_title,
            body=pr_body
        )

        if not response:
            self.logger.error("Failed to publish PR description")
            return None

        self.logger.info("PR description published successfully")
        if self.enabled_pr:
            self.pr = self.repo_api.get_pull_request(
                owner=self.owner,
                repo=self.repo,
                pr_number=self.pr_number
            )

    def publish_labels(self, labels: List[int]) -> None:
        """Publish labels to the PR"""
        if not labels:
            self.logger.error("No labels provided to publish")
            return None

        response = self.repo_api.add_labels(
            owner=self.owner,
            repo=self.repo,
            issue_number=self.pr_number if self.enabled_pr else self.issue_number,
            labels=labels
        )

        if response:
            self.logger.info("Labels added successfully")

    def remove_comment(self, comment) -> None:
        """Remove a specific comment"""
        if not comment:
            return

        try:
            comment_id = comment.get("comment_id") if isinstance(comment, dict) else comment.id
            if not comment_id:
                self.logger.error("Comment ID not found")
                return None
            self.repo_api.remove_comment(
                owner=self.owner,
                repo=self.repo,
                comment_id=comment_id
            )

            if self.comments_list and comment in self.comments_list:
                self.comments_list.remove(comment)

            self.logger.info(f"Comment removed successfully: {comment}")
        except ApiException as e:
            self.logger.error(f"Error removing comment: {e}")
            raise e

    def remove_initial_comment(self) -> None:
        """Remove the initial comment"""
        for comment in self.comments_list:
            try:
                if not comment.get("is_temporary"):
                    continue
                self.remove_comment(comment)
            except Exception as e:
                self.logger.error(f"Error removing comment: {e}")
                continue
            self.logger.info(f"Removed initial comment: {comment.get('comment_id')}")
class RepoApi(giteapy.RepositoryApi):
    def __init__(self, client: giteapy.ApiClient):
        self.repository = giteapy.RepositoryApi(client)
        self.issue = giteapy.IssueApi(client)
        self.logger = get_logger()
        super().__init__(client)

    def create_inline_comment(self, owner: str, repo: str, pr_number: int, body: str, commit_id: str, comments: List[Dict[str, Any]]) -> None:
        body = {
            "body": body,
            "comments": comments,
            "commit_id": commit_id,
        }
        return self.api_client.call_api(
            '/repos/{owner}/{repo}/pulls/{pr_number}/reviews',
            'POST',
            path_params={'owner': owner, 'repo': repo, 'pr_number': pr_number},
            body=body,
            response_type='Repository',
            auth_settings=['AuthorizationHeaderToken']
        )

    def create_comment(self, owner: str, repo: str, index: int, comment: str):
        body = {
            "body": comment
        }
        return self.issue.issue_create_comment(
            owner=owner,
            repo=repo,
            index=index,
            body=body
        )

    def edit_comment(self, owner: str, repo: str, comment_id: int, comment: str):
        body = {
            "body": comment
        }
        return self.issue.issue_edit_comment(
            owner=owner,
            repo=repo,
            id=comment_id,
            body=body
        )

    def remove_comment(self, owner: str, repo: str, comment_id: int):
        return self.issue.issue_delete_comment(
            owner=owner,
            repo=repo,
            id=comment_id
        )

    def list_all_comments(self, owner: str, repo: str, index: int):
        return self.issue.issue_get_comments(
            owner=owner,
            repo=repo,
            index=index
        )

    def get_pull_request_diff(self, owner: str, repo: str, pr_number: int) -> str:
        """Get the diff content of a pull request using a direct API call"""
        try:
            token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
            url = f'/repos/{owner}/{repo}/pulls/{pr_number}.diff'
            if token:
                url = f'{url}?token={token}'

            response = self.api_client.call_api(
                url,
                'GET',
                path_params={},
                response_type=None,
                _return_http_data_only=False,
                _preload_content=False
            )

            if hasattr(response, 'data'):
                raw_data = response.data.read()
                return raw_data.decode('utf-8')
            elif isinstance(response, tuple):
                raw_data = response[0].read()
                return raw_data.decode('utf-8')
            else:
                error_msg = f"Unexpected response format received from API: {type(response)}"
                self.logger.error(error_msg)
                raise RuntimeError(error_msg)

        except ApiException as e:
            self.logger.error(f"Error getting diff: {str(e)}")
            raise e
        except Exception as e:
            self.logger.error(f"Unexpected error: {str(e)}")
            raise e

    def get_pull_request(self, owner: str, repo: str, pr_number: int):
        """Get pull request details, including the description"""
        return self.repository.repo_get_pull_request(
            owner=owner,
            repo=repo,
            index=pr_number
        )

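The raw-endpoint helpers in this class all share the same URL-building step: strip the `token ` prefix from the configured `Authorization` API key and re-append the bare token as a query parameter. A minimal standalone sketch of that step (the function name is illustrative, not part of giteapy):

```python
def build_diff_url(api_key: dict, owner: str, repo: str, pr_number: int) -> str:
    # Mirrors the pattern used by get_pull_request_diff():
    # the configured api_key holds "token <PAT>", which is stripped
    # and re-appended as a ?token= query parameter.
    token = api_key.get('Authorization', '').replace('token ', '')
    url = f'/repos/{owner}/{repo}/pulls/{pr_number}.diff'
    if token:
        url = f'{url}?token={token}'
    return url

print(build_diff_url({'Authorization': 'token abc123'}, 'qodo-ai', 'pr-agent', 42))
# -> /repos/qodo-ai/pr-agent/pulls/42.diff?token=abc123
```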
    def edit_pull_request(self, owner: str, repo: str, pr_number: int, title: str, body: str):
        """Edit the pull request title and description"""
        body = {
            "body": body,
            "title": title
        }
        return self.repository.repo_edit_pull_request(
            owner=owner,
            repo=repo,
            index=pr_number,
            body=body
        )

    def get_change_file_pull_request(self, owner: str, repo: str, pr_number: int):
        """Get the changed files in the pull request"""
        try:
            token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
            url = f'/repos/{owner}/{repo}/pulls/{pr_number}/files'
            if token:
                url = f'{url}?token={token}'

            response = self.api_client.call_api(
                url,
                'GET',
                path_params={},
                response_type=None,
                _return_http_data_only=False,
                _preload_content=False
            )

            if hasattr(response, 'data'):
                raw_data = response.data.read()
                diff_content = raw_data.decode('utf-8')
                return json.loads(diff_content) if isinstance(diff_content, str) else diff_content
            elif isinstance(response, tuple):
                raw_data = response[0].read()
                diff_content = raw_data.decode('utf-8')
                return json.loads(diff_content) if isinstance(diff_content, str) else diff_content

            return []

        except ApiException as e:
            self.logger.error(f"Error getting changed files: {e}")
            return []
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return []

    def get_languages(self, owner: str, repo: str):
        """Get programming languages used in the repository"""
        try:
            token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
            url = f'/repos/{owner}/{repo}/languages'
            if token:
                url = f'{url}?token={token}'

            response = self.api_client.call_api(
                url,
                'GET',
                path_params={},
                response_type=None,
                _return_http_data_only=False,
                _preload_content=False
            )

            if hasattr(response, 'data'):
                raw_data = response.data.read()
                return json.loads(raw_data.decode('utf-8'))
            elif isinstance(response, tuple):
                raw_data = response[0].read()
                return json.loads(raw_data.decode('utf-8'))

            return {}

        except ApiException as e:
            self.logger.error(f"Error getting languages: {e}")
            return {}
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return {}

    def get_file_content(self, owner: str, repo: str, commit_sha: str, filepath: str) -> str:
        """Get raw file content from a specific commit"""
        try:
            token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
            url = f'/repos/{owner}/{repo}/raw/{filepath}'
            if token:
                url = f'{url}?token={token}&ref={commit_sha}'

            response = self.api_client.call_api(
                url,
                'GET',
                path_params={},
                response_type=None,
                _return_http_data_only=False,
                _preload_content=False
            )

            if hasattr(response, 'data'):
                raw_data = response.data.read()
                return raw_data.decode('utf-8')
            elif isinstance(response, tuple):
                raw_data = response[0].read()
                return raw_data.decode('utf-8')

            return ""

        except ApiException as e:
            self.logger.error(f"Error getting file: {filepath}, content: {e}")
            return ""
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return ""

    def get_issue_labels(self, owner: str, repo: str, issue_number: int):
        """Get labels assigned to the issue"""
        return self.issue.issue_get_labels(
            owner=owner,
            repo=repo,
            index=issue_number
        )

    def list_all_commits(self, owner: str, repo: str):
        return self.repository.repo_get_all_commits(
            owner=owner,
            repo=repo
        )

    def add_reviewer(self, owner: str, repo: str, pr_number: int, reviewers: List[str]):
        body = {
            "reviewers": reviewers
        }
        return self.api_client.call_api(
            '/repos/{owner}/{repo}/pulls/{pr_number}/requested_reviewers',
            'POST',
            path_params={'owner': owner, 'repo': repo, 'pr_number': pr_number},
            body=body,
            response_type='Repository',
            auth_settings=['AuthorizationHeaderToken']
        )

    def add_reaction_comment(self, owner: str, repo: str, comment_id: int, reaction: str):
        body = {
            "content": reaction
        }
        return self.api_client.call_api(
            '/repos/{owner}/{repo}/issues/comments/{id}/reactions',
            'POST',
            path_params={'owner': owner, 'repo': repo, 'id': comment_id},
            body=body,
            response_type='Repository',
            auth_settings=['AuthorizationHeaderToken']
        )

    def remove_reaction_comment(self, owner: str, repo: str, comment_id: int):
        return self.api_client.call_api(
            '/repos/{owner}/{repo}/issues/comments/{id}/reactions',
            'DELETE',
            path_params={'owner': owner, 'repo': repo, 'id': comment_id},
            response_type='Repository',
            auth_settings=['AuthorizationHeaderToken']
        )

    def add_labels(self, owner: str, repo: str, issue_number: int, labels: List[int]):
        body = {
            "labels": labels
        }
        return self.issue.issue_add_label(
            owner=owner,
            repo=repo,
            index=issue_number,
            body=body
        )

    def get_pr_commits(self, owner: str, repo: str, pr_number: int):
        """Get all commits in a pull request"""
        try:
            token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
            url = f'/repos/{owner}/{repo}/pulls/{pr_number}/commits'
            if token:
                url = f'{url}?token={token}'

            response = self.api_client.call_api(
                url,
                'GET',
                path_params={},
                response_type=None,
                _return_http_data_only=False,
                _preload_content=False
            )

            if hasattr(response, 'data'):
                raw_data = response.data.read()
                commits_data = json.loads(raw_data.decode('utf-8'))
                return commits_data
            elif isinstance(response, tuple):
                raw_data = response[0].read()
                commits_data = json.loads(raw_data.decode('utf-8'))
                return commits_data

            return []

        except ApiException as e:
            self.logger.error(f"Error getting PR commits: {e}")
            return []
        except Exception as e:
            self.logger.error(f"Unexpected error: {e}")
            return []
@@ -96,7 +96,7 @@ class GithubProvider(GitProvider):
             parsed_url = urlparse(given_url)
             repo_path = (parsed_url.path.split('.git')[0])[1:]  # /<owner>/<repo>.git -> <owner>/<repo>
             if not repo_path:
-                get_logger().error(f"url is neither an issues url nor a pr url nor a valid git url: {given_url}. Returning empty result.")
+                get_logger().error(f"url is neither an issues url nor a PR url nor a valid git url: {given_url}. Returning empty result.")
                 return ""
             return repo_path
         except Exception as e:
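The repo-path expression in the hunk above is compact enough to deserve a standalone check; a sketch (function name is illustrative, not from the codebase):

```python
from urllib.parse import urlparse


def extract_repo_path(given_url: str) -> str:
    # Same expression as in the hunk above:
    # /<owner>/<repo>.git -> <owner>/<repo>
    parsed_url = urlparse(given_url)
    return (parsed_url.path.split('.git')[0])[1:]


print(extract_repo_path("https://github.com/qodo-ai/pr-agent.git"))  # -> qodo-ai/pr-agent
print(extract_repo_path("https://github.com/qodo-ai/pr-agent"))      # -> qodo-ai/pr-agent
```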
@@ -13,5 +13,12 @@ def get_secret_provider():
             return GoogleCloudStorageSecretProvider()
         except Exception as e:
             raise ValueError(f"Failed to initialize google_cloud_storage secret provider {provider_id}") from e
+    elif provider_id == 'aws_secrets_manager':
+        try:
+            from pr_agent.secret_providers.aws_secrets_manager_provider import \
+                AWSSecretsManagerProvider
+            return AWSSecretsManagerProvider()
+        except Exception as e:
+            raise ValueError(f"Failed to initialize aws_secrets_manager secret provider {provider_id}") from e
     else:
         raise ValueError("Unknown SECRET_PROVIDER")
pr_agent/secret_providers/aws_secrets_manager_provider.py (new file, 57 lines)
@@ -0,0 +1,57 @@
import json

import boto3
from botocore.exceptions import ClientError

from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger
from pr_agent.secret_providers.secret_provider import SecretProvider


class AWSSecretsManagerProvider(SecretProvider):
    def __init__(self):
        try:
            region_name = get_settings().get("aws_secrets_manager.region_name") or \
                          get_settings().get("aws.AWS_REGION_NAME")
            if region_name:
                self.client = boto3.client('secretsmanager', region_name=region_name)
            else:
                self.client = boto3.client('secretsmanager')

            self.secret_arn = get_settings().get("aws_secrets_manager.secret_arn")
            if not self.secret_arn:
                raise ValueError("AWS Secrets Manager ARN is not configured")
        except Exception as e:
            get_logger().error(f"Failed to initialize AWS Secrets Manager Provider: {e}")
            raise e

    def get_secret(self, secret_name: str) -> str:
        """
        Retrieve individual secret by name (for webhook tokens)
        """
        try:
            response = self.client.get_secret_value(SecretId=secret_name)
            return response['SecretString']
        except Exception as e:
            get_logger().warning(f"Failed to get secret {secret_name} from AWS Secrets Manager: {e}")
            return ""

    def get_all_secrets(self) -> dict:
        """
        Retrieve all secrets for configuration override
        """
        try:
            response = self.client.get_secret_value(SecretId=self.secret_arn)
            return json.loads(response['SecretString'])
        except Exception as e:
            get_logger().error(f"Failed to get secrets from AWS Secrets Manager {self.secret_arn}: {e}")
            return {}

    def store_secret(self, secret_name: str, secret_value: str):
        try:
            self.client.put_secret_value(
                SecretId=secret_name,
                SecretString=secret_value
            )
        except Exception as e:
            get_logger().error(f"Failed to store secret {secret_name} in AWS Secrets Manager: {e}")
            raise e
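get_all_secrets() above expects the secret's SecretString to be a single JSON object. A sketch of that round trip with an assumed key layout (the real key names depend on which providers and models a PR-Agent deployment configures):

```python
import json

# Hypothetical SecretString payload as stored in AWS Secrets Manager;
# the key names here are examples only.
secret_string = json.dumps({
    "openai.key": "sk-example",
    "gitea.webhook_secret": "whsec-example",
})

# get_all_secrets() effectively boils down to get_secret_value()
# followed by json.loads() on the SecretString:
config_override = json.loads(secret_string)
print(config_override["gitea.webhook_secret"])  # -> whsec-example
```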
@@ -30,5 +30,9 @@
         "url": "/webhook"
       }
     ]
+  },
+  "links": {
+    "privacy": "https://qodo.ai/privacy-policy",
+    "terms": "https://qodo.ai/terms"
   }
 }
pr_agent/servers/gitea_app.py (new file, 128 lines)
@@ -0,0 +1,128 @@
import asyncio
import copy
import os
from typing import Any, Dict

from fastapi import APIRouter, FastAPI, HTTPException, Request, Response
from starlette.background import BackgroundTasks
from starlette.middleware import Middleware
from starlette_context import context
from starlette_context.middleware import RawContextMiddleware

from pr_agent.agent.pr_agent import PRAgent
from pr_agent.config_loader import get_settings, global_settings
from pr_agent.log import LoggingFormat, get_logger, setup_logger
from pr_agent.servers.utils import verify_signature

# Setup logging and router
setup_logger(fmt=LoggingFormat.JSON, level=get_settings().get("CONFIG.LOG_LEVEL", "DEBUG"))
router = APIRouter()


@router.post("/api/v1/gitea_webhooks")
async def handle_gitea_webhooks(background_tasks: BackgroundTasks, request: Request, response: Response):
    """Handle incoming Gitea webhook requests"""
    get_logger().debug("Received a Gitea webhook")

    body = await get_body(request)

    # Set context for the request
    context["settings"] = copy.deepcopy(global_settings)
    context["git_provider"] = {}

    # Handle the webhook in background
    background_tasks.add_task(handle_request, body, event=request.headers.get("X-Gitea-Event", None))
    return {}


async def get_body(request: Request):
    """Parse and verify webhook request body"""
    try:
        body = await request.json()
    except Exception as e:
        get_logger().error("Error parsing request body", artifact={'error': e})
        raise HTTPException(status_code=400, detail="Error parsing request body") from e

    # Verify webhook signature
    webhook_secret = getattr(get_settings().gitea, 'webhook_secret', None)
    if webhook_secret:
        body_bytes = await request.body()
        signature_header = request.headers.get('x-gitea-signature', None)
        if not signature_header:
            get_logger().error("Missing signature header")
            raise HTTPException(status_code=400, detail="Missing signature header")

        try:
            verify_signature(body_bytes, webhook_secret, f"sha256={signature_header}")
        except Exception as ex:
            get_logger().error(f"Invalid signature: {ex}")
            raise HTTPException(status_code=401, detail="Invalid signature")

    return body


async def handle_request(body: Dict[str, Any], event: str):
    """Process Gitea webhook events"""
    action = body.get("action")
    if not action:
        get_logger().debug("No action found in request body")
        return {}

    agent = PRAgent()

    # Handle different event types
    if event == "pull_request":
        if action in ["opened", "reopened", "synchronized"]:
            await handle_pr_event(body, event, action, agent)
    elif event == "issue_comment":
        if action == "created":
            await handle_comment_event(body, event, action, agent)

    return {}


async def handle_pr_event(body: Dict[str, Any], event: str, action: str, agent: PRAgent):
    """Handle pull request events"""
    pr = body.get("pull_request", {})
    if not pr:
        return

    api_url = pr.get("url")
    if not api_url:
        return

    # Handle PR based on action
    if action in ["opened", "reopened"]:
        commands = get_settings().get("gitea.pr_commands", [])
        for command in commands:
            await agent.handle_request(api_url, command)
    elif action == "synchronized":
        # Handle push to PR
        await agent.handle_request(api_url, "/review --incremental")


async def handle_comment_event(body: Dict[str, Any], event: str, action: str, agent: PRAgent):
    """Handle comment events"""
    comment = body.get("comment", {})
    if not comment:
        return

    comment_body = comment.get("body", "")
    if not comment_body or not comment_body.startswith("/"):
        return

    pr_url = body.get("pull_request", {}).get("url")
    if not pr_url:
        return

    await agent.handle_request(pr_url, comment_body)


# FastAPI app setup
middleware = [Middleware(RawContextMiddleware)]
app = FastAPI(middleware=middleware)
app.include_router(router)


def start():
    """Start the Gitea webhook server"""
    port = int(os.environ.get("PORT", "3000"))
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=port)


if __name__ == "__main__":
    start()
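The webhook handler above delegates to verify_signature from pr_agent.servers.utils. A self-contained sketch of the assumed scheme (GitHub/Gitea-style HMAC-SHA256 over the raw body, compared in constant time against a "sha256=<hexdigest>" header); the function name here is illustrative, not the actual utility:

```python
import hashlib
import hmac


def verify_signature_sketch(payload_body: bytes, secret_token: str, signature_header: str) -> None:
    # Assumed behavior of pr_agent.servers.utils.verify_signature:
    # HMAC-SHA256 over the raw request body, keyed by the webhook
    # secret, compared against the header in constant time.
    mac = hmac.new(secret_token.encode('utf-8'), msg=payload_body, digestmod=hashlib.sha256)
    expected = "sha256=" + mac.hexdigest()
    if not hmac.compare_digest(expected, signature_header):
        raise ValueError("Request signatures didn't match!")


body = b'{"action": "opened"}'
secret = "s3cr3t"
good_sig = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
verify_signature_sketch(body, secret, good_sig)  # no exception: signature accepted
```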
@@ -5,6 +5,17 @@ from starlette_context.middleware import RawContextMiddleware

 from pr_agent.servers.github_app import router

+try:
+    from pr_agent.config_loader import apply_secrets_manager_config
+    apply_secrets_manager_config()
+except Exception as e:
+    try:
+        from pr_agent.log import get_logger
+        get_logger().debug(f"AWS Secrets Manager initialization failed, falling back to environment variables: {e}")
+    except:
+        # Fail completely silently if log module is not available
+        pass
+
 middleware = [Middleware(RawContextMiddleware)]
 app = FastAPI(middleware=middleware)
 app.include_router(router)
@@ -68,6 +68,11 @@ webhook_secret = "<WEBHOOK SECRET>" # Optional, may be commented out.
 personal_access_token = ""
 shared_secret = "" # webhook secret

+[gitea]
+# Gitea personal access token
+personal_access_token=""
+webhook_secret="" # webhook secret
+
 [bitbucket]
 # For Bitbucket authentication
 auth_type = "bearer" # "bearer" or "basic"
@@ -111,4 +116,13 @@ api_base = "" # Your Azure OpenAI service base URL (e.g., https://openai.xyz.co

 [openrouter]
 key = ""
 api_base = ""
+
+[aws]
+AWS_ACCESS_KEY_ID = ""
+AWS_SECRET_ACCESS_KEY = ""
+AWS_REGION_NAME = ""
+
+[aws_secrets_manager]
+secret_arn = "" # The ARN of the AWS Secrets Manager secret containing PR-Agent configuration
+region_name = "" # Optional: specific AWS region (defaults to AWS_REGION_NAME or Lambda region)
@@ -39,7 +39,7 @@ allow_dynamic_context=true
 max_extra_lines_before_dynamic_context = 10 # will try to include up to 10 extra lines before the hunk in the patch, until we reach an enclosing function or class
 patch_extra_lines_before = 5 # Number of extra lines (+3 default ones) to include before each hunk in the patch
 patch_extra_lines_after = 1 # Number of extra lines (+3 default ones) to include after each hunk in the patch
-secret_provider=""
+secret_provider="" # "" (disabled), "google_cloud_storage", or "aws_secrets_manager" for secure secret management
 cli_mode=false
 ai_disclaimer_title="" # Pro feature, title for a collapsible disclaimer to AI outputs
 ai_disclaimer="" # Pro feature, full text for the AI disclaimer
@@ -64,6 +64,7 @@ reasoning_effort = "medium" # "low", "medium", "high"
 enable_auto_approval=false # Set to true to enable auto-approval of PRs under certain conditions
 auto_approve_for_low_review_effort=-1 # -1 to disable, [1-5] to set the threshold for auto-approval
 auto_approve_for_no_suggestions=false # If true, the PR will be auto-approved if there are no suggestions
+ensure_ticket_compliance=false # Set to true to disable auto-approval of PRs if the ticket is not compliant
 # extended thinking for Claude reasoning models
 enable_claude_extended_thinking = false # Set to true to enable extended thinking feature
 extended_thinking_budget_tokens = 2048
@@ -81,6 +82,7 @@ require_ticket_analysis_review=true
 # general options
 persistent_comment=true
 extra_instructions = ""
+num_max_findings = 3
 final_update_message = true
 # review labels
 enable_review_labels_security=true
@@ -102,6 +104,7 @@ enable_pr_type=true
 final_update_message = true
 enable_help_text=false
 enable_help_comment=true
+enable_pr_diagram=false # adds a section with a diagram of the PR changes
 # describe as comment
 publish_description_as_comment=false
 publish_description_as_comment_persistent=true
@@ -278,6 +281,15 @@ push_commands = [
     "/review",
 ]

+[gitea_app]
+url = "https://gitea.com"
+handle_push_trigger = false
+pr_commands = [
+    "/describe",
+    "/review",
+    "/improve",
+]
+
 [bitbucket_app]
 pr_commands = [
     "/describe --pr_description.final_update_message=false",
@@ -46,6 +46,9 @@ class PRDescription(BaseModel):
     type: List[PRType] = Field(description="one or more types that describe the PR content. Return the label member value (e.g. 'Bug fix', not 'bug_fix')")
     description: str = Field(description="summarize the PR changes in up to four bullet points, each up to 8 words. For large PRs, add sub-bullets if needed. Order bullets by importance, with each bullet highlighting a key change group.")
     title: str = Field(description="a concise and descriptive title that captures the PR's main theme")
+{%- if enable_pr_diagram %}
+    changes_diagram: str = Field(description="a horizontal diagram that represents the main PR changes, in the format of a valid mermaid LR flowchart. The diagram should be concise and easy to read. Leave empty if no diagram is relevant. To create robust Mermaid diagrams, follow this two-step process: (1) Declare the nodes: nodeID["node description"]. (2) Then define the links: nodeID1 -- "link text" --> nodeID2 ")
+{%- endif %}
 {%- if enable_semantic_files_types %}
     pr_files: List[FileDescription] = Field(max_items=20, description="a list of all the files that were changed in the PR, and summary of their changes. Each file must be analyzed regardless of change size.")
 {%- endif %}
@@ -62,6 +65,13 @@ description: |
   ...
 title: |
   ...
+{%- if enable_pr_diagram %}
+changes_diagram: |
+  ```mermaid
+  flowchart LR
+    ...
+  ```
+{%- endif %}
 {%- if enable_semantic_files_types %}
 pr_files:
 - filename: |
@@ -143,6 +153,13 @@ description: |
   ...
 title: |
   ...
+{%- if enable_pr_diagram %}
+changes_diagram: |
+  ```mermaid
+  flowchart LR
+    ...
+  ```
+{%- endif %}
 {%- if enable_semantic_files_types %}
 pr_files:
 - filename: |
@@ -164,4 +181,4 @@ pr_files:

 Response (should be a valid YAML, and nothing else):
 ```yaml
 """
@@ -98,7 +98,7 @@ class Review(BaseModel):
 {%- if question_str %}
     insights_from_user_answers: str = Field(description="shortly summarize the insights you gained from the user's answers to the questions")
 {%- endif %}
-    key_issues_to_review: List[KeyIssuesComponentLink] = Field("A short and diverse list (0-3 issues) of high-priority bugs, problems or performance concerns introduced in the PR code, which the PR reviewer should further focus on and validate during the review process.")
+    key_issues_to_review: List[KeyIssuesComponentLink] = Field("A short and diverse list (0-{{ num_max_findings }} issues) of high-priority bugs, problems or performance concerns introduced in the PR code, which the PR reviewer should further focus on and validate during the review process.")
 {%- if require_security_review %}
     security_concerns: str = Field(description="Does this PR code introduce possible vulnerabilities such as exposure of sensitive information (e.g., API keys, secrets, passwords), or security concerns like SQL injection, XSS, CSRF, and others ? Answer 'No' (without explaining why) if there are no possible issues. If there are security concerns or issues, start your answer with a short header, such as: 'Sensitive information exposure: ...', 'SQL injection: ...' etc. Explain your answer. Be specific and give examples if possible")
 {%- endif %}
@@ -72,7 +72,8 @@ class PRDescription:
             "enable_semantic_files_types": get_settings().pr_description.enable_semantic_files_types,
             "related_tickets": "",
             "include_file_summary_changes": len(self.git_provider.get_diff_files()) <= self.COLLAPSIBLE_FILE_LIST_THRESHOLD,
-            'duplicate_prompt_examples': get_settings().config.get('duplicate_prompt_examples', False),
+            "duplicate_prompt_examples": get_settings().config.get("duplicate_prompt_examples", False),
+            "enable_pr_diagram": get_settings().pr_description.get("enable_pr_diagram", False),
         }

         self.user_description = self.git_provider.get_user_description()
@@ -199,7 +200,7 @@ class PRDescription:

     async def _prepare_prediction(self, model: str) -> None:
         if get_settings().pr_description.use_description_markers and 'pr_agent:' not in self.user_description:
-            get_logger().info("Markers were enabled, but user description does not contain markers. skipping AI prediction")
+            get_logger().info("Markers were enabled, but user description does not contain markers. Skipping AI prediction")
             return None

         large_pr_handling = get_settings().pr_description.enable_large_pr_handling and "pr_description_only_files_prompts" in get_settings()
@@ -456,6 +457,12 @@ class PRDescription:
             self.data['labels'] = self.data.pop('labels')
         if 'description' in self.data:
             self.data['description'] = self.data.pop('description')
+        if 'changes_diagram' in self.data:
+            changes_diagram = self.data.pop('changes_diagram').strip()
+            if changes_diagram.startswith('```'):
+                if not changes_diagram.endswith('```'):  # fallback for missing closing
+                    changes_diagram += '\n```'
+                self.data['changes_diagram'] = '\n' + changes_diagram
         if 'pr_files' in self.data:
             self.data['pr_files'] = self.data.pop('pr_files')

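The diagram handling added in this hunk amounts to a small normalization step: strip whitespace, keep only fenced diagrams, and append a closing fence when the model forgot it. A hypothetical standalone version for illustration (`normalize_changes_diagram` is an illustrative name, not the repo's API):

```python
def normalize_changes_diagram(changes_diagram: str) -> str:
    # Mirrors the fallback in the hunk above: only accept fenced blocks,
    # and close an unterminated fence before storing the diagram.
    changes_diagram = changes_diagram.strip()
    if changes_diagram.startswith('```'):
        if not changes_diagram.endswith('```'):  # fallback for missing closing fence
            changes_diagram += '\n```'
        return '\n' + changes_diagram
    return ''

print(normalize_changes_diagram("```mermaid\nflowchart LR\n  A --> B"))
```

An input without a fence yields an empty string, so a malformed model answer simply drops the diagram instead of corrupting the PR body.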
@@ -707,7 +714,7 @@ class PRDescription:
             pr_body += """</tr></tbody></table>"""

         except Exception as e:
-            get_logger().error(f"Error processing pr files to markdown {self.pr_id}: {str(e)}")
+            get_logger().error(f"Error processing PR files to markdown {self.pr_id}: {str(e)}")
             pass
         return pr_body, pr_comments

@@ -820,4 +827,4 @@ def replace_code_tags(text):
     parts = text.split('`')
     for i in range(1, len(parts), 2):
         parts[i] = '<code>' + parts[i] + '</code>'
     return ''.join(parts)
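The `replace_code_tags` helper shown in this hunk is self-contained; a quick standalone check of its backtick-to-`<code>` conversion (body copied verbatim from the hunk):

```python
def replace_code_tags(text):
    # After splitting on backticks, the odd-indexed parts are inline code spans
    parts = text.split('`')
    for i in range(1, len(parts), 2):
        parts[i] = '<code>' + parts[i] + '</code>'
    return ''.join(parts)

print(replace_code_tags('wrap `foo` and `bar` in code tags'))
# → wrap <code>foo</code> and <code>bar</code> in code tags
```

Text without backticks passes through unchanged, since `split` then yields a single even-indexed part.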
@@ -81,6 +81,7 @@ class PRReviewer:
             "language": self.main_language,
             "diff": "",  # empty diff for initial calculation
             "num_pr_files": self.git_provider.get_num_of_files(),
+            "num_max_findings": get_settings().pr_reviewer.num_max_findings,
             "require_score": get_settings().pr_reviewer.require_score_review,
             "require_tests": get_settings().pr_reviewer.require_tests_review,
             "require_estimate_effort_to_review": get_settings().pr_reviewer.require_estimate_effort_to_review,
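The new `num_max_findings` variable feeds the review prompt changed earlier in this compare, where the hard-coded `0-3` became `0-{{ num_max_findings }}`. A minimal stand-in for that substitution (the real prompts are rendered with Jinja2; plain `str.replace` is used here only to illustrate the wiring):

```python
# Hypothetical one-line echo of the prompt snippet that consumes the variable
prompt_snippet = "A short and diverse list (0-{{ num_max_findings }} issues)"
rendered = prompt_snippet.replace("{{ num_max_findings }}", str(3))
print(rendered)  # → A short and diverse list (0-3 issues)
```

With the setting exposed as `pr_reviewer.num_max_findings`, users can raise or lower the cap without editing the prompt text.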
@@ -316,7 +317,9 @@ class PRReviewer:
             get_logger().exception(f"Failed to remove previous review comment, error: {e}")

     def _can_run_incremental_review(self) -> bool:
-        """Checks if we can run incremental review according the various configurations and previous review"""
+        """
+        Checks if we can run incremental review according the various configurations and previous review.
+        """
         # checking if running is auto mode but there are no new commits
         if self.is_auto and not self.incremental.first_new_commit_sha:
             get_logger().info(f"Incremental review is enabled for {self.pr_url} but there are no new commits")
@@ -1,5 +1,5 @@
 aiohttp==3.9.5
-anthropic>=0.48
+anthropic>=0.52.0
 #anthropic[vertex]==0.47.1
 atlassian-python-api==3.41.4
 azure-devops==7.1.0b3
@@ -13,7 +13,7 @@ google-cloud-aiplatform==1.38.0
 google-generativeai==0.8.3
 google-cloud-storage==2.10.0
 Jinja2==3.1.2
-litellm==1.69.3
+litellm==1.70.4
 loguru==0.7.2
 msrest==0.7.1
 openai>=1.55.3
@@ -31,6 +31,7 @@ gunicorn==22.0.0
 pytest-cov==5.0.0
 pydantic==2.8.2
 html2text==2024.2.26
+giteapy==1.0.8
 # Uncomment the following lines to enable the 'similar issue' tool
 # pinecone-client
 # pinecone-datasets @ git+https://github.com/mrT23/pinecone-datasets.git@main
90  tests/e2e_tests/langchain_ai_handler.py  Normal file
@@ -0,0 +1,90 @@
+import asyncio
+import os
+import time
+
+from pr_agent.algo.ai_handlers.langchain_ai_handler import LangChainOpenAIHandler
+from pr_agent.config_loader import get_settings
+
+
+def check_settings():
+    print('Checking settings...')
+    settings = get_settings()
+
+    # Check OpenAI settings
+    if not hasattr(settings, 'openai'):
+        print('OpenAI settings not found')
+        return False
+
+    if not hasattr(settings.openai, 'key'):
+        print('OpenAI API key not found')
+        return False
+
+    print('OpenAI API key found')
+    return True
+
+
+async def measure_performance(handler, num_requests=3):
+    print(f'\nRunning performance test with {num_requests} requests...')
+    start_time = time.time()
+
+    # Create multiple requests
+    tasks = [
+        handler.chat_completion(
+            model='gpt-3.5-turbo',
+            system='You are a helpful assistant',
+            user=f'Test message {i}',
+            temperature=0.2
+        ) for i in range(num_requests)
+    ]
+
+    # Execute requests concurrently
+    responses = await asyncio.gather(*tasks)
+
+    end_time = time.time()
+    total_time = end_time - start_time
+    avg_time = total_time / num_requests
+
+    print(f'Performance results:')
+    print(f'Total time: {total_time:.2f} seconds')
+    print(f'Average time per request: {avg_time:.2f} seconds')
+    print(f'Requests per second: {num_requests/total_time:.2f}')
+
+    return responses
+
+
+async def test():
+    print('Starting test...')
+
+    # Check settings first
+    if not check_settings():
+        print('Please set up your environment variables or configuration file')
+        print('Required: OPENAI_API_KEY')
+        return
+
+    try:
+        handler = LangChainOpenAIHandler()
+        print('Handler created')
+
+        # Basic functionality test
+        response = await handler.chat_completion(
+            model='gpt-3.5-turbo',
+            system='You are a helpful assistant',
+            user='Hello',
+            temperature=0.2,
+            img_path='test.jpg'
+        )
+        print('Response:', response)
+
+        # Performance test
+        await measure_performance(handler)
+
+    except Exception as e:
+        print('Error:', str(e))
+        print('Error type:', type(e))
+        print('Error details:', e.__dict__ if hasattr(e, '__dict__') else 'No additional details')
+
+
+if __name__ == '__main__':
+    print('Environment variables:')
+    print('OPENAI_API_KEY:', 'Set' if os.getenv('OPENAI_API_KEY') else 'Not set')
+    print('OPENAI_API_TYPE:', os.getenv('OPENAI_API_TYPE', 'Not set'))
+    print('OPENAI_API_BASE:', os.getenv('OPENAI_API_BASE', 'Not set'))
+
+    asyncio.run(test())
185  tests/e2e_tests/test_gitea_app.py  Normal file
@@ -0,0 +1,185 @@
+import os
+import time
+import requests
+from datetime import datetime
+
+from pr_agent.config_loader import get_settings
+from pr_agent.log import get_logger, setup_logger
+from tests.e2e_tests.e2e_utils import (FILE_PATH,
+                                       IMPROVE_START_WITH_REGEX_PATTERN,
+                                       NEW_FILE_CONTENT, NUM_MINUTES,
+                                       PR_HEADER_START_WITH, REVIEW_START_WITH)
+
+log_level = os.environ.get("LOG_LEVEL", "INFO")
+setup_logger(log_level)
+logger = get_logger()
+
+
+def test_e2e_run_gitea_app():
+    repo_name = 'pr-agent-tests'
+    owner = 'codiumai'
+    base_branch = "main"
+    new_branch = f"gitea_app_e2e_test-{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}"
+    get_settings().config.git_provider = "gitea"
+
+    headers = None
+    pr_number = None
+
+    try:
+        gitea_url = get_settings().get("GITEA.URL", None)
+        gitea_token = get_settings().get("GITEA.TOKEN", None)
+
+        if not gitea_url:
+            logger.error("GITEA.URL is not set in the configuration")
+            logger.info("Please set GITEA.URL in .env file or environment variables")
+            assert False, "GITEA.URL is not set in the configuration"
+
+        if not gitea_token:
+            logger.error("GITEA.TOKEN is not set in the configuration")
+            logger.info("Please set GITEA.TOKEN in .env file or environment variables")
+            assert False, "GITEA.TOKEN is not set in the configuration"
+
+        headers = {
+            'Authorization': f'token {gitea_token}',
+            'Content-Type': 'application/json',
+            'Accept': 'application/json'
+        }
+
+        logger.info(f"Creating a new branch {new_branch} from {base_branch}")
+
+        response = requests.get(
+            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/branches/{base_branch}",
+            headers=headers
+        )
+        response.raise_for_status()
+        base_branch_data = response.json()
+        base_commit_sha = base_branch_data['commit']['id']
+
+        branch_data = {
+            'ref': f"refs/heads/{new_branch}",
+            'sha': base_commit_sha
+        }
+        response = requests.post(
+            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/git/refs",
+            headers=headers,
+            json=branch_data
+        )
+        response.raise_for_status()
+
+        logger.info(f"Updating file {FILE_PATH} in branch {new_branch}")
+
+        import base64
+        file_content_encoded = base64.b64encode(NEW_FILE_CONTENT.encode()).decode()
+
+        try:
+            response = requests.get(
+                f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/contents/{FILE_PATH}?ref={new_branch}",
+                headers=headers
+            )
+            response.raise_for_status()
+            existing_file = response.json()
+            file_sha = existing_file.get('sha')
+
+            file_data = {
+                'message': 'Update cli_pip.py',
+                'content': file_content_encoded,
+                'sha': file_sha,
+                'branch': new_branch
+            }
+        except:
+            file_data = {
+                'message': 'Add cli_pip.py',
+                'content': file_content_encoded,
+                'branch': new_branch
+            }
+
+        response = requests.put(
+            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/contents/{FILE_PATH}",
+            headers=headers,
+            json=file_data
+        )
+        response.raise_for_status()
+
+        logger.info(f"Creating a pull request from {new_branch} to {base_branch}")
+        pr_data = {
+            'title': f'Test PR from {new_branch}',
+            'body': 'update cli_pip.py',
+            'head': new_branch,
+            'base': base_branch
+        }
+        response = requests.post(
+            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/pulls",
+            headers=headers,
+            json=pr_data
+        )
+        response.raise_for_status()
+        pr = response.json()
+        pr_number = pr['number']
+
+        for i in range(NUM_MINUTES):
+            logger.info(f"Waiting for the PR to get all the tool results...")
+            time.sleep(60)
+
+            response = requests.get(
+                f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/issues/{pr_number}/comments",
+                headers=headers
+            )
+            response.raise_for_status()
+            comments = response.json()
+
+            if len(comments) >= 5:
+                valid_review = False
+                for comment in comments:
+                    if comment['body'].startswith('## PR Reviewer Guide 🔍'):
+                        valid_review = True
+                        break
+                if valid_review:
+                    break
+                else:
+                    logger.error("REVIEW feedback is invalid")
+                    raise Exception("REVIEW feedback is invalid")
+            else:
+                logger.info(f"Waiting for the PR to get all the tool results. {i + 1} minute(s) passed")
+        else:
+            assert False, f"After {NUM_MINUTES} minutes, the PR did not get all the tool results"
+
+        logger.info(f"Cleaning up: closing PR and deleting branch {new_branch}")
+
+        close_data = {'state': 'closed'}
+        response = requests.patch(
+            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/pulls/{pr_number}",
+            headers=headers,
+            json=close_data
+        )
+        response.raise_for_status()
+
+        response = requests.delete(
+            f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/git/refs/heads/{new_branch}",
+            headers=headers
+        )
+        response.raise_for_status()
+
+        logger.info(f"Succeeded in running e2e test for Gitea app on the PR")
+    except Exception as e:
+        logger.error(f"Failed to run e2e test for Gitea app: {e}")
+        raise
+    finally:
+        try:
+            if headers is None or gitea_url is None:
+                return
+
+            if pr_number is not None:
+                requests.patch(
+                    f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/pulls/{pr_number}",
+                    headers=headers,
+                    json={'state': 'closed'}
+                )
+
+            requests.delete(
+                f"{gitea_url}/api/v1/repos/{owner}/{repo_name}/git/refs/heads/{new_branch}",
+                headers=headers
+            )
+        except Exception as cleanup_error:
+            logger.error(f"Failed to clean up after test: {cleanup_error}")
+
+
+if __name__ == '__main__':
+    test_e2e_run_gitea_app()
89  tests/unittest/test_aws_secrets_manager_provider.py  Normal file
@@ -0,0 +1,89 @@
+import json
+import pytest
+from unittest.mock import MagicMock, patch
+from botocore.exceptions import ClientError
+
+from pr_agent.secret_providers.aws_secrets_manager_provider import AWSSecretsManagerProvider
+
+
+class TestAWSSecretsManagerProvider:
+
+    def _provider(self):
+        """Create provider following existing pattern"""
+        with patch('pr_agent.secret_providers.aws_secrets_manager_provider.get_settings') as mock_get_settings, \
+             patch('pr_agent.secret_providers.aws_secrets_manager_provider.boto3.client') as mock_boto3_client:
+
+            settings = MagicMock()
+            settings.get.side_effect = lambda k, d=None: {
+                'aws_secrets_manager.secret_arn': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:test-secret',
+                'aws_secrets_manager.region_name': 'us-east-1',
+                'aws.AWS_REGION_NAME': 'us-east-1'
+            }.get(k, d)
+            settings.aws_secrets_manager.secret_arn = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:test-secret'
+            mock_get_settings.return_value = settings
+
+            # Mock boto3 client
+            mock_client = MagicMock()
+            mock_boto3_client.return_value = mock_client
+
+            provider = AWSSecretsManagerProvider()
+            provider.client = mock_client  # Set client directly for testing
+            return provider, mock_client
+
+    # Positive test cases
+    def test_get_secret_success(self):
+        provider, mock_client = self._provider()
+        mock_client.get_secret_value.return_value = {'SecretString': 'test-secret-value'}
+
+        result = provider.get_secret('test-secret-name')
+        assert result == 'test-secret-value'
+        mock_client.get_secret_value.assert_called_once_with(SecretId='test-secret-name')
+
+    def test_get_all_secrets_success(self):
+        provider, mock_client = self._provider()
+        secret_data = {'openai.key': 'sk-test', 'github.webhook_secret': 'webhook-secret'}
+        mock_client.get_secret_value.return_value = {'SecretString': json.dumps(secret_data)}
+
+        result = provider.get_all_secrets()
+        assert result == secret_data
+
+    # Negative test cases (following Google Cloud Storage pattern)
+    def test_get_secret_failure(self):
+        provider, mock_client = self._provider()
+        mock_client.get_secret_value.side_effect = Exception("AWS error")
+
+        result = provider.get_secret('nonexistent-secret')
+        assert result == ""  # Confirm empty string is returned
+
+    def test_get_all_secrets_failure(self):
+        provider, mock_client = self._provider()
+        mock_client.get_secret_value.side_effect = Exception("AWS error")
+
+        result = provider.get_all_secrets()
+        assert result == {}  # Confirm empty dictionary is returned
+
+    def test_store_secret_update_existing(self):
+        provider, mock_client = self._provider()
+        mock_client.update_secret.return_value = {}
+
+        provider.store_secret('test-secret', 'test-value')
+        mock_client.put_secret_value.assert_called_once_with(
+            SecretId='test-secret',
+            SecretString='test-value'
+        )
+
+    def test_init_failure_invalid_config(self):
+        with patch('pr_agent.secret_providers.aws_secrets_manager_provider.get_settings') as mock_get_settings:
+            settings = MagicMock()
+            settings.aws_secrets_manager.secret_arn = None  # Configuration error
+            mock_get_settings.return_value = settings
+
+            with pytest.raises(Exception):
+                AWSSecretsManagerProvider()
+
+    def test_store_secret_failure(self):
+        provider, mock_client = self._provider()
+        mock_client.put_secret_value.side_effect = Exception("AWS error")
+
+        with pytest.raises(Exception):
+            provider.store_secret('test-secret', 'test-value')
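The negative tests above pin down a specific failure contract: `get_secret` falls back to an empty string and `get_all_secrets` to an empty dict when AWS calls fail. A hypothetical minimal provider mirroring just that contract (`SecretsProviderSketch` and `_FakeClient` are illustrative names; the real class wraps a boto3 Secrets Manager client):

```python
import json

class SecretsProviderSketch:
    """Sketch of the behavior the tests assert; any object exposing
    get_secret_value(SecretId=...) -> {'SecretString': ...} works as a client."""

    def __init__(self, client, secret_arn):
        if not secret_arn:  # matches test_init_failure_invalid_config
            raise ValueError("aws_secrets_manager.secret_arn must be configured")
        self.client = client
        self.secret_arn = secret_arn

    def get_secret(self, secret_name):
        try:
            return self.client.get_secret_value(SecretId=secret_name)['SecretString']
        except Exception:
            return ""  # empty string on failure

    def get_all_secrets(self):
        try:
            raw = self.client.get_secret_value(SecretId=self.secret_arn)['SecretString']
            return json.loads(raw)
        except Exception:
            return {}  # empty dict on failure
```

Swallowing errors here is deliberate: a missing secret should degrade to "not configured" rather than crash webhook handling.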
@@ -1,13 +1,302 @@
-# Generated by CodiumAI
-
 import pytest
+from unittest.mock import patch, MagicMock
 from pr_agent.algo.utils import clip_tokens
+from pr_agent.algo.token_handler import TokenEncoder


 class TestClipTokens:
-    def test_clip(self):
+    """Comprehensive test suite for the clip_tokens function."""
+
+    def test_empty_input_text(self):
+        """Test that empty input returns empty string."""
+        assert clip_tokens("", 10) == ""
+        assert clip_tokens(None, 10) is None
+
+    def test_text_under_token_limit(self):
+        """Test that text under the token limit is returned unchanged."""
+        text = "Short text"
+        max_tokens = 100
+        result = clip_tokens(text, max_tokens)
+        assert result == text
+
+    def test_text_exactly_at_token_limit(self):
+        """Test text that is exactly at the token limit."""
+        text = "This is exactly at the limit"
+        # Mock the token encoder to return exact limit
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = [1] * 10  # Exactly 10 tokens
+            mock_encoder.return_value = mock_tokenizer
+
+            result = clip_tokens(text, 10)
+            assert result == text
+
+    def test_text_over_token_limit_with_three_dots(self):
+        """Test text over token limit with three dots addition."""
+        text = "This is a longer text that should be clipped when it exceeds the token limit"
+        max_tokens = 5
+
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = [1] * 20  # 20 tokens
+            mock_encoder.return_value = mock_tokenizer
+
+            result = clip_tokens(text, max_tokens)
+            assert result.endswith("\n...(truncated)")
+            assert len(result) < len(text)
+
+    def test_text_over_token_limit_without_three_dots(self):
+        """Test text over token limit without three dots addition."""
+        text = "This is a longer text that should be clipped"
+        max_tokens = 5
+
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = [1] * 20  # 20 tokens
+            mock_encoder.return_value = mock_tokenizer
+
+            result = clip_tokens(text, max_tokens, add_three_dots=False)
+            assert not result.endswith("\n...(truncated)")
+            assert len(result) < len(text)
+
+    def test_negative_max_tokens(self):
+        """Test that negative max_tokens returns empty string."""
+        text = "Some text"
+        result = clip_tokens(text, -1)
+        assert result == ""
+
+        result = clip_tokens(text, -100)
+        assert result == ""
+
+    def test_zero_max_tokens(self):
+        """Test that zero max_tokens returns empty string."""
+        text = "Some text"
+        result = clip_tokens(text, 0)
+        assert result == ""
+
+    def test_delete_last_line_functionality(self):
+        """Test the delete_last_line parameter functionality."""
+        text = "Line 1\nLine 2\nLine 3\nLine 4"
+        max_tokens = 5
+
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = [1] * 20  # 20 tokens
+            mock_encoder.return_value = mock_tokenizer
+
+            # Without delete_last_line
+            result_normal = clip_tokens(text, max_tokens, delete_last_line=False)
+
+            # With delete_last_line
+            result_deleted = clip_tokens(text, max_tokens, delete_last_line=True)
+
+            # The result with delete_last_line should be shorter or equal
+            assert len(result_deleted) <= len(result_normal)
+
+    def test_pre_computed_num_input_tokens(self):
+        """Test using pre-computed num_input_tokens parameter."""
+        text = "This is a test text"
+        max_tokens = 10
+        num_input_tokens = 15
+
+        # Should not call the encoder when num_input_tokens is provided
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_encoder.return_value = None  # Should not be called
+
+            result = clip_tokens(text, max_tokens, num_input_tokens=num_input_tokens)
+            assert result.endswith("\n...(truncated)")
+            mock_encoder.assert_not_called()
+
+    def test_pre_computed_tokens_under_limit(self):
+        """Test pre-computed tokens under the limit."""
+        text = "Short text"
+        max_tokens = 20
+        num_input_tokens = 5
+
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_encoder.return_value = None  # Should not be called
+
+            result = clip_tokens(text, max_tokens, num_input_tokens=num_input_tokens)
+            assert result == text
+            mock_encoder.assert_not_called()
+
+    def test_special_characters_and_unicode(self):
+        """Test text with special characters and Unicode content."""
+        text = "Special chars: @#$%^&*()_+ áéíóú 中문 🚀 emoji"
+        max_tokens = 5
+
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = [1] * 20  # 20 tokens
+            mock_encoder.return_value = mock_tokenizer
+
+            result = clip_tokens(text, max_tokens)
+            assert isinstance(result, str)
+            assert len(result) < len(text)
+
+    def test_multiline_text_handling(self):
+        """Test handling of multiline text."""
+        text = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5"
+        max_tokens = 5
+
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = [1] * 20  # 20 tokens
+            mock_encoder.return_value = mock_tokenizer
+
+            result = clip_tokens(text, max_tokens)
+            assert isinstance(result, str)
+
+    def test_very_long_text(self):
+        """Test with very long text."""
+        text = "A" * 10000  # Very long text
+        max_tokens = 10
+
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = [1] * 5000  # Many tokens
+            mock_encoder.return_value = mock_tokenizer
+
+            result = clip_tokens(text, max_tokens)
+            assert len(result) < len(text)
+            assert result.endswith("\n...(truncated)")
+
+    def test_encoder_exception_handling(self):
+        """Test handling of encoder exceptions."""
+        text = "Test text"
+        max_tokens = 10
+
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_encoder.side_effect = Exception("Encoder error")
+
+            # Should return original text when encoder fails
+            result = clip_tokens(text, max_tokens)
+            assert result == text
+
+    def test_zero_division_scenario(self):
+        """Test scenario that could lead to division by zero."""
+        text = "Test"
+        max_tokens = 10
+
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = []  # Empty tokens (could cause division by zero)
+            mock_encoder.return_value = mock_tokenizer
+
+            result = clip_tokens(text, max_tokens)
+            # Should handle gracefully and return original text
+            assert result == text
+
+    def test_various_edge_cases(self):
+        """Test various edge cases."""
+        # Single character
+        assert clip_tokens("A", 1000) == "A"
+
+        # Only whitespace
+        text = " \n \t "
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = [1] * 10
+            mock_encoder.return_value = mock_tokenizer
+
+            result = clip_tokens(text, 5)
+            assert isinstance(result, str)
+
+        # Text with only newlines
+        text = "\n\n\n\n"
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = [1] * 10
+            mock_encoder.return_value = mock_tokenizer
+
+            result = clip_tokens(text, 2, delete_last_line=True)
+            assert isinstance(result, str)
+
+    def test_parameter_combinations(self):
+        """Test different parameter combinations."""
+        text = "Multi\nline\ntext\nfor\ntesting"
+        max_tokens = 5
+
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = [1] * 20
+            mock_encoder.return_value = mock_tokenizer
+
+            # Test all combinations
+            combinations = [
+                (True, True),    # add_three_dots=True, delete_last_line=True
+                (True, False),   # add_three_dots=True, delete_last_line=False
+                (False, True),   # add_three_dots=False, delete_last_line=True
+                (False, False),  # add_three_dots=False, delete_last_line=False
+            ]
+
+            for add_dots, delete_line in combinations:
+                result = clip_tokens(text, max_tokens,
+                                     add_three_dots=add_dots,
+                                     delete_last_line=delete_line)
+                assert isinstance(result, str)
+                if add_dots and len(result) > 0:
+                    assert result.endswith("\n...(truncated)") or result == text
+
+    def test_num_output_chars_zero_scenario(self):
+        """Test scenario where num_output_chars becomes zero or negative."""
+        text = "Short"
+        max_tokens = 1
+
+        with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+            mock_tokenizer = MagicMock()
+            mock_tokenizer.encode.return_value = [1] * 1000  # Many tokens for short text
+            mock_encoder.return_value = mock_tokenizer
+
+            result = clip_tokens(text, max_tokens)
+            # When num_output_chars is 0 or negative, should return empty string
+            assert result == ""
+
+    def test_logging_on_exception(self):
+        """Test that exceptions are properly logged."""
+        text = "Test text"
+        max_tokens = 10
+
+        # Patch the logger at the module level where it's imported
+        with patch('pr_agent.algo.utils.get_logger') as mock_logger:
+            mock_log_instance = MagicMock()
+            mock_logger.return_value = mock_log_instance
+
+            with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
+                mock_encoder.side_effect = Exception("Test exception")
+
+                result = clip_tokens(text, max_tokens)
|
||||||
|
|
||||||
|
# Should log the warning
|
||||||
|
mock_log_instance.warning.assert_called_once()
|
||||||
|
# Should return original text
|
||||||
|
assert result == text
|
||||||
|
|
||||||
|
def test_factor_safety_calculation(self):
|
||||||
|
"""Test that the 0.9 factor (10% reduction) works correctly."""
|
||||||
|
text = "Test text that should be reduced by 10 percent for safety"
|
||||||
|
max_tokens = 10
|
||||||
|
|
||||||
|
with patch.object(TokenEncoder, 'get_token_encoder') as mock_encoder:
|
||||||
|
mock_tokenizer = MagicMock()
|
||||||
|
mock_tokenizer.encode.return_value = [1] * 20 # 20 tokens
|
||||||
|
mock_encoder.return_value = mock_tokenizer
|
||||||
|
|
||||||
|
result = clip_tokens(text, max_tokens)
|
||||||
|
|
||||||
|
# The result should be shorter due to the 0.9 factor
|
||||||
|
# Characters per token = len(text) / 20
|
||||||
|
# Expected chars = int(0.9 * (len(text) / 20) * 10)
|
||||||
|
expected_chars = int(0.9 * (len(text) / 20) * 10)
|
||||||
|
|
||||||
|
# Result should be around expected_chars length (plus truncation text)
|
||||||
|
if result.endswith("\n...(truncated)"):
|
||||||
|
actual_content = result[:-len("\n...(truncated)")]
|
||||||
|
assert len(actual_content) <= expected_chars + 5 # Some tolerance
|
||||||
|
|
||||||
|
# Test the original basic functionality to ensure backward compatibility
|
||||||
|
def test_clip_original_functionality(self):
|
||||||
|
"""Test original functionality from the existing test."""
|
||||||
text = "line1\nline2\nline3\nline4\nline5\nline6"
|
text = "line1\nline2\nline3\nline4\nline5\nline6"
|
||||||
max_tokens = 25
|
max_tokens = 25
|
||||||
result = clip_tokens(text, max_tokens)
|
result = clip_tokens(text, max_tokens)
|
||||||
@ -16,4 +305,4 @@ class TestClipTokens:
|
|||||||
max_tokens = 10
|
max_tokens = 10
|
||||||
result = clip_tokens(text, max_tokens)
|
result = clip_tokens(text, max_tokens)
|
||||||
expected_results = 'line1\nline2\nline3\n\n...(truncated)'
|
expected_results = 'line1\nline2\nline3\n\n...(truncated)'
|
||||||
assert result == expected_results
|
assert result == expected_results
|
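Taken together, these tests pin down a character-budget heuristic: estimate characters-per-token from a trial encoding, keep a 10% safety margin (the 0.9 factor), return an empty string when the budget collapses to zero or below, and fall back to the original text when the tokenizer yields nothing. A minimal sketch of that logic, using a made-up whitespace tokenizer in place of the real `TokenEncoder`, so this is an illustration of the behavior the tests assert, not the actual `clip_tokens` implementation:

```python
def clip_tokens_sketch(text: str, max_tokens: int, add_three_dots: bool = True) -> str:
    """Character-budget clipping, mirroring the behavior the tests above assert."""
    tokens = text.split()  # hypothetical tokenizer: one token per whitespace-separated word
    num_input_tokens = len(tokens)
    if not text or num_input_tokens == 0 or num_input_tokens <= max_tokens:
        return text  # nothing to clip; also sidesteps division by zero on empty token lists
    chars_per_token = len(text) / num_input_tokens
    num_output_chars = int(0.9 * chars_per_token * max_tokens)  # 10% safety factor
    if num_output_chars <= 0:
        return ""  # budget collapsed to zero or negative: return empty string
    clipped = text[:num_output_chars]
    if add_three_dots:
        clipped += "\n...(truncated)"
    return clipped
```

The real function additionally supports `delete_last_line` and logs a warning on encoder failure; those branches are omitted here.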
tests/unittest/test_config_loader_secrets.py (new file, 120 lines)
@@ -0,0 +1,120 @@
import pytest
from unittest.mock import MagicMock, patch

from pr_agent.config_loader import apply_secrets_manager_config, apply_secrets_to_config


class TestConfigLoaderSecrets:

    def test_apply_secrets_manager_config_success(self):
        with patch('pr_agent.secret_providers.get_secret_provider') as mock_get_provider, \
             patch('pr_agent.config_loader.apply_secrets_to_config') as mock_apply_secrets, \
             patch('pr_agent.config_loader.get_settings') as mock_get_settings:

            # Mock secret provider
            mock_provider = MagicMock()
            mock_provider.get_all_secrets.return_value = {'openai.key': 'sk-test'}
            mock_get_provider.return_value = mock_provider

            # Mock settings
            settings = MagicMock()
            settings.get.return_value = "aws_secrets_manager"
            mock_get_settings.return_value = settings

            apply_secrets_manager_config()

            mock_apply_secrets.assert_called_once_with({'openai.key': 'sk-test'})

    def test_apply_secrets_manager_config_no_provider(self):
        with patch('pr_agent.secret_providers.get_secret_provider') as mock_get_provider:
            mock_get_provider.return_value = None

            # Confirm no exception is raised
            apply_secrets_manager_config()

    def test_apply_secrets_manager_config_not_aws(self):
        with patch('pr_agent.secret_providers.get_secret_provider') as mock_get_provider, \
             patch('pr_agent.config_loader.get_settings') as mock_get_settings:

            # Mock Google Cloud Storage provider
            mock_provider = MagicMock()
            mock_get_provider.return_value = mock_provider

            # Mock settings (Google Cloud Storage)
            settings = MagicMock()
            settings.get.return_value = "google_cloud_storage"
            mock_get_settings.return_value = settings

            # Confirm execution is skipped for non-AWS Secrets Manager
            apply_secrets_manager_config()

            # Confirm get_all_secrets is not called
            assert not hasattr(mock_provider, 'get_all_secrets') or \
                   not mock_provider.get_all_secrets.called

    def test_apply_secrets_to_config_nested_keys(self):
        with patch('pr_agent.config_loader.get_settings') as mock_get_settings:
            settings = MagicMock()
            settings.get.return_value = None  # No existing value
            settings.set = MagicMock()
            mock_get_settings.return_value = settings

            secrets = {
                'openai.key': 'sk-test',
                'github.webhook_secret': 'webhook-secret'
            }

            apply_secrets_to_config(secrets)

            # Confirm settings are applied correctly
            settings.set.assert_any_call('OPENAI.KEY', 'sk-test')
            settings.set.assert_any_call('GITHUB.WEBHOOK_SECRET', 'webhook-secret')

    def test_apply_secrets_to_config_existing_value_preserved(self):
        with patch('pr_agent.config_loader.get_settings') as mock_get_settings:
            settings = MagicMock()
            settings.get.return_value = "existing-value"  # Existing value present
            settings.set = MagicMock()
            mock_get_settings.return_value = settings

            secrets = {'openai.key': 'sk-test'}

            apply_secrets_to_config(secrets)

            # Confirm settings are not overridden when an existing value is present
            settings.set.assert_not_called()

    def test_apply_secrets_to_config_single_key(self):
        with patch('pr_agent.config_loader.get_settings') as mock_get_settings:
            settings = MagicMock()
            settings.get.return_value = None
            settings.set = MagicMock()
            mock_get_settings.return_value = settings

            secrets = {'simple_key': 'simple_value'}

            apply_secrets_to_config(secrets)

            # Confirm non-dot notation keys are ignored
            settings.set.assert_not_called()

    def test_apply_secrets_to_config_multiple_dots(self):
        with patch('pr_agent.config_loader.get_settings') as mock_get_settings:
            settings = MagicMock()
            settings.get.return_value = None
            settings.set = MagicMock()
            mock_get_settings.return_value = settings

            secrets = {'section.subsection.key': 'value'}

            apply_secrets_to_config(secrets)

            # Confirm keys with multiple dots are ignored
            settings.set.assert_not_called()

    def test_apply_secrets_manager_config_exception_handling(self):
        with patch('pr_agent.secret_providers.get_secret_provider') as mock_get_provider:
            mock_get_provider.side_effect = Exception("Provider error")

            # Confirm processing continues even when an exception occurs
            apply_secrets_manager_config()  # Confirm no exception is raised
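The mapping rules these tests exercise (only `section.key` style names with exactly one dot are applied, keys are uppercased, and explicitly configured values win over secrets) can be sketched as below. `SettingsStub` is a stand-in invented here for illustration; the real code goes through `get_settings()`:

```python
class SettingsStub:
    """Minimal dict-backed stand-in for the settings object (invented for this sketch)."""

    def __init__(self, initial=None):
        self._values = dict(initial or {})

    def get(self, key, default=None):
        return self._values.get(key, default)

    def set(self, key, value):
        self._values[key] = value


def apply_secrets_to_config_sketch(secrets: dict, settings: SettingsStub) -> None:
    for key, value in secrets.items():
        if key.count('.') != 1:
            continue  # ignore keys without exactly one dot ('simple_key', 'a.b.c')
        config_key = key.upper()  # 'openai.key' -> 'OPENAI.KEY'
        if settings.get(config_key) is not None:
            continue  # never override an explicitly configured value
        settings.set(config_key, value)
```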
@@ -1,4 +1,7 @@
# Generated by CodiumAI
import textwrap
from unittest.mock import Mock

from pr_agent.algo.utils import PRReviewHeader, convert_to_markdown_v2
from pr_agent.tools.pr_description import insert_br_after_x_chars
@@ -48,9 +51,174 @@ class TestConvertToMarkdown:
        input_data = {'review': {
            'estimated_effort_to_review_[1-5]': '1, because the changes are minimal and straightforward, focusing on a single functionality addition.\n',
            'relevant_tests': 'No\n', 'possible_issues': 'No\n', 'security_concerns': 'No\n'}}

        expected_output = textwrap.dedent(f"""\
            {PRReviewHeader.REGULAR.value} 🔍

            Here are some key observations to aid the review process:

            <table>
            <tr><td>⏱️ <strong>Estimated effort to review</strong>: 1 🔵⚪⚪⚪⚪</td></tr>
            <tr><td>🧪 <strong>No relevant tests</strong></td></tr>
            <tr><td> <strong>Possible issues</strong>: No
            </td></tr>
            <tr><td>🔒 <strong>No security concerns identified</strong></td></tr>
            </table>
            """)

        assert convert_to_markdown_v2(input_data).strip() == expected_output.strip()

    def test_simple_dictionary_input_without_gfm_supported(self):
        input_data = {'review': {
            'estimated_effort_to_review_[1-5]': '1, because the changes are minimal and straightforward, focusing on a single functionality addition.\n',
            'relevant_tests': 'No\n', 'possible_issues': 'No\n', 'security_concerns': 'No\n'}}

        expected_output = textwrap.dedent("""\
            ## PR Reviewer Guide 🔍

            Here are some key observations to aid the review process:

            ### ⏱️ Estimated effort to review: 1 🔵⚪⚪⚪⚪

            ### 🧪 No relevant tests

            ### Possible issues: No

            ### 🔒 No security concerns identified
            """)

        assert convert_to_markdown_v2(input_data, gfm_supported=False).strip() == expected_output.strip()

    def test_key_issues_to_review(self):
        input_data = {'review': {
            'key_issues_to_review': [
                {
                    'relevant_file': 'src/utils.py',
                    'issue_header': 'Code Smell',
                    'issue_content': 'The function is too long and complex.',
                    'start_line': 30,
                    'end_line': 50,
                }
            ]
        }}
        mock_git_provider = Mock()
        reference_link = 'https://github.com/qodo/pr-agent/pull/1/files#diff-hashvalue-R174'
        mock_git_provider.get_line_link.return_value = reference_link

        expected_output = textwrap.dedent(f"""\
            ## PR Reviewer Guide 🔍

            Here are some key observations to aid the review process:

            <table>
            <tr><td>⚡ <strong>Recommended focus areas for review</strong><br><br>

            <a href='{reference_link}'><strong>Code Smell</strong></a><br>The function is too long and complex.

            </td></tr>
            </table>
            """)

        assert convert_to_markdown_v2(input_data, git_provider=mock_git_provider).strip() == expected_output.strip()
        mock_git_provider.get_line_link.assert_called_with('src/utils.py', 30, 50)

    def test_ticket_compliance(self):
        input_data = {'review': {
            'ticket_compliance_check': [
                {
                    'ticket_url': 'https://example.com/ticket/123',
                    'ticket_requirements': '- Requirement 1\n- Requirement 2\n',
                    'fully_compliant_requirements': '- Requirement 1\n- Requirement 2\n',
                    'not_compliant_requirements': '',
                    'requires_further_human_verification': '',
                }
            ]
        }}

        expected_output = textwrap.dedent("""\
            ## PR Reviewer Guide 🔍

            Here are some key observations to aid the review process:

            <table>
            <tr><td>

            **🎫 Ticket compliance analysis ✅**



            **[123](https://example.com/ticket/123) - Fully compliant**

            Compliant requirements:

            - Requirement 1
            - Requirement 2



            </td></tr>
            </table>
            """)

        assert convert_to_markdown_v2(input_data).strip() == expected_output.strip()

    def test_can_be_split(self):
        input_data = {'review': {
            'can_be_split': [
                {
                    'relevant_files': [
                        'src/file1.py',
                        'src/file2.py'
                    ],
                    'title': 'Refactoring',
                },
                {
                    'relevant_files': [
                        'src/file3.py'
                    ],
                    'title': 'Bug Fix',
                }
            ]
        }}

        expected_output = textwrap.dedent("""\
            ## PR Reviewer Guide 🔍

            Here are some key observations to aid the review process:

            <table>
            <tr><td>🔀 <strong>Multiple PR themes</strong><br><br>

            <details><summary>
            Sub-PR theme: <b>Refactoring</b></summary>

            ___

            Relevant files:

            - src/file1.py
            - src/file2.py
            ___

            </details>

            <details><summary>
            Sub-PR theme: <b>Bug Fix</b></summary>

            ___

            Relevant files:

            - src/file3.py
            ___

            </details>

            </td></tr>
            </table>
            """)

        assert convert_to_markdown_v2(input_data).strip() == expected_output.strip()
tests/unittest/test_fix_json_escape_char.py (new file, 21 lines)
@@ -0,0 +1,21 @@
from pr_agent.algo.utils import fix_json_escape_char


class TestFixJsonEscapeChar:
    def test_valid_json(self):
        """Return unchanged when input JSON is already valid"""
        text = '{"a": 1, "b": "ok"}'
        expected_output = {"a": 1, "b": "ok"}
        assert fix_json_escape_char(text) == expected_output

    def test_single_control_char(self):
        """Remove a single ASCII control-character"""
        text = '{"msg": "hel\x01lo"}'
        expected_output = {"msg": "hel lo"}
        assert fix_json_escape_char(text) == expected_output

    def test_multiple_control_chars(self):
        """Remove multiple control-characters recursively"""
        text = '{"x": "A\x02B\x03C"}'
        expected_output = {"x": "A B C"}
        assert fix_json_escape_char(text) == expected_output
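The expected outputs above (control characters replaced by spaces, applied repeatedly until the document parses) suggest a parse-and-repair loop along these lines. This is a sketch based only on what the tests assert, not the actual `fix_json_escape_char` implementation:

```python
import json
import re

_CONTROL_CHAR = re.compile(r'[\x00-\x1f]')


def fix_json_escape_char_sketch(text: str):
    """Parse JSON; on failure, replace the first control character with a space
    and try again (recursively), until the text parses or nothing is left to fix."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        fixed = _CONTROL_CHAR.sub(' ', text, count=1)  # repair one control char per pass
        if fixed == text:
            raise  # no control character to repair; genuine syntax error
        return fix_json_escape_char_sketch(fixed)
```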
tests/unittest/test_get_max_tokens.py (new file, 67 lines)
@@ -0,0 +1,67 @@
import pytest

from pr_agent.algo.utils import get_max_tokens, MAX_TOKENS
import pr_agent.algo.utils as utils


class TestGetMaxTokens:

    # Test when the model is registered in MAX_TOKENS
    def test_model_max_tokens(self, monkeypatch):
        fake_settings = type('', (), {
            'config': type('', (), {
                'custom_model_max_tokens': 0,
                'max_model_tokens': 0
            })()
        })()

        monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)

        model = "gpt-3.5-turbo"
        expected = MAX_TOKENS[model]

        assert get_max_tokens(model) == expected

    # Test when the model is not registered but a custom model limit exists
    def test_model_has_custom(self, monkeypatch):
        fake_settings = type('', (), {
            'config': type('', (), {
                'custom_model_max_tokens': 5000,
                'max_model_tokens': 0  # no limit
            })()
        })()

        monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)

        model = "custom-model"
        expected = 5000

        assert get_max_tokens(model) == expected

    def test_model_not_max_tokens_and_not_has_custom(self, monkeypatch):
        fake_settings = type('', (), {
            'config': type('', (), {
                'custom_model_max_tokens': 0,
                'max_model_tokens': 0
            })()
        })()

        monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)

        model = "custom-model"

        with pytest.raises(Exception):
            get_max_tokens(model)

    def test_model_max_tokens_with__limit(self, monkeypatch):
        fake_settings = type('', (), {
            'config': type('', (), {
                'custom_model_max_tokens': 0,
                'max_model_tokens': 10000
            })()
        })()

        monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)

        model = "gpt-3.5-turbo"  # this model setting is 160000
        expected = 10000

        assert get_max_tokens(model) == expected
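The four cases above describe a simple resolution order: known models come from the `MAX_TOKENS` table, unknown models fall back to `custom_model_max_tokens` (error if neither applies), and a positive `max_model_tokens` caps the result. A sketch of that order, with the table passed in explicitly and the error type chosen here for illustration:

```python
def resolve_max_tokens(model: str, max_tokens_table: dict,
                       custom_model_max_tokens: int = 0,
                       max_model_tokens: int = 0) -> int:
    """Hypothetical re-statement of the lookup order the tests above exercise."""
    if model in max_tokens_table:
        limit = max_tokens_table[model]  # registered model
    elif custom_model_max_tokens > 0:
        limit = custom_model_max_tokens  # unregistered model with a custom limit
    else:
        raise ValueError(f"Model {model} is not registered and no custom limit is set")
    if max_model_tokens > 0:
        limit = min(limit, max_model_tokens)  # an explicit global cap wins when smaller
    return limit
```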
tests/unittest/test_gitea_provider.py (new file, 126 lines)
@@ -0,0 +1,126 @@
# from unittest.mock import MagicMock, patch
#
# import pytest
#
# from pr_agent.algo.types import EDIT_TYPE
# from pr_agent.git_providers.gitea_provider import GiteaProvider
#
#
# class TestGiteaProvider:
#     """Unit-tests for GiteaProvider following project style (explicit object construction, minimal patching)."""
#
#     def _provider(self):
#         """Create provider instance with patched settings and avoid real HTTP calls."""
#         with patch('pr_agent.git_providers.gitea_provider.get_settings') as mock_get_settings, \
#                 patch('requests.get') as mock_get:
#             settings = MagicMock()
#             settings.get.side_effect = lambda k, d=None: {
#                 'GITEA.URL': 'https://gitea.example.com',
#                 'GITEA.PERSONAL_ACCESS_TOKEN': 'test-token'
#             }.get(k, d)
#             mock_get_settings.return_value = settings
#             # Stub the PR fetch triggered during provider initialization
#             pr_resp = MagicMock()
#             pr_resp.json.return_value = {
#                 'title': 'stub',
#                 'body': 'stub',
#                 'head': {'ref': 'main'},
#                 'user': {'id': 1}
#             }
#             pr_resp.raise_for_status = MagicMock()
#             mock_get.return_value = pr_resp
#             return GiteaProvider('https://gitea.example.com/owner/repo/pulls/123')
#
#     # ---------------- URL parsing ----------------
#     def test_parse_pr_url_valid(self):
#         owner, repo, pr_num = self._provider()._parse_pr_url('https://gitea.example.com/owner/repo/pulls/123')
#         assert (owner, repo, pr_num) == ('owner', 'repo', '123')
#
#     def test_parse_pr_url_invalid(self):
#         with pytest.raises(ValueError):
#             GiteaProvider._parse_pr_url('https://gitea.example.com/owner/repo')
#
#     # ---------------- simple getters ----------------
#     def test_get_files(self):
#         provider = self._provider()
#         mock_resp = MagicMock()
#         mock_resp.json.return_value = [{'filename': 'a.txt'}, {'filename': 'b.txt'}]
#         mock_resp.raise_for_status = MagicMock()
#         with patch('requests.get', return_value=mock_resp) as mock_get:
#             assert provider.get_files() == ['a.txt', 'b.txt']
#             mock_get.assert_called_once()
#
#     def test_get_diff_files(self):
#         provider = self._provider()
#         mock_resp = MagicMock()
#         mock_resp.json.return_value = [
#             {'filename': 'f1', 'previous_filename': 'old_f1', 'status': 'renamed', 'patch': ''},
#             {'filename': 'f2', 'status': 'added', 'patch': ''},
#             {'filename': 'f3', 'status': 'deleted', 'patch': ''},
#             {'filename': 'f4', 'status': 'modified', 'patch': ''}
#         ]
#         mock_resp.raise_for_status = MagicMock()
#         with patch('requests.get', return_value=mock_resp):
#             res = provider.get_diff_files()
#             assert [f.edit_type for f in res] == [EDIT_TYPE.RENAMED, EDIT_TYPE.ADDED, EDIT_TYPE.DELETED,
#                                                  EDIT_TYPE.MODIFIED]
#
#     # ---------------- publishing methods ----------------
#     def test_publish_description(self):
#         provider = self._provider()
#         mock_resp = MagicMock();
#         mock_resp.raise_for_status = MagicMock()
#         with patch('requests.patch', return_value=mock_resp) as mock_patch:
#             provider.publish_description('t', 'b');
#             mock_patch.assert_called_once()
#
#     def test_publish_comment(self):
#         provider = self._provider()
#         mock_resp = MagicMock();
#         mock_resp.raise_for_status = MagicMock()
#         with patch('requests.post', return_value=mock_resp) as mock_post:
#             provider.publish_comment('c');
#             mock_post.assert_called_once()
#
#     def test_publish_inline_comment(self):
#         provider = self._provider()
#         mock_resp = MagicMock();
#         mock_resp.raise_for_status = MagicMock()
#         with patch('requests.post', return_value=mock_resp) as mock_post:
#             provider.publish_inline_comment('body', 'file', '10');
#             mock_post.assert_called_once()
#
#     # ---------------- labels & reactions ----------------
#     def test_get_pr_labels(self):
#         provider = self._provider()
#         mock_resp = MagicMock();
#         mock_resp.raise_for_status = MagicMock();
#         mock_resp.json.return_value = [{'name': 'l1'}]
#         with patch('requests.get', return_value=mock_resp):
#             assert provider.get_pr_labels() == ['l1']
#
#     def test_add_eyes_reaction(self):
#         provider = self._provider()
#         mock_resp = MagicMock();
#         mock_resp.raise_for_status = MagicMock();
#         mock_resp.json.return_value = {'id': 7}
#         with patch('requests.post', return_value=mock_resp):
#             assert provider.add_eyes_reaction(1) == 7
#
#     # ---------------- commit messages & url helpers ----------------
#     def test_get_commit_messages(self):
#         provider = self._provider()
#         mock_resp = MagicMock();
#         mock_resp.raise_for_status = MagicMock()
#         mock_resp.json.return_value = [
#             {'commit': {'message': 'm1'}}, {'commit': {'message': 'm2'}}]
#         with patch('requests.get', return_value=mock_resp):
#             assert provider.get_commit_messages() == ['m1', 'm2']
#
#     def test_git_url_helpers(self):
#         provider = self._provider()
#         issues_url = 'https://gitea.example.com/owner/repo/pulls/3'
#         assert provider.get_git_repo_url(issues_url) == 'https://gitea.example.com/owner/repo.git'
#         prefix, suffix = provider.get_canonical_url_parts('https://gitea.example.com/owner/repo.git', 'dev')
#         assert prefix == 'https://gitea.example.com/owner/repo/src/branch/dev'
#         assert suffix == ''
@@ -79,13 +79,14 @@ class TestSortFilesByMainLanguages:
        files = [
            type('', (object,), {'filename': 'file1.py'})(),
            type('', (object,), {'filename': 'file2.java'})(),
            type('', (object,), {'filename': 'file3.cpp'})(),
            type('', (object,), {'filename': 'file3.test'})()
        ]
        expected_output = [
            {'language': 'Python', 'files': [files[0]]},
            {'language': 'Java', 'files': [files[1]]},
            {'language': 'C++', 'files': [files[2]]},
            {'language': 'Other', 'files': [files[3]]}
        ]
        assert sort_files_by_main_languages(languages, files) == expected_output
tests/unittest/test_secret_provider_factory.py (new file, 69 lines)
@@ -0,0 +1,69 @@
import pytest
from unittest.mock import MagicMock, patch

from pr_agent.secret_providers import get_secret_provider


class TestSecretProviderFactory:

    def test_get_secret_provider_none_when_not_configured(self):
        with patch('pr_agent.secret_providers.get_settings') as mock_get_settings:
            settings = MagicMock()
            settings.get.return_value = None
            mock_get_settings.return_value = settings

            result = get_secret_provider()
            assert result is None

    def test_get_secret_provider_google_cloud_storage(self):
        with patch('pr_agent.secret_providers.get_settings') as mock_get_settings:
            settings = MagicMock()
            settings.get.return_value = "google_cloud_storage"
            settings.config.secret_provider = "google_cloud_storage"
            mock_get_settings.return_value = settings

            with patch('pr_agent.secret_providers.google_cloud_storage_secret_provider.GoogleCloudStorageSecretProvider') as MockProvider:
                mock_instance = MagicMock()
                MockProvider.return_value = mock_instance

                result = get_secret_provider()
                assert result is mock_instance
                MockProvider.assert_called_once()

    def test_get_secret_provider_aws_secrets_manager(self):
        with patch('pr_agent.secret_providers.get_settings') as mock_get_settings:
            settings = MagicMock()
            settings.get.return_value = "aws_secrets_manager"
            settings.config.secret_provider = "aws_secrets_manager"
            mock_get_settings.return_value = settings

            with patch('pr_agent.secret_providers.aws_secrets_manager_provider.AWSSecretsManagerProvider') as MockProvider:
                mock_instance = MagicMock()
                MockProvider.return_value = mock_instance

                result = get_secret_provider()
                assert result is mock_instance
                MockProvider.assert_called_once()

    def test_get_secret_provider_unknown_provider(self):
        with patch('pr_agent.secret_providers.get_settings') as mock_get_settings:
            settings = MagicMock()
            settings.get.return_value = "unknown_provider"
            settings.config.secret_provider = "unknown_provider"
            mock_get_settings.return_value = settings

            with pytest.raises(ValueError, match="Unknown SECRET_PROVIDER"):
                get_secret_provider()

    def test_get_secret_provider_initialization_error(self):
        with patch('pr_agent.secret_providers.get_settings') as mock_get_settings:
            settings = MagicMock()
            settings.get.return_value = "aws_secrets_manager"
            settings.config.secret_provider = "aws_secrets_manager"
            mock_get_settings.return_value = settings

            with patch('pr_agent.secret_providers.aws_secrets_manager_provider.AWSSecretsManagerProvider') as MockProvider:
                MockProvider.side_effect = Exception("Initialization failed")

                with pytest.raises(ValueError, match="Failed to initialize aws_secrets_manager secret provider"):
                    get_secret_provider()
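The factory behavior these tests assert (return `None` when no provider is configured, construct the matching provider otherwise, and raise `ValueError` for both unknown names and constructor failures) can be sketched with a hypothetical name-to-constructor registry; the real factory reads the provider name from settings and imports the provider classes directly:

```python
def get_secret_provider_sketch(provider_name, registry):
    """registry: dict mapping provider names to constructors (an assumption of this sketch)."""
    if not provider_name:
        return None  # no SECRET_PROVIDER configured
    if provider_name not in registry:
        raise ValueError(f"Unknown SECRET_PROVIDER: {provider_name}")
    try:
        return registry[provider_name]()
    except Exception as e:
        raise ValueError(f"Failed to initialize {provider_name} secret provider") from e
```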
@@ -53,12 +53,12 @@ code_suggestions:
 - relevant_file: |
    src/index2.ts
   label: |
-   enhancment
+   enhancement
 ```

 We can further improve the code by using the `const` keyword instead of `var` in the `src/index.ts` file.
 '''
-        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancment'}]}
+        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancement'}]}

         assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='label') == expected_output
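For context, one fallback idea behind a helper like `try_fix_yaml` can be sketched as a snippet extractor: keep only the lines from `first_key` through the payload of the last `last_key` block, dropping surrounding noise such as code fences. The function below is a hypothetical illustration of that idea, not the actual pr-agent code:

```python
def extract_snippet(text, first_key, last_key):
    # Hypothetical sketch: locate the line where the wanted mapping starts,
    # then the last occurrence of last_key, and include that key's indented
    # block-scalar payload; everything outside this window is discarded.
    lines = text.splitlines()
    start = next(i for i, l in enumerate(lines) if l.startswith(f"{first_key}:"))
    last = max(i for i, l in enumerate(lines) if l.lstrip().startswith(f"{last_key}:"))
    end = last + 1
    # Payload lines of a block scalar are blank or more deeply indented.
    while end < len(lines) and (not lines[end].strip() or lines[end].startswith(" ")):
        end += 1
    return "\n".join(lines[start:end])
```

This trims leading prose and trailing markdown fences in one pass, after which a normal YAML parse can be retried on the extracted window.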
@@ -76,10 +76,178 @@ code_suggestions:
 - relevant_file: |
    src/index2.ts
   label: |
-   enhancment
+   enhancement
 ```

 We can further improve the code by using the `const` keyword instead of `var` in the `src/index.ts` file.
 '''
-        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancment'}]}
+        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancement'}]}
         assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='label') == expected_output
    def test_with_brackets_yaml_content(self):
        review_text = '''\
{
code_suggestions:
- relevant_file: |
   src/index.ts
  label: |
   best practice

- relevant_file: |
   src/index2.ts
  label: |
   enhancement
}
'''
        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancement'}]}
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='label') == expected_output
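The bracket case above can be handled by a one-step fixup: drop a stray opening `{` line and closing `}` line that a model sometimes wraps around otherwise-valid YAML. A minimal sketch of that idea (assumed behavior, not the actual pr-agent fix):

```python
def strip_wrapping_braces(text):
    # If the whole document is wrapped in a lone '{' line and a lone '}' line,
    # remove both so the inner YAML can be parsed as-is.
    lines = text.splitlines()
    if lines and lines[0].strip() == "{":
        lines = lines[1:]
    if lines and lines[-1].strip() == "}":
        lines = lines[:-1]
    return "\n".join(lines)
```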
    def test_tab_indent_yaml(self):
        review_text = '''\
code_suggestions:
- relevant_file: |
   src/index.ts
  label: |
\tbest practice

- relevant_file: |
   src/index2.ts
  label: |
   enhancement
'''
        expected_output = {'code_suggestions': [{'relevant_file': 'src/index.ts\n', 'label': 'best practice\n'}, {'relevant_file': 'src/index2.ts\n', 'label': 'enhancement\n'}]}
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='label') == expected_output
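YAML forbids tab characters in indentation, so a simple recovery for the tab case is to expand tabs to spaces before re-parsing. A sketch of that fixup (assumed behavior, not necessarily how pr-agent implements it):

```python
def detab(text, width=2):
    # Replace every tab with a fixed number of spaces so that tab-indented
    # block-scalar payloads become valid YAML indentation.
    return text.replace("\t", " " * width)
```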
    def test_leading_plus_mark_code(self):
        review_text = '''\
code_suggestions:
- relevant_file: |
   src/index.ts
  label: |
   best practice
  existing_code: |
   + var router = createBrowserRouter([
  improved_code: |
   + const router = createBrowserRouter([
'''
        expected_output = {'code_suggestions': [{
            'relevant_file': 'src/index.ts\n',
            'label': 'best practice\n',
            'existing_code': 'var router = createBrowserRouter([\n',
            'improved_code': 'const router = createBrowserRouter([\n'
        }]}
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='improved_code') == expected_output
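Diff-style `+` markers sometimes leak from the prompt context into generated code blocks. The cleanup this test expects can be sketched as stripping a single leading `+` (and the spaces right after it) from each content line while preserving indentation. A hypothetical illustration, not the actual pr-agent code:

```python
def strip_leading_plus(text):
    # For each line, if the first non-space character is '+', drop it and any
    # spaces immediately following it, keeping the original indentation.
    fixed = []
    for line in text.splitlines():
        stripped = line.lstrip()
        if stripped.startswith("+"):
            indent = line[: len(line) - len(stripped)]
            fixed.append(indent + stripped[1:].lstrip())
        else:
            fixed.append(line)
    return "\n".join(fixed)
```

Note that a `+` appearing mid-line (e.g. `a + b`) is untouched, since only a leading marker indicates diff residue.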
    def test_inconsistent_indentation_in_block_scalar_yaml(self):
        """
        This test case represents a situation where the AI outputs the opening '{' with 5 spaces
        (resulting in an inferred indent level of 5), while the closing '}' is output with only 4 spaces.
        This inconsistency makes it impossible for the YAML parser to automatically determine the correct
        indent level, causing a parsing failure.

        The root cause may be the LLM miscounting spaces or misunderstanding the active block scalar context
        while generating YAML output.
        """
        review_text = '''\
code_suggestions:
- relevant_file: |
    tsconfig.json
  existing_code: |
     {
       "key1": "value1",
       "key2": {
         "subkey": "value"
       }
    }
'''
        expected_json = '''\
{
  "key1": "value1",
  "key2": {
    "subkey": "value"
  }
}
'''
        expected_output = {
            'code_suggestions': [{
                'relevant_file': 'tsconfig.json\n',
                'existing_code': expected_json
            }]
        }
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='existing_code') == expected_output
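A recovery idea for inconsistent block-scalar indentation is to dedent the captured payload by the smallest leading indent seen on any non-empty line, so stray over- or under-indentation cancels out while relative nesting is preserved. This is a hypothetical sketch of that idea, not the actual pr-agent fix:

```python
def dedent_block(lines):
    # Compute the minimum indent across non-blank lines, then strip exactly
    # that many leading spaces from every non-blank line.
    indents = [len(l) - len(l.lstrip(" ")) for l in lines if l.strip()]
    cut = min(indents) if indents else 0
    return [l[cut:] if l.strip() else l for l in lines]
```

With the example from the docstring above, the `{` at 5 spaces and `}` at 4 spaces both end up anchored to the 4-space minimum, which is enough for a re-parse.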
    def test_inconsistent_and_insufficient_indentation_in_block_scalar_yaml(self):
        """
        This test case reproduces a YAML parsing failure where the block scalar content
        generated by the AI includes inconsistent and insufficient indentation levels.

        The root cause may be the LLM miscounting spaces or misunderstanding the active block scalar context
        while generating YAML output.
        """
        review_text = '''\
code_suggestions:
- relevant_file: |
    tsconfig.json
  existing_code: |
   {
     "key1": "value1",
     "key2": {
       "subkey": "value"
     }
  }
'''
        expected_json = '''\
{
  "key1": "value1",
  "key2": {
    "subkey": "value"
  }
}
'''
        expected_output = {
            'code_suggestions': [{
                'relevant_file': 'tsconfig.json\n',
                'existing_code': expected_json
            }]
        }
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='existing_code') == expected_output
    def test_wrong_indentation_code_block_scalar(self):
        review_text = '''\
code_suggestions:
- relevant_file: |
   a.c
  existing_code: |
   int sum(int a, int b) {
    return a + b;
   }

   int sub(int a, int b) {
    return a - b;
   }
'''
        expected_code_block = '''\
int sum(int a, int b) {
 return a + b;
}

int sub(int a, int b) {
 return a - b;
}
'''
        expected_output = {
            "code_suggestions": [
                {
                    "relevant_file": "a.c\n",
                    "existing_code": expected_code_block
                }
            ]
        }
        assert try_fix_yaml(review_text, first_key='code_suggestions', last_key='existing_code') == expected_output
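Taken together, the tests above exercise a retry-loop pattern: attempt a parse, and on failure try the input again after each fixup (strip brackets, expand tabs, drop `+` markers, re-indent) until one candidate parses. A hypothetical sketch of that driver, using a caller-supplied `parse` callable rather than a real YAML parser:

```python
def try_fix(text, fixups, parse):
    # Build the candidate list: the raw text first, then one variant per
    # fixup applied to the original text. Return the first parse that
    # succeeds, or None if every candidate fails.
    candidates = [text] + [fix(text) for fix in fixups]
    for candidate in candidates:
        try:
            return parse(candidate)
        except Exception:
            continue
    return None
```

Trying the raw text first keeps the happy path cheap; fixups only run when the model's output is actually malformed.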