10 changes: 0 additions & 10 deletions _typos.toml
@@ -83,16 +83,6 @@ Wether = "Wether"
 accordding = "accordding"
 accoustic = "accoustic"
 accpetance = "accpetance"
-accracy = "accracy"
-acutal = "acutal"
-apporach = "apporach"
-apporaches = "apporaches"
-arguements = "arguements"
-arguemnts = "arguemnts"
-assgin = "assgin"
-assginment = "assginment"
-auxilary = "auxilary"
-avaiable = "avaiable"
 baisc = "baisc"
 basci = "basci"
 beacuse = "beacuse"
2 changes: 1 addition & 1 deletion ci_scripts/check_api_label_cn.py
@@ -77,7 +77,7 @@ def run_cn_api_label_checking(rootdir, files):
     for file in files:
         if should_test(file) and not check_api_label(rootdir, file):
             logger.error(
-                f"The first line in {rootdir}/{file} is not avaiable, please re-check it!"
+                f"The first line in {rootdir}/{file} is not available, please re-check it!"
            )
            sys.exit(1)
    valid_api_labels = find_all_api_labels_in_dir(rootdir)
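For context, a hedged sketch of the check this error message refers to (the real `check_api_label` lives in the same script and is not shown in this diff; the label pattern below is an assumption based on the CN docs' convention of opening each API page with an rst label such as `.. _cn_api_paddle_abs:`):

```python
import re

def check_api_label(rootdir: str, file: str) -> bool:
    # Read only the first line, which is expected to carry the API label.
    with open(f"{rootdir}/{file}", encoding="utf-8") as f:
        first_line = f.readline().strip()
    # Hypothetical pattern: an rst label of the form ".. _cn_api_...:".
    return re.match(r"^\.\. _cn_api_\w+:$", first_line) is not None
```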
2 changes: 1 addition & 1 deletion docs/api/paddle/put_along_axis_cn.rst
@@ -14,7 +14,7 @@ put_along_axis
 - **indices** (Tensor) - Index matrix holding the indices of the 1-D slices taken along the axis; it must have the same number of dimensions as arr. When ``broadcast`` is ``True`` it must be broadcastable to align with arr; otherwise every dimension except ``axis`` must be no larger than the corresponding dimensions of ``arr`` and ``values``. Data types: int32, int64.
 - **values** (float) - Values to insert. When ``broadcast`` is ``True`` their shape must be broadcastable to match the indices matrix; otherwise every dimension must be no smaller than the corresponding dimension of ``indices``. Data types: bfloat16, float16, float32, float64, int32, int64, uint8, int16.
 - **axis** (int) - The dimension along which values are taken. Data type: int.
-- **reduce** (str, optional) - Reduction type; defaults to ``assign``, with ``add``, ``multiple``, ``mean``, ``amin`` and ``amax`` as alternatives. Each reduction applies the inserted value to the input matrix arr differently: ``assgin`` overwrites the input, ``add`` accumulates a sum into it, ``mean`` accumulates a running mean, ``multiple`` accumulates a product, ``amin`` a running minimum, and ``amax`` a running maximum.
+- **reduce** (str, optional) - Reduction type; defaults to ``assign``, with ``add``, ``multiple``, ``mean``, ``amin`` and ``amax`` as alternatives. Each reduction applies the inserted value to the input matrix arr differently: ``assign`` overwrites the input, ``add`` accumulates a sum into it, ``mean`` accumulates a running mean, ``multiple`` accumulates a product, ``amin`` a running minimum, and ``amax`` a running maximum.
 - **include_self** (bool, optional) - Whether the elements of arr take part in the reduction; defaults to ``True``.
 - **broadcast** (bool, optional) - Whether to broadcast the ``index`` matrix; defaults to ``True``.
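To make the ``reduce`` semantics concrete, a minimal runnable sketch (tensor values are illustrative, not taken from this PR):

```python
import paddle

arr = paddle.zeros([2, 3], dtype="float32")
indices = paddle.to_tensor([[0], [1]], dtype="int64")
values = paddle.to_tensor([[10.0], [20.0]])

# "assign" overwrites the targeted entries; "add" then accumulates into them.
assigned = paddle.put_along_axis(
    arr, indices, values, axis=1, reduce="assign", broadcast=False
)
added = paddle.put_along_axis(
    assigned, indices, values, axis=1, reduce="add", broadcast=False
)
print(added.numpy())  # [[20.  0.  0.], [ 0. 40.  0.]]
```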
2 changes: 1 addition & 1 deletion docs/api/paddle/scatter_cn.rst
@@ -59,7 +59,7 @@ PyTorch-compatible scatter function, built on :ref:`cn_api_paddle_put_along_axis`
 - **dim** (int) - The dimension along which to scatter, in the range ``[-input.ndim, input.ndim)``.
 - **index** (Tensor) - Index matrix holding the indices of the 1-D slices taken along the axis; it must have the same number of dimensions as arr. Note that except for the ``dim`` dimension, every dimension of ``index`` must be no larger than those of ``input`` and ``src``, and its values must lie within ``input.shape[dim]``. Data types: int32, int64.
 - **src** (Tensor) - Values to insert. When ``src`` is a tensor, each of its dimensions must be at least as large as the corresponding dimension of ``index``; it is not constrained by the dimensions of ``input``. When it is a scalar it is broadcast to the size of ``index``. Data types: bfloat16, float16, float32, float64, int32, int64, uint8, int16. This parameter has a mutually exclusive alias, ``value``.
-- **reduce** (str, optional) - The scatter reduction mode. Defaults to None, which is equivalent to ``assign``; ``add``, ``multiple``, ``mean``, ``amin`` and ``amax`` are the alternatives. Each reduction applies the inserted value src to the input matrix arr differently: ``assgin`` overwrites the input, ``add`` accumulates a sum into it, ``mean`` accumulates a running mean, ``multiple`` accumulates a product, ``amin`` a running minimum, and ``amax`` a running maximum.
+- **reduce** (str, optional) - The scatter reduction mode. Defaults to None, which is equivalent to ``assign``; ``add``, ``multiple``, ``mean``, ``amin`` and ``amax`` are the alternatives. Each reduction applies the inserted value src to the input matrix arr differently: ``assign`` overwrites the input, ``add`` accumulates a sum into it, ``mean`` accumulates a running mean, ``multiple`` accumulates a product, ``amin`` a running minimum, and ``amax`` a running maximum.
 - **out** (Tensor, optional) - Passes the output in by reference. Note that in dynamic graph mode out may be any Tensor. Defaults to None.

 Returns
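Since this function is built on put_along_axis, one of the reduction modes can be sketched with that underlying primitive (values are illustrative; ``amax`` keeps a running maximum and, with the default ``include_self=True``, the original entries take part in the reduction):

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0]])
index = paddle.to_tensor([[0, 0, 2]], dtype="int64")
src = paddle.to_tensor([[5.0, -1.0, 0.5]])

# Position (0, 0) receives 5.0 and -1.0: max(1.0, 5.0, -1.0) = 5.0.
# Position (0, 2) receives 0.5: max(3.0, 0.5) = 3.0.
out = paddle.put_along_axis(x, index, src, axis=1, reduce="amax", broadcast=False)
print(out.numpy())  # [[5. 2. 3.]]
```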
2 changes: 1 addition & 1 deletion docs/design/memory/memory_optimization.md
@@ -79,7 +79,7 @@ In former control flow graph, the out-edges of node 5 are 5 --> 6 and 5 --> 2, a

 - Uses and Defs

-An assignmemt to a variable or temporary defines that variable. An occurence of a variable on the right-hand side of an assginment(or in other expressions) uses the variable. We can define the *def* of a variable as the set of graph nodes that define it; or the *def* of a graph node as the set of variables that it defines; and the similarly for the *use* of a variable or graph node. In former control flow graph, *def(3)* = {c}, *use(3)* = {b, c}.
+An assignment to a variable or temporary defines that variable. An occurrence of a variable on the right-hand side of an assignment (or in other expressions) uses the variable. We can define the *def* of a variable as the set of graph nodes that define it, or the *def* of a graph node as the set of variables that it defines, and similarly for the *use* of a variable or graph node. In the former control flow graph, *def(3)* = {c}, *use(3)* = {b, c}.

 - Liveness
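A small sketch of this def/use bookkeeping (the statement for node 3 is hypothetical, chosen to be consistent with *def(3)* = {c} and *use(3)* = {b, c}):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    defs: set = field(default_factory=set)  # variables this node defines
    uses: set = field(default_factory=set)  # variables this node reads

# Node 3 could be a statement like "c = b + c": it defines c and uses b, c.
node3 = Node(defs={"c"}, uses={"b", "c"})
print(node3.defs, node3.uses)  # {'c'} {'b', 'c'} (set order may vary)
```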
2 changes: 1 addition & 1 deletion docs/design/phi/design_cn.md
@@ -1219,7 +1219,7 @@ REGISTER_OPERATOR(sign, ops::SignOp, ops::SignOpMaker<float>,
  * The infrt declare like:
  *
  * def PDKEL_Reshape_to_CPU : Pat<
- *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr),  // OpMaker arguements
+ *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr),  // OpMaker arguments
  *     (PDKEL_ReshapeKernelAttr $x, fn($shape_attr)>;  // Kernel arguments
  * def PDKEL_Reshape_to_CPU : Pat<
  *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr),
4 changes: 2 additions & 2 deletions docs/design/phi/design_en.md
@@ -78,7 +78,7 @@ We hope to be able to achieve the same three-layer arguments of Python API -> Op
 - The initial construction of the PHI operator library paid more attention to Kernel "migration". Due to the consideration of time and labor costs, the original OpKernel logic migration is not forced to be upgraded to "combined" writing for the time being, and the same is true for the forward and backward Kernels
 - The "combined Kernel extension development" capability provided by the PHI operator library initially serves the new operators of subsequent increments, and the existing operators still maintain their original coding implementation, reducing the cost of migration
 - The "new hardware expansion capability" provided by the PHI operator library is initially only provided within the scope of the new hardware itself. For example, the XPU has implemented 50 Kernels, and then it can combine new Kernels based on 50 Kernels, but this is only limited to the XPU Within the scope, its implementation is not common with CPU, CUDA, etc.
-- The PHI operator library project focuses on the work of "Kernel functionalization & Op normalization", Kernel is changed to functional format, C++ API and Op naming and arguemnts list are gradually normalized to Python API under the premise of ensuring compatibility as much as possible
+- The PHI operator library project focuses on the work of "Kernel functionalization & Op normalization", Kernel is changed to functional format, C++ API and Op naming and arguments list are gradually normalized to Python API under the premise of ensuring compatibility as much as possible


 ## 2. Design Overview
@@ -1219,7 +1219,7 @@ At present, the `ArgumentMapping` function mapping is designed. In the `phi/ops/
  * The infrt declare like:
  *
  * def PDKEL_Reshape_to_CPU : Pat<
- *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr),  // OpMaker arguements
+ *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr),  // OpMaker arguments
  *     (PDKEL_ReshapeKernelAttr $x, fn($shape_attr)>;  // Kernel arguments
  * def PDKEL_Reshape_to_CPU : Pat<
  *     (PD_ReshapeOp $x, $shape_tensor, $shape_attr),
2 changes: 1 addition & 1 deletion docs/design/quantization/fixed_point_quantization.md
@@ -1,6 +1,6 @@
 Fixed-point quantization uses lower bits, for example, 2-bit, 3-bit or 8-bit fixed point to represent weights and activations, which usually are in singe-precision float-point with 32 bits. The fixed-point representation has advantages in reducing memory bandwidth, lowering power consumption and computational resources as well as the model storage requirements. It is especially important for the inference in embedded-device deployment.

-According to some experiments, the apporach to quantize the model trained in float point directly works effectively on the large models, like the VGG model having many parameters. But the accuracy drops a lot for the small model. In order to improve the tradeoff between accuracy and latency, many quantized training apporaches are proposed.
+According to some experiments, the approach to quantize the model trained in float point directly works effectively on the large models, like the VGG model having many parameters. But the accuracy drops a lot for the small model. In order to improve the tradeoff between accuracy and latency, many quantized training approaches are proposed.

 This document is to design a quantized training framework on Fluid. The first part will introduce how to quantize, The second part will describe the quantized training framework. The last part will illustrate how to calculate the quantization scale.
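As a concrete anchor for "how to quantize", a minimal sketch of symmetric 8-bit quantization with an abs-max scale (illustrative only; the document's own scale-calculation scheme is described in its later sections, not in this hunk):

```python
import numpy as np

def quantize_8bit(x: np.ndarray):
    """Symmetric 8-bit quantization with an abs-max scale."""
    scale = float(np.abs(x).max()) / 127.0  # map [-max, max] onto [-127, 127]
    scale = scale if scale > 0 else 1e-8    # guard all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_8bit(w)
print(np.abs(w - dequantize(q, scale)).max())  # worst-case rounding error
```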
@@ -1,7 +1,7 @@
 # Context APIs

 ## CustomContext
-`CustomContext` is the acutal parameter of the template parameter Context of the custom kernel function. For details, please refer to [custom_context.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/backends/custom/custom_context.h).
+`CustomContext` is the actual parameter of the template parameter Context of the custom kernel function. For details, please refer to [custom_context.h](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/backends/custom/custom_context.h).

 ```c++
 // Constructor
@@ -881,10 +881,10 @@ def evaluate(image, labels, model, acc, tag, reprod_logger):
     model.eval()
     output = model(image)

-    accracy = acc(output, labels, topk=(1, 5))
+    accuracy = acc(output, labels, topk=(1, 5))

-    reprod_logger.add("acc_top1", np.array(accracy[0]))
-    reprod_logger.add("acc_top5", np.array(accracy[1]))
+    reprod_logger.add("acc_top1", np.array(accuracy[0]))
+    reprod_logger.add("acc_top5", np.array(accuracy[1]))

     reprod_logger.save("./result/metric_{}.npy".format(tag))
 ```
@@ -881,10 +881,10 @@ def evaluate(image, labels, model, acc, tag, reprod_logger):
     model.eval()
     output = model(image)

-    accracy = acc(output, labels, topk=(1, 5))
+    accuracy = acc(output, labels, topk=(1, 5))

-    reprod_logger.add("acc_top1", np.array(accracy[0]))
-    reprod_logger.add("acc_top5", np.array(accracy[1]))
+    reprod_logger.add("acc_top1", np.array(accuracy[0]))
+    reprod_logger.add("acc_top5", np.array(accuracy[1]))

     reprod_logger.save("./result/metric_{}.npy".format(tag))
 ```
20 changes: 10 additions & 10 deletions docs/practices/reinforcement_learning/dqn_fruit_merger.ipynb
@@ -490,7 +490,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 9,
+"execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -780,8 +780,8 @@
 "    features = np.zeros((height, width, 2), dtype=np.float32)\n",
 "\n",
 "    # Auxiliary matrices recording type (the fruit type in each cell) and dr (the minimum distance)\n",
-"    auxilary = np.zeros((height, width, 2), dtype=np.float32)\n",
-"    auxilary[:, :, 1] = np.inf\n",
+"    auxiliary = np.zeros((height, width, 2), dtype=np.float32)\n",
+"    auxiliary[:, :, 1] = np.inf\n",
 "\n",
 "    # Update threshold: cells whose distance dr exceeds it are treated as outside the grid\n",
 "    threshold = ((uw**2) + (uh**2)) // 2\n",
@@ -800,21 +800,21 @@
 "            dr = dx * dx + dy * dy - r2\n",
 "\n",
 "            # If dr is below the threshold and below the current minimum dr, update the cell's fruit info\n",
-"            if dr < threshold and dr < auxilary[i, j, 1]:\n",
-"                auxilary[i, j, 0] = f.type\n",
-"                auxilary[i, j, 1] = dr\n",
+"            if dr < threshold and dr < auxiliary[i, j, 1]:\n",
+"                auxiliary[i, j, 0] = f.type\n",
+"                auxiliary[i, j, 1] = dr\n",
 "\n",
 "    # Whether the cell is empty (True or False)\n",
-"    is_empty = auxilary[:, :, 0] == 0\n",
+"    is_empty = auxiliary[:, :, 0] == 0\n",
 "    # Whether the cell holds the same fruit type as the current fruit (True or False)\n",
-"    is_same = auxilary[:, :, 0] == self.current_fruit_type\n",
+"    is_same = auxiliary[:, :, 0] == self.current_fruit_type\n",
 "\n",
 "    # Fruit type in the cell (type_1) vs. the current fruit (type_0)\n",
 "    # If type_1 < type_0, the value is type_1 - type_0\n",
 "    # If type_1 == type_0, the value is 1\n",
 "    # If type_1 > type_0, the value is 0\n",
 "    # If type_1 == 0 (the cell is empty), the value is 0\n",
-"    features[:, :, 0] = auxilary[:, :, 0] - self.current_fruit_type\n",
+"    features[:, :, 0] = auxiliary[:, :, 0] - self.current_fruit_type\n",
 "    features[:, :, 0] = features[:, :, 0].clip(max=0)\n",
 "    features[:, :, 0][is_same] = 1\n",
 "    features[:, :, 0][is_empty] = 0\n",
@@ -824,7 +824,7 @@
 "    # If type_1 == type_0, the value is 1\n",
 "    # If type_1 < type_0, the value is 0\n",
 "    # If type_1 == 0 (the cell is empty), the value is 0\n",
-"    features[:, :, 1] = self.current_fruit_type - auxilary[:, :, 0]\n",
+"    features[:, :, 1] = self.current_fruit_type - auxiliary[:, :, 0]\n",
 "    features[:, :, 1] = features[:, :, 1].clip(max=0)\n",
 "    features[:, :, 1][is_same] = 1\n",
 "    features[:, :, 1][is_empty] = 0\n",
Expand Down