
Commit

EPMRPP-81021 || Remove the Unique Id attribute from docs (#855)
Co-authored-by: Yuliya_Prihodko <[email protected]>
pressayuliya and Yuliya_Prihodko authored Jan 17, 2025
1 parent c888271 commit 65bb7a4
Showing 5 changed files with 4 additions and 44 deletions.
4 changes: 1 addition & 3 deletions docs/analysis/AutoAnalysisOfLaunches.mdx
@@ -54,7 +54,6 @@ The following info is sent:
* Flag: “Analyzed by” (shows whether the test item has been analyzed by a user or by ReportPortal);
* A launch name;
* Launch ID;
- * Unique ID;
* Test case ID;

For better analysis, we merge small logs (consisting of 1-2 log lines and at most 100 words) together. We store this merged log message as a separate document if the test item has no other big logs (consisting of more than 2 log lines or containing a stacktrace). If there are big logs, we store the merged message in a separate "merged_small_logs" field for each of them.
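The merge rule just described can be sketched as follows. This is a rough illustration assuming simple dict-shaped documents; only the `merged_small_logs` field name comes from the text, everything else is assumed:

```python
def is_small(log: str) -> bool:
    # "Small": 1-2 log lines and a word count of at most 100.
    lines = log.strip().splitlines()
    return len(lines) <= 2 and len(log.split()) <= 100


def merge_logs(logs: list[str]) -> dict:
    # Join all small logs into one merged message.
    small = [log for log in logs if is_small(log)]
    big = [log for log in logs if not is_small(log)]
    merged = "\n".join(small)
    if not big:
        # No big logs: the merged message becomes its own document.
        return {"documents": [{"message": merged}]}
    # Otherwise attach the merged message to every big log's document.
    return {"documents": [{"message": b, "merged_small_logs": merged} for b in big]}
```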
@@ -85,7 +84,7 @@ Analysis can be launched automatically (via Project Settings) or manually (via t

Here is a simplified procedure of the Auto-analysis candidates searching via OpenSearch.

- When a "To investigate" test item appears, we search for the most similar test items in the analytical base. We create a query that searches across several fields: message similarity is a compulsory condition, while the other conditions boost better results so they get a higher score (boost conditions include similarity by unique id, launch name, error message, found exceptions, numbers in the logs, etc.).
+ When a "To investigate" test item appears, we search for the most similar test items in the analytical base. We create a query that searches across several fields: message similarity is a compulsory condition, while the other conditions boost better results so they get a higher score (boost conditions include similarity by launch name, error message, found exceptions, numbers in the logs, etc.).
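As a rough illustration, such a query could look like the sketch below. The field names (`message`, `launch_name`, `detected_message`, `found_exceptions`, `numbers`) and boost values are assumptions, not the analyzer's real mapping:

```python
def build_candidate_query(log: dict) -> dict:
    # Message similarity is compulsory ("must"); the "should" clauses
    # only boost the score of better-matching candidates.
    return {
        "query": {
            "bool": {
                "must": [
                    {"more_like_this": {
                        "fields": ["message"],
                        "like": log["message"],
                        "minimum_should_match": "80%",
                    }}
                ],
                "should": [
                    {"term": {"launch_name": {"value": log["launch_name"], "boost": 2.0}}},
                    {"match": {"detected_message": {"query": log["error_message"], "boost": 4.0}}},
                    {"match": {"found_exceptions": {"query": log["exceptions"], "boost": 8.0}}},
                    {"match": {"numbers": {"query": log["numbers"], "boost": 1.0}}},
                ],
            }
        }
    }
```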

Then OpenSearch receives a log message, divides it into terms (words) with a tokenizer, and calculates the importance of each term. For that, OpenSearch computes TF-IDF for each term in the analyzed log. If a term's importance is low, OpenSearch ignores it.
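OpenSearch's real scoring is BM25-based, but plain TF-IDF conveys the idea; a toy version for illustration:

```python
import math
from collections import Counter


def tf_idf(term: str, doc: list[str], corpus: list[list[str]]) -> float:
    # Term frequency: how often the term appears in this document.
    tf = Counter(doc)[term] / len(doc)
    # Inverse document frequency: terms that are rare across the corpus weigh more.
    df = sum(1 for d in corpus if term in d)
    idf = math.log((1 + len(corpus)) / (1 + df)) + 1
    return tf * idf
```

A term that appears in every document (such as "the") ends up with a low weight and would be ignored, while a rare term like an exception name scores high.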

@@ -145,7 +144,6 @@ The ML model is an XGBoost model which features (about 30 features) represent di
* the percent of selected test items with the following defect type
* max/min/mean scores for the following defect type
* cosine similarity between vectors, representing error message/stacktrace/the whole message/urls/paths and other text fields
- * whether it has the same unique id, from the same launch
* the probability for being of a specific defect type given by the Random Forest Classifier trained on Tf-Idf vectors

The model gives a probability for each defect type group, and we choose the defect type group with the highest probability and the probability should be >= 50%.
1 change: 0 additions & 1 deletion docs/analysis/MLSuggestions.md
@@ -41,7 +41,6 @@ The OpenSearch returns to the service Analyzer 10 logs with the highest score fo
* the percent of selected test items with the following defect type
* max/min/mean scores for the following defect type
* cosine similarity between vectors, representing error message/stacktrace/the whole message/urls/paths and other text fields
- * whether it has the same unique id, from the same launch
* the probability for being of a specific defect type given by the Random Forest Classifier trained on Tf-Idf vectors

The model gives a probability for each candidate, and we filter out test items with a probability of 40% or less. We sort the test items by this probability and then deduplicate test items inside this ranked list. If two test items' messages are >= 98% similar, we keep the test item with the higher probability. After deduplication we take a maximum of 5 items with the highest score to show in the ML Suggestions section.
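The filtering, deduplication, and truncation steps above can be sketched like this, using difflib's ratio as a stand-in for the analyzer's own message-similarity measure:

```python
from difflib import SequenceMatcher


def suggest(candidates: list[dict], max_items: int = 5) -> list[dict]:
    # Walk candidates from highest to lowest probability.
    kept: list[dict] = []
    for cand in sorted(candidates, key=lambda c: c["probability"], reverse=True):
        if cand["probability"] <= 0.4:
            continue  # filtered out: probability too low
        if any(SequenceMatcher(None, cand["message"], k["message"]).ratio() >= 0.98
               for k in kept):
            continue  # near-duplicate of a higher-probability item
        kept.append(cand)
    return kept[:max_items]
```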
3 changes: 1 addition & 2 deletions docs/developers-guides/InteractionsBetweenAPIAndAnalyzer.mdx
@@ -58,7 +58,6 @@ IndexTestItem:
|----------------|----------------------------|---------------------------------------|
| testItemId | Id of test item | 123 |
| issueType | Issue type locator | pb001 |
- | uniqueId | Unique id of test item | auto:c6edafc24a03c6f69b6ec070d1fd0089 |
| isAutoAnalyzed | Is test item auto analyzed | false |
| logs | Array of test item logs | |
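For illustration, an IndexTestItem payload matching the table might look like the following; the shape of the `logs` entries is an assumption, not part of the documented schema:

```python
# Hypothetical example of an IndexTestItem after the uniqueId field removal.
index_test_item = {
    "testItemId": 123,
    "issueType": "pb001",
    "isAutoAnalyzed": False,
    "logs": [  # assumed log entry shape, for illustration only
        {"logLevel": 40000, "message": "java.lang.NullPointerException"},
    ],
}
```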

@@ -176,4 +175,4 @@ The Analyzer does not send a response to the request.

## Examples

Custom [analyzer](https://github.com/ihar-kahadouski/custom-analyzer) written in Java using [Spring AMQP](https://spring.io/projects/spring-amqp).
17 changes: 2 additions & 15 deletions docs/work-with-reports/HistoryOfLaunches.mdx
@@ -18,19 +18,6 @@ Test case hash is a parameter that is automatically generated based on the Test

You can read about the Test Case ID [here](/work-with-reports/TestCaseId).

- ### Unique ID history
- ***(deprecated)***
- 
- 1. Take the ReportPortal release version 5.2.2 or higher. docker-compose.yml
- 
- 2. Add an environment variable to the service-API service:
- 
- ```
- RP_ENVIRONMENT_VARIABLE_HISTORY_OLD=true
- ```
- 
- 3. Redeploy ReportPortal
- 
### Test Case Hash history

Run ReportPortal without the environment variable.
@@ -104,7 +91,7 @@ If you click on the Total statistic for the launch *Regression_MacOS* and click

### History table for launches with the same name

- If you have configured ReportPortal with the [TestCase History table](/work-with-reports/HistoryOfLaunches#history-table) or with Unique ID ***(deprecated)***, this option is for you.
+ If you have configured ReportPortal with the [TestCase History table](/work-with-reports/HistoryOfLaunches#history-table), this option is for you.

**How can you open a history table with executions from launches with the same name?**
@@ -114,7 +101,7 @@ This option is for you.
- Click on the button 'History'
- Choose the option **'Launches with the same name'** in the drop-down 'BASE'

- **What information is shown on the table based on Unique ID (deprecated) or Test Case Hash (with the option "Launches with the same name")?**
+ **What information is shown on the table based on Test Case Hash (with the option "Launches with the same name")?**

On the history table, you can see the first 20 test cases and their last 10 (or 3/5/10/15/20/25/30) executions, taken only from launches with the same name on the project.
Each column on the history table corresponds to one execution.
23 changes: 0 additions & 23 deletions docs/work-with-reports/UniqueId.md

This file was deleted.
