diff --git a/docs/analysis/PatternAnalysis.mdx b/docs/analysis/PatternAnalysis.mdx
index 51e0146679..c884152035 100644
--- a/docs/analysis/PatternAnalysis.mdx
+++ b/docs/analysis/PatternAnalysis.mdx
@@ -5,48 +5,88 @@ sidebar_label: Pattern Analysis
 
 # Pattern Analysis
 
-Pattern analysis is a feature that helps you to speed up test failure analysis by finding common patterns in error logs.
+Pattern Analysis is a feature that helps you speed up test failure analysis by finding common patterns in error logs.
 
-## Types of Pattern Analysis
+## How to run Pattern Analysis
 
-**String** – any problem phrase.
+You can run Pattern Analysis automatically or manually.
 
-
+To run Pattern Analysis **automatically**:
 
-
+1. Go to the Project Settings.
 
-**Regex** – regular expression.
+2. Open the Pattern Analysis tab.
 
-
+3. Check the "Auto Pattern-Analysis" checkbox.
 
-
+4. Create a rule.
+
+5. Run a launch.
+
+6. After the launch finishes, Pattern Analysis runs automatically.
+
 
 :::note
-It would be better to use STRING rule instead of REGEX rule in all possible cases to speed up the Pattern Analysis processing in the database. As a result, you can get your analysis completed faster using the STRING patterns rather than REGEX and reduce the database workload.
+Automatic Pattern Analysis is activated by default.
 :::
 
-## Use case 1:
+If automatic Pattern Analysis is turned off, you can run it **manually** from the menu next to a particular launch:
+
+
+## How to create rules for Pattern Analysis
+
+To create a rule:
+
+1. Go to the Project Settings.
+2. Open the Pattern Analysis tab.
+3. Click the ‘Create Pattern’ button.
+4. Fill in the form.
+5. Click the ‘Create’ button.
+
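+If you would rather script rule creation than click through the UI, pattern rules can also be managed through the project settings REST API. The sketch below is an assumption based on the service-api controllers; the exact path and payload fields may differ between ReportPortal versions, so verify them against your instance before relying on it:
+
+```bash
+# Hypothetical sketch: create a STRING pattern rule via the REST API.
+# Host, project name, token, endpoint path, and payload fields are all
+# assumptions to be checked against your ReportPortal version.
+curl -X POST "https://<your-instance>/api/v1/<project_name>/settings/pattern" \
+  -H "Authorization: Bearer <api_token>" \
+  -H "Content-Type: application/json" \
+  -d '{"name": "null-response", "type": "STRING", "value": "Null response", "enabled": true}'
+```
+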
 
-**Problem:** A user knows the several common problems why test cases fail. During tests run a lot of test have failed. A user need to check logs a of tests to know by what reason test cases have failed.
+## Types of Pattern Analysis rules
 
-**Solution:** Create a pattern rules for all common reasons which contains a problem phrase (for example: *"Expected status code `<404>` but > was `<500>`"* or "*Null response"*) or with Regex query (for example: java:[0-9]*). Switch On a pattern analysis.
-Launch a test run.
-So that the ReportPortal systems finds all failed items which have known patterns in error logs and marks them with a label with pattern name.
-Find all items failed by the same reason by choosing a filter by Pattern Name on the Step view.
-Add The most popular pattern widget (TOP-20) and track the TOP-20 the most popular reason of test failing in the build.
+There are two types of Pattern Analysis rules:
+
+1. String – any problem phrase.
+
+
+2. Regex – regular expression.
+
+
+:::note
+Use a STRING rule instead of a REGEX rule wherever possible: STRING patterns are faster for the database to process, so the analysis completes sooner and the database workload stays lower.
+:::
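+
+As a quick illustration of the difference between the two rule types, here is how the example patterns from this page would behave as plain `grep` matches against a sample `error.log` (ReportPortal evaluates the rules in its database; `grep` is used here only to demonstrate the matching semantics):
+
+```bash
+# A STRING rule is a fixed phrase matched anywhere in the error log:
+grep -F 'Null response' error.log
+
+# A REGEX rule is a regular expression, e.g. matching a Java line
+# reference such as "MyTest.java:42":
+grep -E 'java:[0-9]*' error.log
+```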
 
-
+## Use case 1
 
-## Use case 2:
+**Problem:**<br></br>
+A user is aware of several common reasons why test cases fail. During a test run, many tests have failed, and the user needs to check the logs to identify the reasons behind the failures.
 
-**Problem:** Test run has finished. A user found that more than 3 items have failed by the same reason. And he want to find all such items.
+**Solution:**<br></br>
+Create pattern rules for all common failure reasons, which include specific problem phrases (e.g., `Expected status code <404> but was <500>` or `Null response`) or use Regex queries (e.g., `java:[0-9]*`). Enable Pattern Analysis and launch a test run. This way, ReportPortal can identify all failed items that match known patterns in the error logs and label them with the corresponding pattern name. To find all items that failed for the same reason, apply a filter by ‘Pattern Name’ in the Step view. Additionally, add a ‘Most Popular Pattern’ widget to track the top 20 most frequent reasons for test failures in the build.
 
-**Solution:** Create a new pattern rule on Project Settings. Launch a pattern analysis manually for one launch.
-name.
-Find all items failed by the same reason by choosing a filter by Pattern Name on the Step view.
+
-
+
+## Use case 2
+**Problem:**<br></br>
+The test run has finished, and the user notices that more than three items have failed for the same reason. The user wants to find all such items.
+**Solution:**<br></br>
+Create a new pattern rule in the Project Settings. Manually launch a pattern analysis for a specific test run. Use the ‘Pattern Name’ filter in the Step view to find all items that failed for the same reason.
+
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysis1.png b/docs/analysis/img/PatternAnalysis/PatternAnalysis1.png
new file mode 100644
index 0000000000..347010471c
Binary files /dev/null and b/docs/analysis/img/PatternAnalysis/PatternAnalysis1.png differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysis10.png b/docs/analysis/img/PatternAnalysis/PatternAnalysis10.png
new file mode 100644
index 0000000000..50330ffc0c
Binary files /dev/null and b/docs/analysis/img/PatternAnalysis/PatternAnalysis10.png differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysis2.png b/docs/analysis/img/PatternAnalysis/PatternAnalysis2.png
new file mode 100644
index 0000000000..8907cc375e
Binary files /dev/null and b/docs/analysis/img/PatternAnalysis/PatternAnalysis2.png differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysis3.png b/docs/analysis/img/PatternAnalysis/PatternAnalysis3.png
new file mode 100644
index 0000000000..7047e9bd43
Binary files /dev/null and b/docs/analysis/img/PatternAnalysis/PatternAnalysis3.png differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysis4.png b/docs/analysis/img/PatternAnalysis/PatternAnalysis4.png
new file mode 100644
index 0000000000..06ea7e597d
Binary files /dev/null and b/docs/analysis/img/PatternAnalysis/PatternAnalysis4.png differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysis5.png b/docs/analysis/img/PatternAnalysis/PatternAnalysis5.png
new file mode 100644
index 0000000000..cd71e4773f
Binary files /dev/null and b/docs/analysis/img/PatternAnalysis/PatternAnalysis5.png differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysis6.png b/docs/analysis/img/PatternAnalysis/PatternAnalysis6.png
new file mode 100644
index 0000000000..cfe3511beb
Binary files /dev/null and b/docs/analysis/img/PatternAnalysis/PatternAnalysis6.png differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysis7.png b/docs/analysis/img/PatternAnalysis/PatternAnalysis7.png
new file mode 100644
index 0000000000..7be7d93a9d
Binary files /dev/null and b/docs/analysis/img/PatternAnalysis/PatternAnalysis7.png differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysis8.png b/docs/analysis/img/PatternAnalysis/PatternAnalysis8.png
new file mode 100644
index 0000000000..f6d49107d1
Binary files /dev/null and b/docs/analysis/img/PatternAnalysis/PatternAnalysis8.png differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysis9.png b/docs/analysis/img/PatternAnalysis/PatternAnalysis9.png
new file mode 100644
index 0000000000..fe9c39b7db
Binary files /dev/null and b/docs/analysis/img/PatternAnalysis/PatternAnalysis9.png differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysisRegex1.png b/docs/analysis/img/PatternAnalysis/PatternAnalysisRegex1.png
deleted file mode 100644
index f3e3e43d1c..0000000000
Binary files a/docs/analysis/img/PatternAnalysis/PatternAnalysisRegex1.png and /dev/null differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysisRegex2.png b/docs/analysis/img/PatternAnalysis/PatternAnalysisRegex2.png
deleted file mode 100644
index 7ef5eb6522..0000000000
Binary files a/docs/analysis/img/PatternAnalysis/PatternAnalysisRegex2.png and /dev/null differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysisString1.png b/docs/analysis/img/PatternAnalysis/PatternAnalysisString1.png
deleted file mode 100644
index bc7bf7f88e..0000000000
Binary files a/docs/analysis/img/PatternAnalysis/PatternAnalysisString1.png and /dev/null differ
diff --git a/docs/analysis/img/PatternAnalysis/PatternAnalysisString2.png b/docs/analysis/img/PatternAnalysis/PatternAnalysisString2.png
deleted file mode 100644
index b71765cbe5..0000000000
Binary files a/docs/analysis/img/PatternAnalysis/PatternAnalysisString2.png and /dev/null differ
diff --git a/docs/installation-steps-advanced/ScalingReportPortalServices.md b/docs/installation-steps-advanced/ScalingReportPortalServices.md
new file mode 100644
index 0000000000..668521cc70
--- /dev/null
+++ b/docs/installation-steps-advanced/ScalingReportPortalServices.md
@@ -0,0 +1,41 @@
+---
+sidebar_position: 13
+sidebar_label: Scaling ReportPortal services
+---
+
+# Scaling ReportPortal services
+
+ReportPortal supports dynamic scaling of its API service during runtime to efficiently manage varying loads. This guide provides instructions on how to scale the API service up or down and discusses the implications for asynchronous reporting and queue management in RabbitMQ while scaling.
+
+ReportPortal also supports scaling of the UAT and UI services. However, scaling the Jobs service is not recommended due to potential conflicts with cleaning cron jobs, which may lead to database locking issues.
+
+To scale ReportPortal effectively, follow these steps:
+
+1. **Additional resources**: Increase capacity by deploying more instances or by enhancing the resources (CPU and memory) of existing ones.
+2. **Load Balancing**: Traefik (for Docker) and the Ingress Controller (for Kubernetes) are already set up to automatically distribute incoming requests among all active services.
+3. **AMQP settings**: Performance improvements can be achieved by increasing the queue count and adjusting the prefetch count per consumer. These adjustments allow for more efficient processing and resolution of messages within the queues. For more detailed information, refer to the article [Asynchronous Reporting](/developers-guides/AsynchronousReporting/#exchanges-and-queues-for-reporting).
+
+## Kubernetes Configuration
+
+1. **Scaling Services**: To scale your [ReportPortal services in Kubernetes](https://github.com/reportportal/kubernetes), increase the replica count parameter in the `values.yaml` file for the required services. For example, to scale the API service, adjust the `replicaCount` as shown below:
+
+   ```yaml
+   serviceapi:
+     replicaCount: 2
+   ```
+
+2. **Load Balancing**: The Ingress Controller is already set up to automatically distribute incoming requests among all active services. However, to gain more control over idle TCP connections, adjust the IDLE Timeout value to `300`.
+
+## Docker Configuration
+
+1. **Scaling Services**: To scale your [ReportPortal services in Docker](https://github.com/reportportal/reportportal/blob/master/docker-compose.yml), add a replica parameter in the `docker-compose.yml` file for the required services. For example, to scale the API service, adjust the `replicas` as shown below:
+
+   ```yaml
+   services:
+
+     api:
+       deploy:
+         replicas: 2
+   ```
+
+2. **Load Balancing**: Traefik is already set up to automatically distribute incoming requests among all active services.
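+
+## Verifying the Scaling
+
+After applying either configuration, it is worth confirming that the expected number of replicas is actually running. The commands below are a minimal sketch; the Kubernetes label selector is an assumption and may need to be adjusted to the labels used by your release:
+
+```bash
+# Docker: list the running containers of the api service
+docker compose -p reportportal ps api
+
+# Kubernetes: list the API service pods (adjust the label selector to your chart)
+kubectl get pods -l app.kubernetes.io/name=serviceapi
+```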
diff --git a/docs/installation-steps-advanced/ScalingUpReportPortalAPIService.md b/docs/installation-steps-advanced/ScalingUpReportPortalAPIService.md
deleted file mode 100644
index 4c87f93952..0000000000
--- a/docs/installation-steps-advanced/ScalingUpReportPortalAPIService.md
+++ /dev/null
@@ -1,106 +0,0 @@
----
-sidebar_position: 13
-sidebar_label: Scaling Up the ReportPortal Service API
----
-
-# Scaling Up the ReportPortal Service API
-
-ReportPortal supports dynamic scaling of its API service during runtime to efficiently manage varying loads. This guide provides instructions on how to scale the API service up or down, and discusses the implications on asynchronous reporting and queue management in RabbitMQ while scaling.
-
-## Scaling Up the API Service
-
-### Steps to Scale Up
-1. **Launch Additional Instances**: Increase the capacity by starting more instances of the API service.
-2. **Load Balancing**: The load balancer will automatically distribute incoming requests among all active API service instances.
-
-## Scaling Down the API Service
-
-### Steps to Scale Down
-1. **Shutdown Instances**: Decrease the scale by shutting down any of the API service instances.
-2. **Message Redistribution**: Messages in the queues of the shutdown instance will automatically shift to the queues of the remaining active APIs.
-3. **Queue Cleanup**: Inactive queues (those not receiving any new messages) will be removed after a few minutes.
-
-## Impact on Asynchronous Reporting and Queue Management
-
-### Considerations for Scaling Up
-- **Message Rebalancing**: During periods of heavy asynchronous reporting, scaling up may cause messages to be rebalanced across different queues, despite using "Consistent Hashing Algorithm" for distribution. It may lead to an increased number of retries. It might take approx. 2 hours to restore order using retry logic with progressively increasing TTL for each message.
-- **Avoid During Heavy Reporting**: Given the potential complexities in message handling when scaling up, it is advisable to refrain from doing so during extensive reporting activities to prevent hard-to-resolve situations and missed reporting items.
-
-### Considerations for Scaling Down
-- **Continuity in Message Processing**: Shutting down an API instance leads to its queues redistributing their messages to the remaining queues, ensuring no disruption in processing.
-
-### Notable Effects
-Scaling operations primarily affect asynchronous report processing and management within RabbitMQ queues:
-
-- **Order Processing Assurance**: To maintain correct order processing of reports for a specific Launch, all requests are directed to one particular queue and handled by only one consumer.
-
-### About RabbitMQ Queues
-- **Scaling Limitations**: Currently, it is not possible to scale queues in RabbitMQ with spreading requests across multiple queues and consumers.
-
-
-## Scaling up configuration for ReportPortal API Service
-
-### Kubernetes
-
-To scale your ReportPortal services in Kubernetes, you need to adjust the `replicaCount` and `queues.totalNumber` in your `values.yaml` file.
-
-1. **Update Replica Count**:
-   Change `replicaCount` from `1` to `2` for additional replication.<br></br>
-   [values.yaml replicaCount](https://github.com/reportportal/kubernetes/blob/master/reportportal/values.yaml#L91)
-
-2. **Edit Total Number of Queues**:
-   Modify `queues` from `10` to `20` to increase the total available queues.<br></br>
-   [values.yaml queues](https://github.com/reportportal/kubernetes/blob/master/reportportal/values.yaml#L159)
-
-### Docker
-
-To scale your ReportPortal services using Docker, update the environment variables and duplicate the API values block.
-
-- **Set Environment Variables**:
-  Add `RP_AMQP_QUEUES` and `RP_AMQP_QUEUESPERPOD` to your API environment variables.<br></br>
-  [docker-compose.yml environment](https://github.com/reportportal/reportportal/blob/v23.2/docker-compose.yml#L202)<br></br>
-  ```bash
-  version: '3.8'
-  services:
-
-    api:
-      <...>
-      environment:
-        REPORTING_QUEUES_COUNT: 10
-        <...>
-  ```
-
-#### Docker Compose v2
-- **Duplicate API Values Block**:
-  Create a copy of the API values block and rename `api` to `api_replica_1` to facilitate scaling.<br></br>
-  [docker-compose.yml API values block](https://github.com/reportportal/reportportal/blob/v23.2/docker-compose.yml#L191-L241)<br></br>
-  ```bash
-  version: '3.8'
-  services:
-
-    api:
-      <...>
-      environment:
-        REPORTING_QUEUES_COUNT: 10
-        <...>
-
-    api_replica_1:
-      <...>
-      environment:
-        REPORTING_QUEUES_COUNT: 10
-        <...>
-  ```
-
-#### Docker Compose v3.3+
-- **Add replicas**:
-  Add `deploy.replicas: 2` to your API:
-  ```bash
-  version: '3.8'
-  services:
-
-    api:
-      <...>
-      deploy:
-        replicas: 2
-        <...>
-  ```
diff --git a/docs/installation-steps-advanced/UpgradingPostgreSQLForReportPortalV24.2AndLater.md b/docs/installation-steps-advanced/UpgradingPostgreSQLForReportPortalV24.2AndLater.md
new file mode 100644
index 0000000000..9b0fd99c7e
--- /dev/null
+++ b/docs/installation-steps-advanced/UpgradingPostgreSQLForReportPortalV24.2AndLater.md
@@ -0,0 +1,64 @@
+# Upgrading PostgreSQL for ReportPortal v24.2 and later
+
+:::important
+This guide is intended for users planning to upgrade from Postgres 12 to a newer version, starting with ReportPortal version 24.2.
+:::
+
+This guide walks you through backing up your current PostgreSQL database, removing the existing containers and volumes, downloading the latest release, and restoring the PostgreSQL dump.
+
+## Step 0: Back Up Postgres and Storage
+Before proceeding, ensure you have a complete backup of your Postgres database and storage.
+
+## Step 1: Create a Database Dump
+Run the following command to create a dump of your current PostgreSQL database:
+
+```bash
+docker exec -t postgres pg_dump -U rpuser -d reportportal > reportportal24_1_postgres12_dump.sql
+```
+
+## Step 2: Remove All Containers
+Shut down and remove all containers:
+
+```bash
+docker compose -p reportportal down
+```
+
+## Step 3: Remove the Postgres Volume
+Remove the Postgres volume to ensure a clean state for the new database:
+
+```bash
+docker volume rm reportportal_postgres
+```
+
+## Step 4: Download the Latest Release
+Fetch the latest `docker-compose.yml` file to get the most recent version of ReportPortal:
+
+```bash
+curl -LO https://raw.githubusercontent.com/reportportal/reportportal/refs/heads/master/docker-compose.yml
+```
+
+## Step 5: Run the Postgres Container
+Start only the Postgres container to prepare for the database restoration:
+
+```bash
+docker compose -p reportportal up -d postgres
+```
+
+## Step 6: Restore the Postgres Dump
+Restore the database dump into the new Postgres container:
+
+```bash
+docker exec -i -e PGPASSWORD=rppass postgres psql -U rpuser -d reportportal < reportportal24_1_postgres12_dump.sql > upgrade_db.log 2>&1
+```
+
+## Step 7: Run ReportPortal
+Bring up all the ReportPortal services:
+
+```bash
+docker compose -p reportportal up -d
+```
+
+## Final Notes
+- Verify that all services are running correctly by running `docker ps` or checking the logs.
+- Keep the `upgrade_db.log` file for any potential troubleshooting.
+- Regular backups are essential, so make sure a reliable backup strategy is in place.
diff --git a/docusaurus.config.js b/docusaurus.config.js
index bcfebaaf46..a89953512a 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -525,7 +525,7 @@ const config = {
         from: '/installation-steps/ReportPortal23.1FileStorageOptions',
       },
       {
-        to: '/installation-steps-advanced/ScalingUpReportPortalAPIService',
+        to: '/installation-steps-advanced/ScalingReportPortalServices',
         from: '/installation-steps/ScalingUpReportPortalAPIService',
       },
       {