diff --git a/docs/Accessing-for-the-First-Time.md b/docs/Accessing-for-the-First-Time.md index 16ba0cf..bfadf91 100644 --- a/docs/Accessing-for-the-First-Time.md +++ b/docs/Accessing-for-the-First-Time.md @@ -6,7 +6,7 @@ nav_order: 2 ## Getting Started -To begin, navigate to [OpenFlow](https://app.openiap.io), where you'll encounter a login page like this: +To begin, navigate to [OpenCore](https://app.openiap.io), where you'll encounter a login page like this: ![Authentication page](Accessing-for-the-First-Time/Authentication-page.png) @@ -24,8 +24,8 @@ Check your email for a validation code. If you don't see it within a minute, rem ![Validation Code](Accessing-for-the-First-Time/Validation-Code.png) > **Note:** -> If you are using a locally hosted OpenFlow, or signed in using federation, you will not be asked for a validation code. +> If you are using a locally hosted OpenCore, or signed in using federation, you will not be asked for a validation code. Congratulations, you have now created your first user account! -![OpenIAP Flow frontpage](Accessing-for-the-First-Time/OpenIAP-Flow-frontpage.png) +![OpenCore frontpage](Accessing-for-the-First-Time/OpenIAP-Flow-frontpage.png) diff --git a/docs/activities/Basic-Activities.md b/docs/activities/Basic-Activities.md index db593d6..bdb1bb5 100644 --- a/docs/activities/Basic-Activities.md +++ b/docs/activities/Basic-Activities.md @@ -174,8 +174,8 @@ ClipWait, % Timeout, % WaitForAnyData **Why:** This is an excellent way to make a workflow react to things in the environment. You could make a workflow that helps filling in information into a form when the user presses a specific keyboard combination, or you would show a helpful dialog, when ever a user opens a Timesheet, or maybe you want to add extra actions to an existing button. -Note, as soon as an activity has been created, all actions will also be sent to OpenFlow. 
-You can use OpenFlow to trigger other robots based on a trigger ( or interact with one of the more than 2000 different systems supported ) +Note, as soon as an activity has been created, all actions will also be sent to OpenCore. +You can use OpenCore to trigger other robots based on a trigger ( or interact with one of the more than 2000 different systems supported ) ![1558723403613](activities/DetectorNode.png) @@ -185,9 +185,9 @@ You can use OpenFlow to trigger other robots based on a trigger ( or interact wi ![1561191004924](activities/InvokeOpenFlowNode.png) -**What:** Call a workflow inside OpenFlow +**What:** Call a workflow inside OpenCore -**How:** Insert a workflow node inside OpenFlow, check RPA to make it visible to robots and click deploy. Now you can select this workflow inside the InvokeOpenFlow activity inside OpenRPA. All variables in the workflow will be sent to the workflow in msg.payload, and any data in msg.payload will be sent back to the robot once completed, if a corresponding variable exists. +**How:** Insert a workflow node inside OpenCore, check RPA to make it visible to robots and click deploy. Now you can select this workflow inside the InvokeOpenFlow activity inside OpenRPA. All variables in the workflow will be sent to the workflow in msg.payload, and any data in msg.payload will be sent back to the robot once completed, if a corresponding variable exists. **Why:** Greatly improves to possibilities in RPA workflow, by giving access to other robors and more than 2000 other systems, using an easy to use drag and drop workflow engine. @@ -199,7 +199,7 @@ You can use OpenFlow to trigger other robots based on a trigger ( or interact wi **How:** Drag in InvokeOpenRPA and select the workflow you would like to call. Any arguments in the targeted workflow will be mapped to local variables of the same name, to support transferring parameters between the two workflows. 
Click "Add variable" to have all the in and out arguments in the targeted workflow created locally in the current scope/sequence. -**Why:** More complex workflows is easier to manage if split up to smaller "chucks" that call each other. Having multiple smaller workflows also give easy access to run statistics on each part of the workflow using OpenFlow. +**Why:** More complex workflows are easier to manage if split into smaller "chunks" that call each other. Having multiple smaller workflows also gives easy access to run statistics on each part of the workflow using OpenCore. # InvokeRemoteOpenRPA @@ -209,13 +209,13 @@ You can use OpenFlow to trigger other robots based on a trigger ( or interact wi **How:** Just select the robot or role you want to send the request to, and the list of workflows will limit it self to what ever that robot or role have access too. If the workflow takes or returns arguments clicking "Add variables" will add a variable for each for those to the current sequence. -**Why:** Easier than having to add an OpenFlow workflow to handle calling the robot. It's always more secure to use OpenFlow to handle multiple robots, but sometimes its nice to have a fast easy way, to get something done on multiple machines. +**Why:** Easier than having to add an OpenCore workflow to handle calling the robot. It's always more secure to use OpenCore to handle multiple robots, but sometimes it's nice to have a fast, easy way to get something done on multiple machines. # InsertOne ![1564847314516](activities/InsertOne.png) -**What:** Take any object and convert it to a Json document and saves it in the database if connected to an OpenFlow instance. +**What:** Takes any object, converts it to a JSON document, and saves it in the database if connected to an OpenCore instance. **How:** Set object to save in Item, override document type using the Type field, use Encrypt Fields to define what elements of the document to encrypt with EAS 256bit encryption.
Result contains the result of the insert. @@ -225,7 +225,7 @@ You can use OpenFlow to trigger other robots based on a trigger ( or interact wi ![1564847604412](activities/InsertOrUpdateOne.png) -**What:** Take any object and convert it to a Json document and saves it in the database if connected to an OpenFlow instance. +**What:** Takes any object, converts it to a JSON document, and saves it in the database if connected to an OpenCore instance. **How:** Works just like InsertOne. Using Uniqueness you can define a custom unique constraint when inserting or updating. Say you have an object with an property "department" and you know all departments have an unique name. instead of manually testing if the object already exists you can use InsertOrUpdate and set type to "department" and Uniqueness to "department,_type" ( type is saved as _type in the database). If Uniqueness is not supplied the default constrain of using _id is used. @@ -236,7 +236,7 @@ Using Uniqueness you can define a custom unique constraint when inserting or upd ![1564847997708](activities/DeleteOne.png) -**What:** Delete an document from the database in OpenFlow +**What:** Deletes a document from the database in OpenCore **How:** Either supply the object in item or the ID in _id @@ -246,7 +246,7 @@ Using Uniqueness you can define a custom unique constraint when inserting or upd ![1564848147347](activities/query.png) -**What:** Search the database in OpenFlow. +**What:** Search the database in OpenCore. **How:** Supply an [MongoDB query](https://docs.mongodb.com/manual/tutorial/query-documents/) in QueryString and get result as a array of [JObjects](https://www.newtonsoft.com/json/help/html/T_Newtonsoft_Json_Linq_JObject.htm) in result @@ -276,9 +276,9 @@ Using Uniqueness you can define a custom unique constraint when inserting or upd ![image-20200116103439150](activities/SaveFile.png) -**What:** Upload a file to [GridFS](https://docs.mongodb.com/manual/core/gridfs/) in the database in OpenFlow.
+**What:** Upload a file to [GridFS](https://docs.mongodb.com/manual/core/gridfs/) in the database in OpenCore. -**How:** Uploads a file to OpenFlow, can be downloaded again using GetFile, updated using InsertOrUpdateOne and deleted using RemoveOne +**How:** Uploads a file to OpenCore; it can be downloaded again using GetFile, updated using InsertOrUpdateOne, and deleted using RemoveOne **Why:** Easy way to save and work with data across domains/multiple robots or use it as a convenient database. @@ -286,7 +286,7 @@ Using Uniqueness you can define a custom unique constraint when inserting or upd ![image-20200116103517686](activities/GetFile.png) -**What:** Download a file from [GridFS](https://docs.mongodb.com/manual/core/gridfs/) stored in the database in OpenFlow. +**What:** Download a file from [GridFS](https://docs.mongodb.com/manual/core/gridfs/) stored in the database in OpenCore. **How:** Download a file again using either _id or get the latest version based on the filename, if user has access. @@ -306,11 +306,11 @@ GetCredentials ![image-20200116104028045](activities/GetCredentials.png) -**What:** Gets credentials by name from OpenFlow +**What:** Gets credentials by name from OpenCore **How:** Set name, and save username and password in variables, if you cannot use SecureString use UnsecurePassword to save the password in a string variable instead. -**Why:** It's not good practice to keep username and passwords in a workflow, so is more safe to save the them inside OpenFlow where username and password will be encrypted. +**Why:** It's not good practice to keep usernames and passwords in a workflow, so it is safer to save them inside OpenCore, where the username and password will be encrypted.
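The Query activity above takes a MongoDB-style filter document. As an illustration only, the sketch below evaluates a tiny subset of that filter syntax in memory with plain JavaScript; the `matches` helper is invented for this example and is not part of OpenRPA or the MongoDB driver (the real activity sends the filter string to MongoDB itself).

```javascript
// Hypothetical in-memory evaluator for a small subset of MongoDB filter
// syntax: shorthand equality and the $eq/$gt operators. Illustration only.
function matches(doc, filter) {
  return Object.entries(filter).every(([field, cond]) => {
    if (cond !== null && typeof cond === "object") {
      // operator form, e.g. { $gt: 100 }
      return Object.entries(cond).every(([op, val]) => {
        if (op === "$gt") return doc[field] > val;
        if (op === "$eq") return doc[field] === val;
        throw new Error("unsupported operator: " + op);
      });
    }
    return doc[field] === cond; // shorthand equality, e.g. { _type: "invoice" }
  });
}

const docs = [
  { _type: "invoice", amount: 120, status: "open" },
  { _type: "invoice", amount: 40, status: "paid" },
  { _type: "order", amount: 300, status: "open" },
];

// Equivalent of a QueryString like: { "_type": "invoice", "amount": { "$gt": 100 } }
const result = docs.filter(d => matches(d, { _type: "invoice", amount: { $gt: 100 } }));
console.log(result.length); // 1
```

The same shape of filter, written as a JSON string, is what goes into the activity's QueryString field.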
# SetAutoLogin diff --git a/docs/activities/Plugin-Script-Activities.md b/docs/activities/Plugin-Script-Activities.md index 3cd911c..4fa3ea3 100644 --- a/docs/activities/Plugin-Script-Activities.md +++ b/docs/activities/Plugin-Script-Activities.md @@ -158,5 +158,5 @@ Output Argument of the dynamic activity, the recommended output argument include - process activities[].requires - validate rpa_args - show usage of each script activity -- `optional` load from remote (eg. OpenFlow) +- `optional` load from remote (eg. OpenCore) - `optional` manage script activities \ No newline at end of file diff --git a/docs/flow/Agent-Getting-Started.md b/docs/flow/Agent-Getting-Started.md index 7a6e271..e9602a5 100644 --- a/docs/flow/Agent-Getting-Started.md +++ b/docs/flow/Agent-Getting-Started.md @@ -1,12 +1,12 @@ --- layout: default title: Agent Quick Start Guide -parent: What Is OpenIAP Flow +parent: What Is OpenCore nav_order: 11 --- # creating your first agent -In OpenIAP flow, an agent means something that can run a package. A package is a zip file with a package definition and some code files. The code can be in multiple different languages (at the time of writing this, either NodeJS, Python or .NET 6+). This code can be divided into two categories: code that runs once and exits, and code that runs as a daemon and reacts to something. +In OpenCore, an agent means something that can run a package. A package is a zip file with a package definition and some code files. The code can be in multiple different languages (at the time of writing this, either NodeJS, Python or .NET 6+). This code can be divided into two categories: code that runs once and exits, and code that runs as a daemon and reacts to something. # Video walk through Working with agents and sdk’s @@ -23,7 +23,7 @@ To begin, you need to install Visual Studio Code and any language(s) you plan to **Note:** When installing, make sure to select the option to add the languages to the path. 
Once installed, open Visual Studio Code and go to Extensions (Ctrl+Shift+X). Search for "OpenIAP" and install [OpenIAP assistant](https://marketplace.visualstudio.com/items?itemName=openiap.openiap-assistant) -Next, open the Palette and search for "Add OpenIAP flow instance", and follow the guide. For this demo, you can accept all the default values. When prompted for a username, just press Enter to login using the browser and create/login to your [app.openiap.io](https://app.openiap.io/#/Login) account. +Next, open the Palette and search for "Add OpenCore instance", and follow the guide. For this demo, you can accept all the default values. When prompted for a username, just press Enter to log in using the browser and create or log in to your [app.openiap.io](https://app.openiap.io/#/Login) account. As always, you can also [use your own locally installed instance](https://github.com/open-rpa/docker). @@ -34,9 +34,9 @@ Next, open the Palette and search for "Initialize project ... for OpenIAP instan If you run this command in an empty workspace, it will automatically detect which languages you have installed and add one example file for each language. It will also install any OpenIAP dependencies necessary for your project. -You should now have an .vscode folder with a launch.json file. This will contain the launch setting for each of the example files added. The settings will include 2 environment variables, apurl and jwt. If you have more OpenIAP instances added, you can swap these by calling "Initialize project" again. This way, you can quickly test your code against multiple different OpenIAP flow instances. +You should now have a .vscode folder with a launch.json file. This will contain the launch settings for each of the example files added. The settings will include two environment variables, apurl and jwt. If you have more OpenIAP instances added, you can swap these by calling "Initialize project" again.
This way, you can quickly test your code against multiple different OpenCore instances. -If Node.js was detected, it will also contain a node_modules folder, and finally, it will contain a package.json. The package.json is mandatory no matter the language you are writing in, since this is how the VS Code extension and OpenFlow agents recognize your project dependencies and how to run it. Most notably, there will be a "main" entry, which tells the agents what is the main code file for this package. It also contains an "openiap" object with the general settings for this project, like programming language and requirements for the host running it. +If Node.js was detected, it will also contain a node_modules folder, and finally, it will contain a package.json. The package.json is mandatory no matter the language you are writing in, since this is how the VS Code extension and OpenCore agents recognize your project dependencies and how to run it. Most notably, there will be a "main" entry, which tells the agents what is the main code file for this package. It also contains an "openiap" object with the general settings for this project, like programming language and requirements for the host running it. If Python was detected, a requirements.txt file will also be added. This is where you add any Python packages that are needed to run your code. For now, this will contain a reference to the OpenIAP package. @@ -45,7 +45,7 @@ If both NodeJS and Python are detected, you'll have a `main.js` and a `main.py` To start debugging, open Run and Debug (using the shortcut Ctrl+Shift+D) and select one of the launch profiles from the list, then press run (shortcut F5). -First, the code will create an instance of the OpenIAP client. Next, we attach a function to define what code we want to run when it has connected to the OpenIAP flow instance. This ensures that if we lose connection for some reason, the same code will run every time, like registering queues. 
+First, the code will create an instance of the OpenIAP client. Next, we attach a function to define what code we want to run when it has connected to the OpenCore instance. This ensures that if we lose connection for some reason, the same code will run every time, like registering queues. Next, the code calls `connect()`. If using Python, it will start an event loop. Once connected, we first register and start consuming a message queue. In this example, we're using a temporary queue. In real life, it would be better to update this to be a static name or read it from an environment variable. As an example, we show how you could pop a work item off a work item queue when you receive a message. @@ -55,7 +55,7 @@ Lastly, we demonstrate how you can query the database for a few documents in the To deploy the code, open the command palette and search for "Pack and publish to OpenIAP instance". If you have more than one instance added, you'll be prompted to select one. This will pack all files using npm and upload the package file, as well as a package definition, to the selected OpenIAP instance. # Running package as an Agent -To run the package as an agent, first log in to your OpenFlow instance in a browser and go to Agents. Next, click "Packages" and make sure your package is listed. Click on the package to inspect its settings. Note that the package hasn't been enabled as a daemon yet - when a package is running in Docker, it's expected to be running as a daemon. Our code currently does this, but we haven't told it so. +To run the package as an agent, first log in to your OpenCore instance in a browser and go to Agents. Next, click "Packages" and make sure your package is listed. Click on the package to inspect its settings. Note that the package hasn't been enabled as a daemon yet - when a package is running in Docker, it's expected to be running as a daemon. Our code currently does this, but we haven't told it so. Head back to VS Code and open `package.json`. 
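The connect/re-register pattern described above (setup code attached to a connected-handler so it re-runs on every reconnect) can be sketched like this. The client and method names below are invented stand-ins for illustration, not the real OpenIAP SDK API; the point is only that per-connection setup, like queue registration, belongs in the handler so it survives a network drop.

```javascript
// Sketch of the connect/re-register pattern. MockClient is a hypothetical
// stand-in: a real client would talk to the server, but the lifecycle shape
// is the same — every (re)connect starts clean and fires onConnected.
class MockClient {
  constructor() { this.queues = []; this.onConnected = null; }
  connect() {
    this.queues = [];                 // a (re)connect starts with no registrations
    if (this.onConnected) this.onConnected();
  }
  registerQueue(name) { this.queues.push(name); }
}

const client = new MockClient();
client.onConnected = () => {
  // runs on every connect, so the queue is re-registered after a reconnect
  client.registerQueue("myqueue");
};

client.connect();             // initial connect
client.connect();             // simulated reconnect after a network drop
console.log(client.queues);   // [ 'myqueue' ] — registered again, not lost
```

Had the `registerQueue` call been made once outside the handler, the simulated reconnect would have left `queues` empty, which is exactly the failure mode the pattern avoids.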
Under "openiap," set "daemon" to "true," then deploy the package once more by opening the command palette and selecting "Pack and publish to OpenIAP instance." @@ -74,7 +74,7 @@ Head to [OpenIAP Desktop Assistant](https://github.com/openiap/assistant/release -The first time you run it, you will be prompted to select the OpenIAP flow instance you want to be connected to. Make sure the url matches the instance you want to connect to, then click the "Connect" button. This will open your local browser and prompt you to signin to feed a token into the agent. The agent will now login and register it self as an agent in the openiap flow instance. In the browser windows click "Agents" and validate you see your agent listed by "hostname / username". +The first time you run it, you will be prompted to select the OpenCore instance you want to be connected to. Make sure the url matches the instance you want to connect to, then click the "Connect" button. This will open your local browser and prompt you to signin to feed a token into the agent. The agent will now login and register it self as an agent in the OpenCore instance. In the browser windows click "Agents" and validate you see your agent listed by "hostname / username". Now go to the agent window and validate you see the agent is signed in and has listed all the packages you have access to, this should include the package we deployed above. Now click the package link, this will start the package and you can see the console output live inside the Agent. @@ -85,4 +85,4 @@ Open a terminal as administrator (Run as Administrator) or root (`sudo -s`) and This will download and install the `nodeagent` package and run it as a command-line program. By default, it will install itself as a service called “nodeagent” after asking a few questions. -First, it asks for an API URL to use; you can use the URL you saw in the `launch.json` file above. 
Next, it prompts you to open the URL listed in the console, in a browser, to approve the service to request a JWT token to be used for the agent. Once you have signed in, the service will be installed and start running. The service will register itself as an agent in the selected OpenIAP flow instance. So click "Agents" and validate that you now see the local daemon installed (hostname/root or localsystem). Click it to see what programming languages it detects that are supported. +First, it asks for an API URL to use; you can use the URL you saw in the `launch.json` file above. Next, it prompts you to open the URL listed in the console, in a browser, to approve the service to request a JWT token to be used for the agent. Once you have signed in, the service will be installed and start running. The service will register itself as an agent in the selected OpenCore instance. So click "Agents" and validate that you now see the local daemon installed (hostname/root or localsystem). Click it to see which supported programming languages it detects. diff --git a/docs/flow/AgentsPage.md b/docs/flow/AgentsPage.md index b5a5473..23ae648 100644 --- a/docs/flow/AgentsPage.md +++ b/docs/flow/AgentsPage.md @@ -1,14 +1,14 @@ --- layout: default title: Agents Page -parent: What Is OpenIAP Flow +parent: What Is OpenCore nav_order: 14 --- # Agents Page ## Term Definition An agent is the name for a machine or desktop that is running the [agent runtime](https://github.com/openiap/nodeagent). -A package is some code and a project.json file uploaded as a .tgz file to OpenFlow. These files are then downloaded and executed by agents. +A package is some code and a project.json file uploaded as a .tgz file to OpenCore. These files are then downloaded and executed by agents. Assistant is a cross-platform Electron app that a user can run to allow running packages on demand in the user's context.
## Agent Capabilities @@ -20,18 +20,18 @@ Most examples on the [OpenIAP GitHub repo](https://github.com/openiap/) will sho ## Agent Runtimes ### Docker/Kubernetes/OpenShift -Agents can be started inside Docker/Kubernetes/OpenShift using OpenFlow. -When we do that, you can decide if any packages that will be running on the agent need to expose a web interface. If so, it will create an ingress route to the agent using the slug name. For instance, if you create an agent and it gets assigned the slug name `dawn-cloud-223c` and you are accessing an OpenFlow instance running at `app.openiap.io`. Once a package starts exposing a website/API, you can access that on https://dawn-cloud-223c.app.openiap.io. -Agents in Kubernetes/OpenShift can also be configured to auto-assign one or more persistent storage volumes. This is handy when running services or code that needs to persist data that may not be suitable to be stored inside the OpenFlow database. Be careful to back up this data. +Agents can be started inside Docker/Kubernetes/OpenShift using OpenCore. +When we do that, you can decide if any packages that will be running on the agent need to expose a web interface. If so, it will create an ingress route to the agent using the slug name. For instance, if you create an agent and it gets assigned the slug name `dawn-cloud-223c` and you are accessing an OpenCore instance running at `app.openiap.io`, then once a package starts exposing a website/API, you can access it at https://dawn-cloud-223c.app.openiap.io. +Agents in Kubernetes/OpenShift can also be configured to auto-assign one or more persistent storage volumes. This is handy when running services or code that needs to persist data that may not be suitable to be stored inside the OpenCore database. Be careful to back up this data. All packages started on an agent will share the same credentials, so it is vital to always run agents with a user that does not have access to any sensitive data.
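The ingress naming described above takes the agent's slug and makes it a subdomain of the instance's domain. A trivial sketch, using the example values from the text (real deployments and TLS setup may differ):

```javascript
// Hypothetical helper showing the slug-as-subdomain scheme described above.
// Both arguments are the example values from the documentation text.
function ingressUrl(slug, instanceDomain) {
  return `https://${slug}.${instanceDomain}`;
}

console.log(ingressUrl("dawn-cloud-223c", "app.openiap.io"));
// https://dawn-cloud-223c.app.openiap.io
```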
-All packages will be configured to talk to the same OpenFlow instances, i.e., you cannot have multiple packages that are connected to different OpenFlow instances. (Well, you can, but it's not supported and not recommended.) +All packages will be configured to talk to the same OpenCore instance, i.e., you cannot have multiple packages that are connected to different OpenCore instances. (Well, you can, but it's not supported and not recommended.) OpenIAP offers images with and without Chromium installed, for web automation inside agents. But you are free to create your own images if you have special demands. For instance, if it takes a long time to install all dependencies and you need fast startup times, it can be very handy to have a separate image with all those pre-installed. This is what OpenIAP does with the Node-RED image. Node-RED is simply a package referencing the Node-RED npm module, and is deployed as a package. But the Node-RED image has the package pre-loaded for faster startup. ### NodeAgent / Daemon Agents can be installed as a daemon on Windows, MacOS, and Linux (even Raspberry Pi OS). The agent requires NodeJS 14 or higher (18 or 20 recommended right now) to be pre-installed. Once installed, you can install the agent using the NPX command-line tool. The agent will run as a local system on Windows, and root on MacOS and Linux. You are free to change this to an unprivileged user, with access to its own profile data, and the network. The agent will store all configuration and packages inside the .openiap folder in the profile/home folder. -Running an agent as a daemon is a handy way to run agents/packages that need access outside Docker/Kubernetes. If you are running OpenFlow in the cloud and need to run code on-premise, or if OpenFlow is running on a separate VLAN, you can install an agent on a machine where a package needs access to data not all packages need access to. -Another use could be access to special or more powerful hardware.
Most things in OpenFlow require almost no resources, so it can be beneficial to offload heavy workloads to hardware you have already purchased, to save on cloud costs. Or maybe you need to run heavy Machine Learning or LLM training/inference, and decided to rent GPUs at a different cloud provider like RunPod, or need access to Google TPUs for TensorFlow workloads, but want to keep your Kubernetes cluster somewhere else or in a different region. +Running an agent as a daemon is a handy way to run agents/packages that need access outside Docker/Kubernetes. If you are running OpenCore in the cloud and need to run code on-premise, or if OpenCore is running on a separate VLAN, you can install an agent on a machine where a package needs access to data not all packages need access to. +Another use could be access to special or more powerful hardware. Most things in OpenCore require almost no resources, so it can be beneficial to offload heavy workloads to hardware you have already purchased, to save on cloud costs. Or maybe you need to run heavy Machine Learning or LLM training/inference, and decided to rent GPUs at a different cloud provider like RunPod, or need access to Google TPUs for TensorFlow workloads, but want to keep your Kubernetes cluster somewhere else or in a different region. ### Assistant The assistant application is designed to run on a user's desktop. This way, the user can run any package you have given them access to. The package will then run inside the assistant in the user's context, and has access to any files or application the user has. This can be handy for doing assisted RPA (Robotic Process Automation), but anything is possible in code. Common use cases would be loading/processing files, generating reports, or simply creating a library of handy scripts that can help the users be more productive. As in all other cases, you can easily share, update, monitor, and control each agent and packages. 
Though not the main use cases, you can also do both ad hoc runs, schedule, and start daemons remotely on an agent running inside an assistant, but be aware, the user is always able to close/kill the assistant or stop any running job. @@ -56,8 +56,8 @@ The package.json is used as a normal package.json for NodeJS/TypeScript packages - `main - The file to execute. This can be Python, PowerShell, or any other supported language and will also define how the agent prepares the environment before executing the main file. - `language` - Tells the agent which runtime to use when executing the code. - `typescript` - Not used at the moment, but is intended to be used when TypeScript has not been compiled and is run using ts-node. -- `daemon` - Is used by the agent and OpenFlow to determine if this is a never-ending process (like something listening on a port, or waiting on events). -- `chromium` - Used by OpenFlow to control which packages to show for an agent. Will only allow this package to run on agents that have a Chrome or Chromium browser. +- `daemon` - Is used by the agent and OpenCore to determine if this is a never-ending process (like something listening on a port, or waiting on events). +- `chromium` - Used by OpenCore to control which packages to show for an agent. Will only allow this package to run on agents that have a Chrome or Chromium browser. - `ports` - Define any ports this package might be listening on. - `If port is left empty, a random free port will be used. If the port is already in use, a new free port number will be used, and injected as an environment variable into the host using the portname. @@ -78,7 +78,7 @@ Older project examples did not use an environment.yaml. If the agent does not fi ### JupyterLab This is still in early beta. A special package has been created that allows provisioning multiple JupyterLab instances, each with its own separated Python environment, and automatically syncs any data inside the lab. 
-This is intended for data scientists and AI/LLM developers that need a well-known environment that also allows easy scaling and more easy access to data stored in OpenFlow. +This is intended for data scientists and AI/LLM developers that need a well-known environment that also allows easy scaling and easier access to data stored in OpenCore. Once matured more, this will most likely be migrated into the main node agent package, so it will become a standard feature of agents running anywhere in any environment. ### PowerShell @@ -94,6 +94,6 @@ Since Node-RED is such an important part of the solution, it deserves to be ment If all you want is to run Node-RED inside Docker/Kubernetes/OpenShift, you can simply select it from the dropdown list of images when creating an agent. This image is essentially a normal NodeAgent with the Node-RED package preloaded for faster startup times. -If you want to run what we previously referred to as "remote Node-REDs," you will now use NodeAgent instead of PM2. Simply fork or clone the [Node-RED agent](https://github.com/openiap/noderedagent) package and deploy it into your OpenFlow. You can now schedule that on any agent. +If you want to run what we previously referred to as "remote Node-REDs," you will now use NodeAgent instead of PM2. Simply fork or clone the [Node-RED agent](https://github.com/openiap/noderedagent) package and deploy it into your OpenCore instance. You can now schedule that on any agent.
diff --git a/docs/flow/Architecture.md b/docs/flow/Architecture.md index 7548d41..2af4033 100644 --- a/docs/flow/Architecture.md +++ b/docs/flow/Architecture.md @@ -1,25 +1,25 @@ --- layout: default title: Architecture -parent: What Is OpenIAP Flow +parent: What Is OpenCore nav_order: 5 --- ## Architecture -OpenFlow is an extendible stack, it's core components consist of a [MongoDB](https://www.mongodb.com/) database, it must be a replica set to allow support for change streams. -A stateless [RabbitMQ](https://www.rabbitmq.com/) (Can be run with durable queues and persistent storage.) OpenFlow API and web interface. +OpenCore is an extendible stack, it's core components consist of a [MongoDB](https://www.mongodb.com/) database, it must be a replica set to allow support for change streams. +A stateless [RabbitMQ](https://www.rabbitmq.com/) (Can be run with durable queues and persistent storage.) OpenCore API and web interface. If deployed in docker, you can spin up multiple NodeRED instances using the API or the web interface. -Different types of clients, [custom web](https://github.com/open-rpa/openflow-web-angular11-template) interfaces, [OpenRPA](https://github.com/open-rpa/openrpa) robots, PowerShell modules and remotely installed NodeRED's can then connect to the OpenFlow API using web sockets and receive events and data. Clients will use the API to register/publish queues and exchanges in order to add an extra layer of authentication and to simply for network requirements. All database access is exposed as natively close to the MongoDB driver, but with an added layer of security though the API. +Different types of clients, [custom web](https://github.com/open-rpa/openflow-web-angular11-template) interfaces, [OpenRPA](https://github.com/open-rpa/openrpa) robots, PowerShell modules and remotely installed NodeRED's can then connect to the OpenCore API using web sockets and receive events and data. 
Clients will use the API to register/publish queues and exchanges in order to add an extra layer of authentication and to simplify network requirements. All database access is exposed as natively close to the MongoDB driver as possible, but with an added layer of security through the API.
If you installed using the basic [docker-compose](DockerCompose) file, the setup would look something like this.
![openflow_traefik](architecture/openflow_with_traefik.png)
-For bigger installations we recommend using kubernetes, we supply an easy to get started with [helm chart](https://github.com/open-rpa/helm-charts/), that also supports very complex demands. Besides adding easy access for running geo distributed installation ( multiple data centers ) of a single OpenFlow install, it also adds more layers of security and much needed fault tolerance and scalability. This is usually also when we want to add better monitoring of the core components and support for designing graphs and dashboard based on data in OpenFlow.
+For bigger installations we recommend using Kubernetes; we supply an easy-to-get-started-with [helm chart](https://github.com/open-rpa/helm-charts/) that also supports very complex demands. Besides making it easy to run a geo-distributed installation ( multiple data centers ) of a single OpenCore install, it also adds more layers of security and much-needed fault tolerance and scalability. This is usually also when we want to add better monitoring of the core components and support for designing graphs and dashboards based on data in OpenCore.
![openflow_with_otel](architecture/openflow_with_monitoring.png)
-When running in high secured network, where you need to control the direction and priority the flow of data and events, OpenFlow can be deployed in mesh topologies.
+When running in highly secured networks, where you need to control the direction and priority of the flow of data and events, OpenCore can be deployed in mesh topologies.
This can also be useful when working in distributed networks where network outages can last for very long periods of time and the local storage of a remote NodeRED is not enough, or when you need access to the web interface and reports even when the network is down.
Running on Kubernetes requires a premium license, [see here](https://openiap.io/pricing) for more details.
\ No newline at end of file
diff --git a/docs/flow/Build-from-source.md b/docs/flow/Build-from-source.md
index bac20c5..43382c7 100644
--- a/docs/flow/Build-from-source.md
+++ b/docs/flow/Build-from-source.md
@@ -1,7 +1,7 @@
---
layout: default
title: Build from source
-parent: What Is OpenIAP Flow
+parent: What Is OpenCore
nav_order: 12
---
@@ -15,10 +15,10 @@ Install gulp and typescript globally
Clone this repo into a folder, in a shell type `git clone https://github.com/open-rpa/openflow.git`
-go to the folder with openflow
+go to the folder with OpenCore
`cd openflow`
-install packages for openflow api/web
+install packages for the OpenCore api/web
`npm i`
Now open in VS code
@@ -34,14 +34,14 @@ port=80
Next you need to allow powershell scripts to run; I don't know what the recommended setting is, I normally just go with bypass
`Set-ExecutionPolicy Bypass -Force`
-Now you can run this by going to run ( Ctrl+Shit+D) and selecting OpenFlow in the dropdown box and press play button.
+Now you can run this by going to Run ( Ctrl+Shift+D ), selecting OpenCore in the dropdown box and pressing the play button.
This will serve an empty webpage, so we need to build the stylesheets and copy the compiled files to the dist folder; go to the Terminal tab, add a new shell, then type
`npm run sass`
Lastly we can bundle and minify the assets to the dist folder, by typing
`gulp watch`
-You can now access openflow web on [http://localhost.openiap.io](http://localhost.openiap.io)
+You can now access the OpenCore web interface on [http://localhost.openiap.io](http://localhost.openiap.io)
#### Building docker image
From version 1.5 the docker images are built using build images, so simply run
diff --git a/docs/flow/ClientAuthPage.md b/docs/flow/ClientAuthPage.md
index 0a8d71a..fc8ade5 100644
--- a/docs/flow/ClientAuthPage.md
+++ b/docs/flow/ClientAuthPage.md
@@ -1,6 +1,6 @@
---
layout: default
title: Managing Client Authentication
-parent: What Is OpenIAP Flow
+parent: What Is OpenCore
nav_order: 8
---
diff --git a/docs/flow/ConfigurationValues.md b/docs/flow/ConfigurationValues.md
index a5adaa3..967c294 100644
--- a/docs/flow/ConfigurationValues.md
+++ b/docs/flow/ConfigurationValues.md
@@ -1,12 +1,12 @@
---
layout: default
title: Configuration Values
-parent: What Is OpenIAP Flow
+parent: What Is OpenCore
nav_order: 14
---
# Configuration settings
-This document outlines the configuration options available for OpenIAP flow.
+This document outlines the configuration options available for OpenCore.
These settings are set through environment variables.
- **Docker**: set in your docker-compose file when using [docker](DockerCompose) installations.
- **Kubernetes**: set in your values file when using the [helm chart](Kubernetes).
@@ -19,7 +19,7 @@ Open the docker-compose file you are using, find the `api` service, usaly places
under `environment:` you will find the most common settings, but when relevant you can add some of those listed below.
# Kubernetes
-If you followed the guide on the [helm chart}(Kubernetes) you should have a values file, you use then updating openflow.
+If you followed the guide on the [helm chart](Kubernetes) you should have a values file, which you use when updating OpenCore.
Kubernetes uses the same values, but they are defined differently, so please refer to this file for details: [document here](https://github.com/open-rpa/helm-charts/blob/main/charts/openflow/values.yaml).
# .env file
@@ -28,7 +28,7 @@ If you follow the building from [source code](Build-from-source) guide, you will
# Database base config object.
Enforcing values set using any of the above methods requires restarting the api nodes. Almost all variables can also be overridden using an object in the database. Manually create an object of "_type": "config" inside the config collection, or open the [Console page](https://app.openiap.io/#/Console) and check "enabled streaming".
-Now you manually add one of more of the below values to the object, to emeiadtly override that value. Be aware, this means if you make an mistake you will manully have to find a way to update the database to remove/change it, if you make a mistake that will make openflow unable to start/reload.
+Now you can manually add one or more of the below values to the object to immediately override that value. Be aware, this means that if you make a mistake that leaves OpenCore unable to start/reload, you will manually have to find a way to update the database to remove/change it.
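As an illustration of the override object described above: `"_type": "config"` comes from the text, the example keys are borrowed from the settings list in this document, and the exact document shape is an assumption, not the authoritative schema.

```javascript
// Hypothetical override document for the "config" collection.
// "_type" must be "config"; the remaining keys mirror environment
// variable names and override them when loaded from the database.
const configOverride = {
  _type: "config",
  log_with_colors: false, // example override: disable colored console output
  max_ace_count: 256,     // example override: raise the ACE overflow limit
};

// In mongosh you could insert it roughly like this (not executed here):
//   db.config.insertOne(configOverride)
console.log(JSON.stringify(configOverride));
```

Remember the warning above: a bad value placed here must be removed directly in the database before the api can start again.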
```bash
log_with_colors= # Default: true - Use colors in the console output, can be an issue for certain types of log collectors
@@ -39,12 +39,12 @@ domain= # Default: localhost.openiap.io - sent to website and used in baseurl()
cookie_secret= # Used to protect cookies
max_ace_count= # Default: 128 - Discard overflow aces if an _acl has more than 128 entries
-saml_issuer= # Default: the-issuer - Normal set to uri:api-domain of openflow
+saml_issuer= # Default: the-issuer - Normally set to uri:api-domain of OpenCore
aes_secret= # encryption key used for user passwords and encrypting data specified in _encrypt
-# Signing certificate used for SAML token issued by openflow
+# Signing certificate used for SAML tokens issued by OpenCore
signing_crt=
singing_key=
-# WAP token and email used for OpenFlow's WebPush service
+# VAPID keys and email used for OpenCore's WebPush service
wapid_mail=
wapid_pub=
wapid_key=
@@ -110,9 +110,9 @@ NODE_ENV= # Default: development - development or production. Optimize and less
HTTP_PROXY= # OS specific, used to set PROXY settings for the api node
HTTPS_PROXY= # OS specific, used to set PROXY settings for the api node
NO_PROXY= # OS specific, used to set PROXY settings for the api node
-agent_HTTP_PROXY= # Set HTTP_PROXY for all agent's started using openflow
-agent_HTTPS_PROXY= # Set HTTPS_PROXY for all agent's started using openflow
-agent_NO_PROXY= # Set NO_PROXY for all agent's started using openflow
+agent_HTTP_PROXY= # Set HTTP_PROXY for all agents started using OpenCore
+agent_HTTPS_PROXY= # Set HTTPS_PROXY for all agents started using OpenCore
+agent_NO_PROXY= # Set NO_PROXY for all agents started using OpenCore
stripe_api_key= # If resource broker has been configured and you have a stripe account, set this and stripe_api_secret to enable online payments
stripe_api_secret= # If resource broker has been configured and you have a stripe account, set this and stripe_api_key to enable online payments
@@ -137,8 +137,8 @@
client_disconnect_signin_error= # Default: false - Send error to client when dis
use_ingress_beta1_syntax= # Default: false - For old kubernetes installations, use beta1 syntax ?
use_openshift_routes= # Default: false - on openshift we use routes and not traefik as ingress controller
agent_image_pull_secrets= # If using a custom image repository that requires authentication, like Harbor, set the secret here
-auto_create_personal_nodered_group= # Default: false - Backwward compability with 1.4, allow openflow to autocrete nodered admin roles for all new users
-auto_create_personal_noderedapi_group= # Default: false - Backwward compability with 1.4, allow openflow to autocrete nodered api roles for all new users, require auto_create_personal_nodered_group to be true as well
+auto_create_personal_nodered_group= # Default: false - Backward compatibility with 1.4, allow OpenCore to auto-create nodered admin roles for all new users
+auto_create_personal_noderedapi_group= # Default: false - Backward compatibility with 1.4, allow OpenCore to auto-create nodered api roles for all new users, requires auto_create_personal_nodered_group to be true as well
force_add_admins= # Default: true - Force adding admins role with full control to all objects in the database.
# Default: false - Allow non-federated user to get a reset password link sent.
@@ -196,18 +196,18 @@ allow_merge_acl= # Default: false - merge acls by combining bits for all aces wi
multi_tenant=
enable_guest= # Default: false - Allow issuing guest tokens.
The guest user is not a member of users and can be used by applications for anonymous access
-enable_gitserver= # Default: false - Enable git api, to allow using openflow as a git server from /git endpoint
+enable_gitserver= # Default: false - Enable the git api, allowing OpenCore to be used as a git server from the /git endpoint
enable_gitserver_guest= # Default: false - Allow anonymous access to /git, and allows using guest as part of the ACL on branches
enable_gitserver_guest_create= # Default: false - Allow anonymous users to create new git repositories ( this will then allow anyone to update them )
cleanup_on_delete_customer= # Default: false - Try to auto delete all associated data and users/roles when deleting a customer. Beware!
cleanup_on_delete_user= # Default: false - Try to auto delete all associated data and users/roles when deleting a user. Beware! ( force hard delete )
api_bypass_perm_check= # Default: false - Completely disable **ALL** permission checks, allowing anyone to see and do everything
-disable_db_config= # Default: false - Stop loading config from the database. Usefull when openflow will not start due to bad config in database
+disable_db_config= # Default: false - Stop loading config from the database. Useful when OpenCore will not start due to bad config in the database
force_audit_ts= # Default: false - Force audit collection as a timeseries collection, if one exists it will be renamed
force_dbusage_ts= # Default: false - Force dbusage collection as a timeseries collection, if one exists it will be renamed
migrate_audit_to_ts= # Default: true - If an old version of audit exists, migrate old data to the new one and then delete it. This can take a LOOOONGGG time.
-# OpenFlow version 1 settings, only relevant for old angularjs webinterface and OpenRPA clients
+# OpenCore version 1 settings, only relevant for the old AngularJS web interface and OpenRPA clients
websocket_package_size= # Default: 25000
websocket_max_package_count= # Default: 1048576
websocket_message_callback_timeout= # Default: 3600
@@ -282,7 +282,7 @@ otel_trace_url= # Custom Open Telemetry exporter trace URL
otel_metric_url= # Custom Open Telemetry exporter metric URL
otel_trace_interval= # Default: 5000
otel_metric_interval= # Default: 5000
-otel_trace_pingclients= # Default: false - add trace for each ping clients in openflow
+otel_trace_pingclients= # Default: false - add a trace for each client ping in OpenCore
otel_trace_dashboardauth= # Default: false - add trace for dashboardauth events
otel_trace_include_query= # Default: false - include query in spans
otel_trace_connection_ips= # Default: false - track connection requests per ip address
@@ -310,7 +310,7 @@ validate_user_form= # User form validation configuration
license_key=
enable_openapi= # Default: true - Enable generic OpenAPI endpoint. Requires a valid license
enable_grafanaapi= # Default: true - Enable grafana endpoint used by the openiap flow data source in grafana. Requires a valid license
-grafana_url= # Enable a grafana link in the openflow web interface that link's to this URL.
+grafana_url= # Enable a grafana link in the OpenCore web interface that links to this URL.
enable_gitserver= # Default: false - Enable git server at /git.
Requires a valid license
enable_gitserver_guest= # Default: false - Enable guest access to the git server at /git ( you can add guests to repos and allow them to read and/or update them )
enable_gitserver_guest_create= # Default: false - Enable guest access to the git server at /git ( allow guests to push new repositories )
diff --git a/docs/flow/Creating-a-New-User.md b/docs/flow/Creating-a-New-User.md
index 06b7e35..0a6c6de 100644
--- a/docs/flow/Creating-a-New-User.md
+++ b/docs/flow/Creating-a-New-User.md
@@ -1,12 +1,12 @@
---
layout: default
title: Creating a New User
-parent: What Is OpenIAP Flow
+parent: What Is OpenCore
nav_order: 1
---
# Creating a New User
-To add a new user to your OpenFlow environment, start by selecting `Users` from the main menu. Then, click on the `Add User` button located in the top right corner of the screen.
+To add a new user to your OpenCore environment, start by selecting `Users` from the main menu. Then, click on the `Add User` button located in the top right corner of the screen.
![Add User Button](Add-User-Button.png)
diff --git a/docs/flow/CreatingForms.md b/docs/flow/CreatingForms.md
index 3e40a08..6040346 100644
--- a/docs/flow/CreatingForms.md
+++ b/docs/flow/CreatingForms.md
@@ -1,11 +1,12 @@
---
+
layout: default
title: Creating Web Forms
-parent: What Is OpenIAP Flow
+parent: What Is OpenCore
nav_order: 14
---
# Creating Web Forms
-Forms are a user-friendly way of passing input to a workflow by creating dynamic OpenFlow's webpages. There are two ways of generating a Form: one of them is through OpenFlow's automatically generated Forms which are created upon saving a Workflow into its repository and the other one is manually creating a Form and connecting it to a Node-RED workflow.
+Forms are a user-friendly way of passing input to a workflow by creating dynamic OpenCore webpages.
There are two ways of generating a Form: one is through OpenCore's automatically generated Forms, which are created upon saving a Workflow into its repository, and the other is manually creating a Form and connecting it to a Node-RED workflow.
For thorough information on how to use Forms, please refer to [form.io Intro](https://help.form.io/userguide/introduction/) (`https://help.form.io/userguide/introduction/`). Most of this chapter is based on this guide.
@@ -18,7 +19,7 @@ Creating a form is rather easy and simple. Go to the Forms page, where all Forms
Now at the Forms edit page, there are many Form components from which you can choose. For general purposes, we are only going to discuss the most used one here: the Text Field Component. The other ones will be discussed further on in their specific sections.
-![OpenFlow's Forms creation page](CreatingForms/Create-Form.png)
+![OpenCore's Forms creation page](CreatingForms/Create-Form.png)
Drag the Text Field Component Form from the Basic category into the Form workspace. Immediately after, a window containing all the parameters to configure the Form Component will appear.
@@ -33,13 +34,13 @@ Below are the steps needed to properly configure a Form Component, a TextField i
To change the Form's label, i.e., the title which will appear for the end-user, simply click the Display tab and change the input form titled **Label**. The changes are shown in real time.
-![OpenFlow's Form Display configuration tab](../../images/openflow_form_label_config_page.png)
+![OpenCore's Form Display configuration tab](../../images/openflow_form_label_config_page.png)
- **Assigning Input Variable**
- To assign the input form to a variable configured inside the OpenRPA workflow you've mapped to OpenFlow, simply go to the API tab and insert the name of the variable inside **Property Name** and press save. Now the next time this workflow is called, a new parameter will appear.
+ To assign the input form to a variable configured inside the OpenRPA workflow you've mapped to OpenCore, simply go to the API tab and insert the name of the variable inside **Property Name** and press save. Now the next time this workflow is called, a new parameter will appear.
-![OpenFlow's Form API configuration tab](../../images/openflow_text_field_api_config.png)
+![OpenCore's Form API configuration tab](../../images/openflow_text_field_api_config.png)
- **Assigning Form to Node-RED Workflow**
@@ -47,7 +48,7 @@
![Assigning Form to Node-RED workflow](../../images/openflow_node_red_configure_form.png)
-For more information on how to configure the Form Component, please refer to the OpenFlow Forms section.
+For more information on how to configure the Form Component, please refer to the OpenCore Forms section.
... [Further sections continue with detailed explanations and associated images] ...
diff --git a/docs/flow/DockerCompose.md b/docs/flow/DockerCompose.md
index 9582ea9..c007fa4 100644
--- a/docs/flow/DockerCompose.md
+++ b/docs/flow/DockerCompose.md
@@ -1,7 +1,7 @@
---
layout: default
title: Install using docker-compose
-parent: What Is OpenIAP Flow
+parent: What Is OpenCore
nav_order: 4
---
## Getting started
@@ -62,7 +62,7 @@
You will need to run `./normal-up.sh` after running `./normal-pull.sh`
You can delete all data by first running `./normal-down.sh` and then `./remove-data.sh` ( you must stop all agents manually first )
You can access the RabbitMQ Admin Interface at http://mq.localhost.openiap.io
-Each agent started inside openflow, will be listening at username.localhost.openiap.io
+Each agent started inside OpenCore will be listening at username.localhost.openiap.io
### Premium demo version
@@ -89,9 +89,9 @@ If enabled in the yml file, you can also access
1. Access Grafana at http://grafana.localhost.openiap.io
2.
Access RabbitMQ Admin Interface at http://mq.localhost.openiap.io
-3. Each agent started inside openflow, will be listening at username.localhost.openiap.io
+3. Each agent started inside OpenCore will be listening at username.localhost.openiap.io
-### Openflow with SSL using lets enrypt
+### OpenCore with SSL using Let's Encrypt
[docker-compose-letsencrypt.yml](https://github.com/open-rpa/docker/blob/master/docker-compose-letsencrypt.yml) is the "plain" version, but with traefik configured to request certificates using Let's Encrypt.
@@ -105,7 +105,7 @@ If enabled in the yml file, you can also access
1. Access MongoDB Web Editor at http://express.localhost.openiap.io
2. Access RabbitMQ Admin Interface at http://mq.localhost.openiap.io
-3. Each agent started inside openflow, will be listening at username.localhost.openiap.io
+3. Each agent started inside OpenCore will be listening at username.localhost.openiap.io
### Using custom port
This setup does not support using a custom port. Only port 80 or 443 is supported.
@@ -116,25 +116,25 @@ All examples use localhost.openiap.io for domain. This domain points to your loc
First find the IP of your machine. If used on the local network only, use the IP of the machine with docker; if you are in the cloud, use the public IP given to that machine.
-You need to add 2 DNS record at your DNS provider, one for OpenFlow it self, and and for all the services under that OpenFlow ( MQ, agent's, etc. )
+You need to add 2 DNS records at your DNS provider: one for OpenCore itself, and one for all the services under that OpenCore ( MQ, agents, etc. )
-First add one A record for OpenFlow, pointing to the IP of the docker host. ( in this example your domain is mydomain.com )
+First add one A record for OpenCore, pointing to the IP of the docker host.
( in this example your domain is mydomain.com )
```
-openflow A 10.0.1.1
+opencore A 10.0.1.1
```
Next add a wildcard * record for all the services exposed from that instance, as a CNAME pointing to the instance
```
-* CNAME openflow.mydomain.com.
+* CNAME opencore.mydomain.com.
```
( a few DNS providers do not allow creating wildcard records using CNAME; in that case use an A record pointing to the same IP )
Once complete, open the docker compose file
-- add an environment with the name `domain` and the value of domain you chose ( in the above example `openflow.mydomain.com`)
-- Do a search and replace for `localhost.openiap.io` and replace it with the domain you choise ( in the above example `openflow.mydomain.com`)
+- add an environment variable with the name `domain` and the value of the domain you chose ( in the above example `opencore.mydomain.com`)
+- Do a search and replace for `localhost.openiap.io` and replace it with the domain you chose ( in the above example `opencore.mydomain.com`)
### Troubleshooting tips
diff --git a/docs/flow/Enable-Multi-Tenancy.md b/docs/flow/Enable-Multi-Tenancy.md
index 1f36724..4ab1ab9 100644
--- a/docs/flow/Enable-Multi-Tenancy.md
+++ b/docs/flow/Enable-Multi-Tenancy.md
@@ -1,7 +1,7 @@
---
layout: default
title: Enable Multi-Tenancy
-parent: What Is OpenIAP Flow
+parent: What Is OpenCore
nav_order: 3
---
## Enable Multi-Tenancy
diff --git a/docs/flow/EntitiesPage.md b/docs/flow/EntitiesPage.md
index 4230b65..73a2904 100644
--- a/docs/flow/EntitiesPage.md
+++ b/docs/flow/EntitiesPage.md
@@ -1,16 +1,16 @@
---
layout: default
title: Entities Page
-parent: What Is OpenIAP Flow
+parent: What Is OpenCore
nav_order: 14
---
# Entities Page
What are Entities?
==================
-Entities are groups of data that compose a meaningful object inside OpenFlow - i.e., a workflow, a workflow instance, a user, etc.
These groups of data are stored as [Documents](https://docs.mongodb.com/manual/core/document/) inside [Collections](https://docs.mongodb.com/manual/core/databases-and-collections/#collections) in MongoDB. `Collections` are analogous to tables in relational databases. Think of an `Entity` as a row inside a relational table. In layman terms, a `Collection` would correspond to a category inside your phonebook and an `Entity` would correspond to a single entry. Please check below for more on `Collections`, [Collections](#collections).
+Entities are groups of data that compose a meaningful object inside OpenCore - i.e., a workflow, a workflow instance, a user, etc. These groups of data are stored as [Documents](https://docs.mongodb.com/manual/core/document/) inside [Collections](https://docs.mongodb.com/manual/core/databases-and-collections/#collections) in MongoDB. `Collections` are analogous to tables in relational databases. Think of an `Entity` as a row inside a relational table. In layman's terms, a `Collection` would correspond to a category inside your phonebook and an `Entity` would correspond to a single entry. Please check below for more on `Collections`: [Collections](#collections).
-In OpenFlow, these `Collections` are grouped by their name inside the [Entities page](https://app.openiap.io/#/Entities/entities). Currently, there exist 10 groups, listed below.
+In OpenCore, these `Collections` are grouped by their name inside the [Entities page](https://app.openiap.io/#/Entities/entities). Currently, there exist 10 groups, listed below.
Collections
===========
@@ -27,11 +27,11 @@ Note you cannot delete agents or packages here, you need to use the [Agents page
audit
-----
-This collection contains data on all authentication and orchestration actions attempts inside OpenFlow.
+This collection contains data on all authentication and orchestration action attempts inside OpenCore.
config
------
-Contains all configuration objects releated to openflow.
+Contains all configuration objects related to OpenCore.
This includes, but is not limited to, [Federation Providers](FederationProviders), [Client Authentication Providers](ClientAuthPage), [Resource Broker](ResourcePage) and the [Base Configuration](ConfigurationValues)
@@ -45,13 +45,13 @@ Here lie all instances of workflows invoked through **OpenRPA**. Each instance w
entities
--------
-This collection contains all objects serialized into OpenFlow by using the **OpenRPA.OpenFlowDB** activities.
+This collection contains all objects serialized into OpenCore by using the **OpenRPA.OpenFlowDB** activities.
users
-----
-This collection contains all users and roles automatically created by Node-RED or through OpenFlow itself.
+This collection contains all users and roles automatically created by Node-RED or through OpenCore itself.
workflow
--------
@@ -63,7 +63,7 @@ Here lie all instances of workflows invoked through the [Workflows page](http://
forms
-----
-This collection holds all Forms created inside OpenFlow.
+This collection holds all Forms created inside OpenCore.
nodered
-------
diff --git a/docs/flow/FederationProviders.md b/docs/flow/FederationProviders.md
index ec8d6f1..5352d24 100644
--- a/docs/flow/FederationProviders.md
+++ b/docs/flow/FederationProviders.md
@@ -1,14 +1,14 @@
---
layout: default
title: Manage Sign in Providers
-parent: What Is OpenIAP Flow
+parent: What Is OpenCore
nav_order: 8
---
# Providers
The providers page is where you decide how users can access the site and NodeRED.
-Right now OpenFlow supports 3 ways of signing in
+Right now OpenCore supports the following ways of signing in:
## Local login
@@ -20,8 +20,8 @@ WS-Federation, often used in conjunction with SAML tokens, is a well know and se
## OAuth 2.0
-OAuth is a widely used authentication protocol, but for many years venders could decide on a specific standard there fore you will find most providers have slightly different implementations.
OpenFlow's implementation have been tested against Microsoft [Azure AD](https://azure.microsoft.com/en-us/services/active-directory) and Google Suit/GoogleID others may work as well.
+OAuth is a widely used authentication protocol, but for many years vendors could each decide on their own specifics, therefore you will find most providers have slightly different implementations. OpenCore's implementation has been tested against Microsoft [Azure AD](https://azure.microsoft.com/en-us/services/active-directory) and Google Suite/Google ID; others may work as well.
## OpenID Connect
-[OpenID connect](https://openid.net/connect/) is considered the "next thing" after OAuth. OpenFlow's implementation have been tested against [Azure AD](https://azure.microsoft.com/en-us/services/active-directory), other may work as well.
\ No newline at end of file
+[OpenID Connect](https://openid.net/connect/) is considered the "next thing" after OAuth. OpenCore's implementation has been tested against [Azure AD](https://azure.microsoft.com/en-us/services/active-directory); others may work as well.
\ No newline at end of file
diff --git a/docs/flow/History.md b/docs/flow/History.md
index b09ff24..c47d0a4 100644
--- a/docs/flow/History.md
+++ b/docs/flow/History.md
@@ -1,12 +1,12 @@
---
layout: default
title: History/versioning
-parent: What Is OpenIAP Flow
+parent: What Is OpenCore
nav_order: 6
---
## Versioning
-OpenFlow has a simple versioning system built in. Using [jsondiff](https://github.com/benjamine/jsondiffpatch) and/or full copy of object you can always go back in time and see who changed what, and when and easily revert back and forth to different versions.
+OpenCore has a simple versioning system built in. Using [jsondiff](https://github.com/benjamine/jsondiffpatch) and/or a full copy of the object, you can always go back in time, see who changed what and when, and easily revert back and forth between different versions.
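The jsondiff-plus-full-copy idea described above can be illustrated with a toy delta. This is a deliberately simplified sketch, not the actual jsondiffpatch delta format used by the product:

```javascript
// Toy versioning sketch: compute a delta of [oldValue, newValue] pairs
// so an object can later be reverted to an earlier version.
function diff(oldObj, newObj) {
  const delta = {};
  for (const key of new Set([...Object.keys(oldObj), ...Object.keys(newObj)])) {
    if (oldObj[key] !== newObj[key]) delta[key] = [oldObj[key], newObj[key]];
  }
  return delta;
}

function revert(obj, delta) {
  const restored = { ...obj };
  for (const [key, pair] of Object.entries(delta)) {
    if (pair[0] === undefined) delete restored[key]; // key did not exist before
    else restored[key] = pair[0];                    // restore the old value
  }
  return restored;
}

const v1 = { name: "invoice-flow", _version: 1 };
const v2 = { name: "invoice-flow-v2", _version: 2 };
const delta = diff(v1, v2); // who-changed-what metadata would be stored alongside
console.log(revert(v2, delta)); // { name: 'invoice-flow', _version: 1 }
```

Storing such a delta per save is what makes "go back in time" cheap; a periodic full copy bounds how many deltas must be replayed.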
Go to the object page, select the collection you are interested in and click the history icon; you will then be presented with a list of changes to that object.
diff --git a/docs/flow/Kubernetes.md b/docs/flow/Kubernetes.md
index 54e0a93..c3cfa5a 100644
--- a/docs/flow/Kubernetes.md
+++ b/docs/flow/Kubernetes.md
@@ -1,23 +1,24 @@
---
+
layout: default
title: Running on Kubernetes
-parent: What Is OpenIAP Flow
+parent: What Is OpenCore
nav_order: 10
---
# Kubernetes
-OpenFlow was designed to run on [kubernetes](https://kubernetes.io). You can still deployed in other ways, but for most production setups, that is the recommend platform to run it on, at least for the primary site.
+OpenCore was designed to run on [kubernetes](https://kubernetes.io). It can still be deployed in other ways, but for most production setups, that is the recommended platform to run it on, at least for the primary site.
We use helm to deploy to kubernetes, so first install [helm](https://github.com/helm/helm/releases); simply drop it somewhere and add it to the path, so you can reference it from cmd/powershell.
Also, make sure you have [kubectl](https://kubernetes.io/docs/tasks/tools/) installed and configured to access your kubernetes cluster.
-> note: OpenIAP flow requires a premium license to run on Kubernetes. [Read mere here](https://openiap.io/pricing)
+> note: OpenCore requires a premium license to run on Kubernetes. [Read more here](https://openiap.io/pricing)
-OpenFlow depends on [traefik](https://doc.traefik.io/traefik/v1.7/user-guide/kubernetes/) as ingress controller. It's beyond the scope of this guide on how to install this in non-clouded environments, but if you are using GKE, aWS, Azure, Alibaba or some of the other cloud providers that has out of the box external loadbalencers, you can simpy deploy trafik with the service with type: LoadBalancer, and from here on everything "just works".
+OpenCore depends on [traefik](https://doc.traefik.io/traefik/v1.7/user-guide/kubernetes/) as ingress controller. It is beyond the scope of this guide to explain how to install this in non-cloud environments, but if you are using GKE, AWS, Azure, Alibaba or one of the other cloud providers that has out-of-the-box external load balancers, you can simply deploy traefik with the service type: LoadBalancer, and from here on everything "just works".
You can find an example of how to deploy traefik using helm on [this page](https://github.com/open-rpa/helm-charts/tree/main/traefik-example)
I also go through this process in the video
-[![Configuring Openflow on Kubernetes](https://img.youtube.com/vi/onI_9JIAKbM/1.jpg)](https://youtu.be/onI_9JIAKbM)
+[![Configuring OpenCore on Kubernetes](https://img.youtube.com/vi/onI_9JIAKbM/1.jpg)](https://youtu.be/onI_9JIAKbM)
So first we need to add OpenIAP's helm repo and update this and other repos you might have installed
@@ -26,22 +27,22 @@
helm repo add openiap https://open-rpa.github.io/helm-charts/
helm repo update
```
-Next create a values file. To avoid confusen i recomend you name this file the same as your namespace and the "instance" you are creating. So imaging you want to deploy an openflow instance responding to demo.mydomain.com then create a file named demo.yaml
+Next create a values file. To avoid confusion I recommend you name this file the same as your namespace and the "instance" you are creating. So imagine you want to deploy an OpenCore instance responding to demo.mydomain.com, then create a file named demo.yaml
-There is a ton of different settings you can fine tune, you can always find all the settings in the openflow [values file here](https://github.com/open-rpa/helm-charts/blob/main/charts/openflow/values.yaml) but you only need to add the values you want to override.
So as a good starting point, add the following to your demo.yaml file +There are a ton of different settings you can fine-tune; you can always find all the settings in the OpenCore [values file here](https://github.com/open-rpa/helm-charts/blob/main/charts/openflow/values.yaml) but you only need to add the values you want to override. So as a good starting point, add the following to your demo.yaml file ```yaml -# this will be the root domain name hence your openflow url will now be http://demo.mydomain.com +# this will be the root domain name hence your OpenCore url will now be http://demo.mydomain.com domainsuffix: mydomain.com # this will be added to all domain names domain: demo # if using a reverse proxy that adds ssl, uncomment below line. # protocol: https -openflow: +openflow: # external_mongodb_url: mongodb+srv://user:pass@cluster0.gcp.mongodb.net?retryWrites=true&w=majority rabbitmq: default_pass: supersecret # if you are using mongodb atlas, or have mongodb running somewhere else -# uncomment below line, and external_mongodb_url in openflow above +# uncomment below line, and external_mongodb_url in openflow above # mongodb: # enabled: false ``` @@ -51,7 +52,7 @@ So first we need to create a namespace. Namespaces allow us to segregate multipl ``` sh kubectl create namespace demo ``` -and now we can create our first openflow installation inside that namespace +and now we can create our first OpenCore installation inside that namespace ``` sh helm install openflow openiap/openflow -n demo --values ./demo.yaml ``` @@ -65,11 +66,11 @@ Utilizing multiple node pools [![Distributing workloads with nodepools](https://img.youtube.com/vi/06OmsoV-AgM/1.jpg)](https://youtu.be/06OmsoV-AgM) -After install, this will help you getting started with monitoring (premium openflow only!) +After install, this will help you get started with monitoring (premium OpenCore only!)
[![Configuring Reporting and Monitoring](https://img.youtube.com/vi/cyseDpnects/1.jpg)](https://youtu.be/cyseDpnects) -Performance tuning and/or troubleshooting workflows or the platform (premium openflow only!) +Performance tuning and/or troubleshooting workflows or the platform (premium OpenCore only!) [![Collecting spans and custom metrics](https://img.youtube.com/vi/wlErCAJX52E/1.jpg)](https://youtu.be/wlErCAJX52E) diff --git a/docs/flow/Managing-Roles.md b/docs/flow/Managing-Roles.md index dbde73e..ae21f01 100644 --- a/docs/flow/Managing-Roles.md +++ b/docs/flow/Managing-Roles.md @@ -1,30 +1,30 @@ --- layout: default -title: Managing Roles in OpenIAP Flow -parent: What Is OpenIAP Flow +title: Managing Roles in OpenCore +parent: What Is OpenCore nav_order: 2 --- -# Managing Roles in OpenIAP Flow +# Managing Roles in OpenCore ## What are Roles? -Roles in **OpenIAP Flow** are sets of privileges and permissions that can be assigned to users or other roles. These roles are crucial for controlling access to various types of data within OpenIAP Flow, including projects/workflows and queues. +Roles in **OpenCore** are sets of privileges and permissions that can be assigned to users or other roles. These roles are crucial for controlling access to various types of data within OpenCore, including projects/workflows and queues. ## Nested Roles -In OpenIAP Flow, roles can be nested up to three levels by default, although this limit can be adjusted by the system administrator. Nested roles allow for a more granular and hierarchical organization of permissions and access rights. +In OpenCore, roles can be nested up to three levels by default, although this limit can be adjusted by the system administrator. Nested roles allow for a more granular and hierarchical organization of permissions and access rights. ## RPA Roles Roles can also be designated as OpenRPA roles.
These special roles are used to group multiple OpenRPA robots, enabling the distribution of Invoke OpenRPA calls among all members. This maximizes robot utilization and allows for multiple workflows to be run in parallel. -## List of Built-in Roles in OpenIAP Flow -OpenIAP Flow includes several built-in roles, each with specific permissions: -- **users**: Represents all users created in OpenIAP Flow. -- **admins**: Members can manage all aspects of OpenIAP Flow. +## List of Built-in Roles in OpenCore +OpenCore includes several built-in roles, each with specific permissions: +- **users**: Represents all users created in OpenCore. +- **admins**: Members can manage all aspects of OpenCore. - **customer admins**: Members can manage all customers, if multi-tenancy is enabled - **resellers**: Members (who are also customer admins) can create new customers. - **workitem queue admins**: Members can access all workitem queues and all workitems in those queues - **workitem queue users**: Members can create new workitem queues -- **filestore users**: Can upload and download files in OpenIAP Flow. -- **filestore admins**: Have full control over all files uploaded in OpenIAP Flow. +- **filestore users**: Can upload and download files in OpenCore. +- **filestore admins**: Have full control over all files uploaded in OpenCore. - **nodered admins**: Allows members to log into Node-RED instances. - **nodered api users**: Allows members to access any http endpoint exposed from all Node-RED workflows. diff --git a/docs/flow/Offline-Proxy-Server.md b/docs/flow/Offline-Proxy-Server.md index 0e44e37..35d66a5 100644 --- a/docs/flow/Offline-Proxy-Server.md +++ b/docs/flow/Offline-Proxy-Server.md @@ -6,16 +6,16 @@ nav_order: 8 --- ###### Used offline -OpenFlow ( and NodeRED and OpenRPA ) can run completely without internet, but it does requires some preparation. +OpenCore ( and NodeRED and OpenRPA ) can run completely without internet, but it does require some preparation.
And the preparations heavily depend on whether you are using docker/Kubernetes or using NPM packages. You will need to install a local docker repository, a local NPM repository and in some cases a local NodeRED catalog. Those will get documented at a later time; please reach out to [openiap](https://openiap.io) if you want consulting on how to do this. -Running OpenFlow offline and some MQTT stuff +Running OpenCore offline and some MQTT stuff -[![Running OpenFlow offline and some MQTT stuff](https://img.youtube.com/vi/r_aEHZMSICE/0.jpg)](https://www.youtube.com/watch?v=r_aEHZMSICE) +[![Running OpenCore offline and some MQTT stuff](https://img.youtube.com/vi/r_aEHZMSICE/0.jpg)](https://www.youtube.com/watch?v=r_aEHZMSICE) #### Notes when used behind a proxy server or without internet -If you are testing OpenFlow using NPM packages or docker and are behind a proxy, make sure to add an HTTPS_PROXY and HTTP_PROXY global/machine level environment variable for your proxy server. If you get an error about accessing localhost.openiap.io also add NO_PROXY with the value: localhost.openiap.io +If you are testing OpenCore using NPM packages or docker and are behind a proxy, make sure to add HTTPS_PROXY and HTTP_PROXY global/machine-level environment variables for your proxy server. If you get an error about accessing localhost.openiap.io, also add NO_PROXY with the value: localhost.openiap.io -If using docker, you need to add those 2 (or 3) variables to the docker-compose file for the web instance, if using the helm chart, you need to add them to your values file under OpenFlow. This will then re-add those for all NodeRED's started, so NPM can pickup the proxy settings. +If using docker, you need to add those 2 (or 3) variables to the docker-compose file for the web instance; if using the helm chart, you need to add them to your values file under the `openflow` section. This will then re-add them for all NodeREDs started, so NPM can pick up the proxy settings.
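As a sketch, the docker-compose change described above might look like this (the service name `web` and the proxy address are placeholder assumptions; substitute your own):

```yaml
services:
  web:
    environment:
      # point these at your own proxy server
      - HTTP_PROXY=http://proxy.example.com:3128
      - HTTPS_PROXY=http://proxy.example.com:3128
      # only needed if you see errors about accessing localhost.openiap.io
      - NO_PROXY=localhost.openiap.io
```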
diff --git a/docs/flow/ProtocolDetails.md b/docs/flow/ProtocolDetails.md index 32e058d..7b7fdd0 100644 --- a/docs/flow/ProtocolDetails.md +++ b/docs/flow/ProtocolDetails.md @@ -1,10 +1,10 @@ --- layout: default title: OpenIAP network protocol -parent: What Is OpenIAP Flow +parent: What Is OpenCore nav_order: 9 --- -# Details about the Network Protocol used by OpenIAP Flow +# Details about the Network Protocol used by OpenCore #### Intro Before 1.5 OpenIAP Flow only supported websockets ( and way way back rest+OData Moving forward it will support the old websocket protocol and the new protocol as described below. This is to keep supporting older versions of OpenRPA and NodeRED. At some point the old version might be removed, but for now it's the plan to keep it for backward compatibility. -Moving forward we will now use protobuf as the base protocol. This allows us to ensure the same look and feel undependenly of the programming language used for communicating with OpenIAP flow. OpenIAP Flow it self uses the [nodeapi](https://github.com/openiap/nodeapi) implementation, that functions as both a server and client package. -All proto3 files can be found at this [github](https://github.com/openiap/proto) repository. All api implementations uses this repository for generating and parsing the messages to and from OpenIAP flow. +Moving forward we will now use protobuf as the base protocol. This allows us to ensure the same look and feel independently of the programming language used for communicating with OpenCore. OpenCore itself uses the [nodeapi](https://github.com/openiap/nodeapi) implementation, which functions as both a server and client package. +All proto3 files can be found in this [github](https://github.com/openiap/proto) repository. All api implementations use this repository for generating and parsing the messages to and from OpenCore. These messages can then be sent over multiple different base protocols.
Currently those are - [GRPC](https://grpc.io/) @@ -45,9 +45,9 @@ message Envelope { - `traceid`/`spanid` used for collecting spans across applications with [OpenTelemetry](https://opentelemetry.io/) - `data` the message you want to send When packing the Any message, type_url MUST match the `command` ( for instance if sending a signin message, command must be `signin` and type_url must be `type.googleapis.com/openiap.SigninRequest` ) -- `priority` when OpenIAP flow has enable_openflow_amqp enabled, this sets the priority on the message. +- `priority` when OpenCore has enable_openflow_amqp enabled, this sets the priority on the message. We recommend using priority 2 for UI messages, and priority 1 for everything else. This way the UI will always be responsive even under heavy load. For all non-essential things like batch processing use priority 0. By default priority 0 to 3 is enabled in rabbitmq Certain commands will set up a stream to receive multiple messages on the same request. For instance download/upload file will create a stream for sending the file content. -If you send DownloadRequest in Envelop with id 5 then OpenIAP flow will send multiple message with envelop rid 4. First you receive a a BeginStream, then X number of Stream messges with the file centet and then a EndStream message. Finally you will receive a DownloadResponse containing detailed informaiton about the file and folwdown proces. +If you send a DownloadRequest in an Envelope with id 5, then OpenCore will send multiple messages with envelope rid 5. First you receive a BeginStream, then X number of Stream messages with the file content, and then an EndStream message. Finally you will receive a DownloadResponse containing detailed information about the file and the download process.
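The command/type_url pairing rule above can be sketched as follows; only the `signin`/`SigninRequest` pairing is stated in this document, and the `download` entry is an illustrative assumption:

```python
# Sketch of the rule that an Envelope's Any type_url must correspond
# to its command. Only "signin" -> SigninRequest is taken from this
# document; "download" is a hypothetical command name.
COMMAND_TO_MESSAGE = {
    "signin": "SigninRequest",
    "download": "DownloadRequest",  # hypothetical
}

def type_url_for(command: str) -> str:
    """Build the type_url that must accompany a given command."""
    return "type.googleapis.com/openiap." + COMMAND_TO_MESSAGE[command]

def is_consistent(command: str, type_url: str) -> bool:
    """An envelope is only valid when type_url matches its command."""
    return type_url == type_url_for(command)
```

For example, `is_consistent("signin", "type.googleapis.com/openiap.SigninRequest")` holds, while any other type_url paired with the `signin` command would be rejected.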
diff --git a/docs/flow/RPAWorkflowPage.md b/docs/flow/RPAWorkflowPage.md index c7e16f6..250ad8e 100644 --- a/docs/flow/RPAWorkflowPage.md +++ b/docs/flow/RPAWorkflowPage.md @@ -1,18 +1,18 @@ --- layout: default title: Invoke OpenRPA Workflows page -parent: What Is OpenIAP Flow +parent: What Is OpenCore nav_order: 13 --- # Invoke OpenRPA Workflows page What is it? =========== -Workflows in OpenRPA and OpenFlow are the same thing, an algorithm or a sequence of steps that execute a meaningful task. The difference is that when you invoke a workflow in OpenFlow, it creates an instance of that workflow. By accessing the Workflows tab, you may invoke workflows remotely, meaning, the stack in OpenFlow will send a message to the available agent to process and execute the given workflow. +Workflows in OpenRPA and OpenCore are the same thing, an algorithm or a sequence of steps that execute a meaningful task. The difference is that when you invoke a workflow in OpenCore, it creates an instance of that workflow. By accessing the Workflows tab, you may invoke workflows remotely, meaning, the stack in OpenCore will send a message to the available agent to process and execute the given workflow. -From OpenFlow, you can create forms, grant permissions to a given workflow, and most importantly, invoke it. +From OpenCore, you can create forms, grant permissions to a given workflow, and most importantly, invoke it. -OpenFlow automatically manages the workflow repository. When properly connected, by saving a workflow inside OpenRPA, it will also automatically appear inside the Workflows webpage. +OpenCore automatically manages the workflow repository. When properly connected, by saving a workflow inside OpenRPA, it will also automatically appear inside the Workflows webpage. To download a workflow, simply go to the RPA Workflows link and click download. After downloading the .XAML file, you may share it with others or import it into your OpenRPA client. 
@@ -21,15 +21,15 @@ Invoking - **Methods for Invoking** - Here we discuss the methods for invoking a workflow using OpenFlow. + Here we discuss the methods for invoking a workflow using OpenCore. -- **Invoking through OpenFlow's RPA Workflows Page** +- **Invoking through OpenCore's RPA Workflows Page** - To invoke a workflow through OpenFlow, simply go to [RPA Workflows page](https://app.openiap.io/#/RPAWorkflows) and click Invoke. Another page for the specific workflow will be opened, where all the forms needed to be filled are going to be presented. Simply fill them and click Invoke again. The input data is then sent to the chosen robot/agent, and it will start processing the workflow. + To invoke a workflow through OpenCore, simply go to [RPA Workflows page](https://app.openiap.io/#/RPAWorkflows) and click Invoke. Another page for the specific workflow will be opened, where all the forms needed to be filled are going to be presented. Simply fill them and click Invoke again. The input data is then sent to the chosen robot/agent, and it will start processing the workflow. Data processing is bi-directional: input parameters are sent to a robot/agent, and the workflow output will also be returned. That means that you can make many workflows calling different applications. Think of it as a message; messages are sent, read, and replied to. Nothing prevents that message from being sent, read, or replied to multiple times. -> For the user to invoke a workflow using OpenFlow, the user must have the proper permissions. See more at OpenRPA's chapter on Granting permissions to users/roles. +> For the user to invoke a workflow using OpenCore, the user must have the proper permissions. See more at OpenRPA's chapter on Granting permissions to users/roles. 
![Alt text](RPAWorkflowPage/RPAWorkflowPage.png) diff --git a/docs/flow/Requirements.md b/docs/flow/Requirements.md index 91e295e..ba77122 100644 --- a/docs/flow/Requirements.md +++ b/docs/flow/Requirements.md @@ -1,20 +1,21 @@ --- + layout: default title: Requirements -parent: What Is OpenIAP Flow +parent: What Is OpenCore nav_order: 7 --- -# Size recommendations for OpenFlow +# Size recommendations for OpenCore If installing on kubernetes, traefik is required as ingress controller. You can run both nginx and traefik side by side, but you cannot have nginx in front of traefik. A storage provider needs to be provisioned that supports both scaling the size up and down. Using local storage and assigning pods to specific machines is NOT supported or recommended. -If you need to share RabbitMQ with other applications, its recommended create a virtual server solely for OpenFlow. In the connection string then add the name of your virtual server. +If you need to share RabbitMQ with other applications, it's recommended to create a virtual server solely for OpenCore, then add the name of your virtual server to the connection string. ``` amqp_url=amqp://user:password@rabbitmqhost/openflowvirtualserver ``` -Using OpenFlow without premium features: +Using OpenCore without premium features: using docker, with traefik as ingress controller allocated around 200 to 300mb ram for RabbitMQ @@ -27,11 +28,11 @@ each image is around 500 to 1 Gigabyte and most setups takes a long time to reac -#### Using OpenFlow with premium features, then add: +#### Using OpenCore with premium features, then add: -1) for option to use Grafana toward OpenFlow data +1) for the option to use Grafana toward OpenCore data This requires only starting a Grafana instance and should not require more than 50mb to 100mb of RAM (the image is 250mb so also 500mb of disk space ) 2) option to use Open Telemetry to collect usage, metrics and spans and send custom tracing info from NodeRED.
There are a few options here, but a typical setup would involve: diff --git a/docs/flow/ResourcePage.md b/docs/flow/ResourcePage.md index 6fb0dcb..97ae982 100644 --- a/docs/flow/ResourcePage.md +++ b/docs/flow/ResourcePage.md @@ -1,7 +1,7 @@ --- layout: default title: Resource Broker Page -parent: What Is OpenIAP Flow +parent: What Is OpenCore nav_order: 14 --- # Resources Page diff --git a/docs/flow/Security-Model.md b/docs/flow/Security-Model.md index 8f11b8c..14eeabe 100644 --- a/docs/flow/Security-Model.md +++ b/docs/flow/Security-Model.md @@ -1,24 +1,25 @@ --- + layout: default title: Security Model -parent: What Is OpenIAP Flow +parent: What Is OpenCore nav_order: 6 --- ## Security Model When talking about security, you need to look at it from multiple angles and in multiple layers. -OpenFlow does not care about the physical layer ( But we do support running OpenFlow in [Trusted execution environment](https://en.wikipedia.org/wiki/Trusted_execution_environment) So if you are sensitive about code getting changed or injected you can run secure booted environments and have both the repositories, the packages and images digitally signed ) +OpenCore does not care about the physical layer ( but we do support running OpenCore in a [Trusted execution environment](https://en.wikipedia.org/wiki/Trusted_execution_environment), so if you are sensitive about code getting changed or injected you can run secure-booted environments and have the repositories, the packages and the images digitally signed ) -Next is security, as in fault tolerance. Nothing is 100% secure, nothing can be guaranteed to never break down, but we can limit the impact using fault tolerance. The entire system was built to run in [docker](https://openflow.openiap.io/dockercompose)/swarm/[kubernetes](https://github.com/open-rpa/helm-charts/), but can also be deployed as pure [npm packages](https://openflow.openiap.io/npmopenflow) and can run on raspberry pi, Linux, mac and windows.
Everything can run on a single pc/server or distributed. Every single part of the OpenFlow stack support scaling out, there for you can create a system that is as fault tolerance as you want. You fault domains can also span multiple data center and/or cloud providers, and multiple physical location. For distributed deployments, we support a [mesh topology](https://www.google.com/search?q=mesh+topology) where you can setup either allow traffic and events to flow only one or both ways. Each physical location can be configured to allow running disconnected from the network and/or internet ( as long as you have enough storage on site ) and supports prioritizing data and events, both doing normal operations and when syncing up after an network outage. +Next is security, as in fault tolerance. Nothing is 100% secure, nothing can be guaranteed to never break down, but we can limit the impact using fault tolerance. The entire system was built to run in [docker](https://openflow.openiap.io/dockercompose)/swarm/[kubernetes](https://github.com/open-rpa/helm-charts/), but can also be deployed as pure [npm packages](https://openflow.openiap.io/npmopenflow) and can run on raspberry pi, Linux, mac and windows. Everything can run on a single pc/server or distributed. Every single part of the OpenCore stack supports scaling out, therefore you can create a system that is as fault-tolerant as you want. Your fault domains can also span multiple data centers and/or cloud providers, and multiple physical locations. For distributed deployments, we support a [mesh topology](https://www.google.com/search?q=mesh+topology) where you can allow traffic and events to flow either one way or both ways. Each physical location can be configured to allow running disconnected from the network and/or internet ( as long as you have enough storage on site ) and supports prioritizing data and events, both during normal operations and when syncing up after a network outage.
When deploying remote NodeRED's we support running disconnected from the network and/or internet, and they will automatically sync up when the connection is re-established. When deploying using docker or kubernetes, we use traefik as an ingress controller and keep tight control on what "the world" can access. If not using docker, make sure to implement your own protection on who can access MongoDB, RabbitMQ and other parts of the system. -OpenFlow allow signing in with username and password (local provider) but we encourage users to disable local login and only allow signing in using federated providers ( like google, azure/office 365, local ADFS servers or one one of the [500+ supported providers](http://www.passportjs.org/packages/) ) and then use two-factor authentication (2FA) on any account that has access to sensitive data or users. +OpenCore allows signing in with username and password (local provider) but we encourage users to disable local login and only allow signing in using federated providers ( like google, azure/office 365, local ADFS servers or one of the [500+ supported providers](http://www.passportjs.org/packages/) ) and then use two-factor authentication (2FA) on any account that has access to sensitive data or users. -OpenFlow can be used as an [Identity Provider](https://en.wikipedia.org/wiki/Identity_provider) for other systems as well. ( using SAML, OAuth 2 or Open ID Connect). This is handy to keep in line with the [least privileges concept](https://en.wikipedia.org/wiki/Principle_of_least_privilege) but can also be used to "bundle" multiple user credentials into a single identity. +OpenCore can be used as an [Identity Provider](https://en.wikipedia.org/wiki/Identity_provider) for other systems as well ( using SAML, OAuth 2 or OpenID Connect).
This is handy to keep in line with the [least privileges concept](https://en.wikipedia.org/wiki/Principle_of_least_privilege) but can also be used to "bundle" multiple user credentials into a single identity. Every single component in the platform can be configured to send and receive data using HTTPS/TLS, but by default this is terminated in [traefik](https://traefik.io/blog/traefik-2-tls-101-23b4fbee81f1/) and remote endpoints. All data (or parts of it), except file uploads, can be encrypted using AES256 ( this can be customized and/or extended to use an existing PKI infrastructure ) diff --git a/docs/flow/agents/Manage-an-Agents.md b/docs/flow/agents/Manage-an-Agents.md index a3f9026..ca660ef 100644 --- a/docs/flow/agents/Manage-an-Agents.md +++ b/docs/flow/agents/Manage-an-Agents.md @@ -28,7 +28,7 @@ You'll encounter two main sections: agent configuration and settings. - **Timezone**: Sets the global timezone for the agent. - **Run as**: Determines the user identity for running the agent and its packages. -Once you run the agent, OpenIAP Flow will download the Docker image and start it based on your specifications. +Once you run the agent, OpenCore will download the Docker image and start it based on your specifications. ![Agent Pending](Agent-Pending.png) diff --git a/docs/flow/agents/Scheduling-Packages.md b/docs/flow/agents/Scheduling-Packages.md index 6d0b029..58e5de0 100644 --- a/docs/flow/agents/Scheduling-Packages.md +++ b/docs/flow/agents/Scheduling-Packages.md @@ -1,6 +1,6 @@ ## Editing an Agent: Adding Schedules -When editing an agent in OpenIAP Flow, you'll find the "Add schedule" section at the bottom of the agent's configuration page. +When editing an agent in OpenCore, you'll find the "Add schedule" section at the bottom of the agent's configuration page.
![Add Schedule](Add-Schedule.png) diff --git a/docs/flow/agents/What-is-Agents.md b/docs/flow/agents/What-is-Agents.md index a25554a..9fb1c9c 100644 --- a/docs/flow/agents/What-is-Agents.md +++ b/docs/flow/agents/What-is-Agents.md @@ -1,9 +1,9 @@ -# What is an Agent in OpenIAP Flow +# What is an Agent in OpenCore -An `Agent` in OpenIAP Flow is responsible for executing one or more `Packages`. There are several types of agents: +An `Agent` in OpenCore is responsible for executing one or more `Packages`. There are several types of agents: -- `Dockerimage` as part of the OpenIAP Flow installation, typically a cloud-hosted image managed through OpenIAP Flow. +- `Dockerimage` as part of the OpenCore installation, typically a cloud-hosted image managed through OpenCore. - `Nodeagent daemon` installed as a background daemon on macOS and Linux, or as a Windows Service on Windows. - `Assistant` an application run by an end user, which can auto-start on user login. Users can initiate ad-hoc jobs and track scheduled jobs through the Assistant's UI. 
-You can [scheduling packages](scheduling-packages) on any of the above agents on the add/edit agent page in OpenIAP flow +You can [schedule packages](scheduling-packages) on any of the above agents on the add/edit agent page in OpenCore diff --git a/docs/flow/architecture/diagram.py b/docs/flow/architecture/diagram.py index 2258f0d..01f3b7f 100644 --- a/docs/flow/architecture/diagram.py +++ b/docs/flow/architecture/diagram.py @@ -13,7 +13,7 @@ from diagrams.onprem.tracing import Jaeger from diagrams.onprem.database import Cassandra -with Diagram("OpenFlow Basic"): +with Diagram("OpenCore Basic"): with Cluster("Backend"): b = [Mongodb("MongoDB"), Rabbitmq("RabbitMQ")] with Cluster("Remote Clients"): @@ -25,7 +25,7 @@ b << api api << rc -with Diagram("OpenFlow with Traefik"): +with Diagram("OpenCore with Traefik"): with Cluster("Backend"): b = [Mongodb("MongoDB"), Rabbitmq("RabbitMQ")] @@ -46,7 +46,7 @@ t << rc -with Diagram("OpenFlow with Monitoring"): +with Diagram("OpenCore with Monitoring"): with Cluster("Backend"): b = [Mongodb("MongoDB"), Rabbitmq("RabbitMQ")] diff --git a/docs/flow/index.md b/docs/flow/index.md index 7704ea7..515171e 100644 --- a/docs/flow/index.md +++ b/docs/flow/index.md @@ -1,15 +1,15 @@ --- layout: default -title: What Is OpenIAP Flow +title: What Is OpenCore nav_order: 3 has_children: true --- -**OpenIAP Flow** is a versatile framework designed to simplify the creation, deployment, and management of distributed code. At its core, OpenIAP Flow excels in orchestrating a variety of agents and workflows. Let's explore some of its standout features: +**OpenCore** is a versatile framework designed to simplify the creation, deployment, and management of distributed code. At its core, OpenCore excels in orchestrating a variety of agents and workflows. Let's explore some of its standout features: - **Managing, invoking, and configuring your robots and workflows**: Seamlessly control and customize your automation processes.
- **Managing users and their permission levels**: Keep your system secure by managing user access efficiently. - **Creating forms for human interaction**: Simplify human input in processes with easy form creation and track pending workflows. -- **A central repository**: OpenFlow serves as a one-stop repository for workflows, package code, credentials, entities, and any unstructured data. +- **A central repository**: OpenCore serves as a one-stop repository for workflows, package code, credentials, entities, and any unstructured data. - **Managing data**: Access data effortlessly through the API or web interface. Features like on-the-fly encryption, built-in version control, and a centralized backup point enhance data security and management. -Isn't this exciting? With OpenFlow, streamlining and enhancing your business processes becomes a breeze! +Isn't this exciting? With OpenCore, streamlining and enhancing your business processes becomes a breeze! diff --git a/docs/flow/workitems/Creating-a-Workitem-Queue.md b/docs/flow/workitems/Creating-a-Workitem-Queue.md index 2fc7c9c..5a3c413 100644 --- a/docs/flow/workitems/Creating-a-Workitem-Queue.md +++ b/docs/flow/workitems/Creating-a-Workitem-Queue.md @@ -1,4 +1,4 @@ -From OpenIAP Flow's web interface you can start by adding a new workitem queue. +From OpenCore's web interface you can start by adding a new workitem queue. Click `Work item Queues` in the main menu, then click the `+` button to add a new queue. ![Alt text](plusbutton.png) @@ -16,7 +16,7 @@ Click `Work item Queues` in the main menu, then click `+` button to add a new qu Both `Robot/Role` and `Workflow` must have a value before this will work. You can use [RPA roles](flow/Managing-Roles) to spread the workload between multiple robots. # Agent and NodeRED specific settings -- **amqpqueue**: OpenIAP flow will periodicly send an empty message to this queue, when there are new items ready to be processed in the queue.
This is how we can allow a NodeRED workflow or an deamon agent package wait for items without having to "test" pull at certain intervals. This is always the prefere way of implementing this. +- **amqpqueue**: OpenCore will periodically send an empty message to this queue when there are new items ready to be processed in the queue. This is how we can let a NodeRED workflow or a daemon agent package wait for items without having to poll at certain intervals. This is always the preferred way of implementing this. # Agent specific settings - **amqpqueue**: Which agent to notify about specific workitems ready to be processed. diff --git a/docs/nodered/example-flows.md b/docs/nodered/example-flows.md index 40c0611..3e09d00 100644 --- a/docs/nodered/example-flows.md +++ b/docs/nodered/example-flows.md @@ -4,11 +4,11 @@ title: Example Workflows parent: What Is NodeRED --- -# Using OpenFlow Forms +# Using OpenCore Forms -## Create a Form in OpenFlow +## Create a Form in OpenCore -In this section, users will learn how to create a Form in OpenFlow. Refer to the Forms section for more information. +In this section, users will learn how to create a Form in OpenCore. Refer to the Forms section for more information. 1. Navigate to the [Forms page](http://app.openiap.io/#/Forms) and click the `Add Form` button. ![Add Form Button](../../images/nodered_openflow_forms_click_add_form_button.png) 2. Drag a `Text Field` form to the Form designer. ![Drag Text Field](../../images/nodered_openflow_forms_drag_textfield_form.png) -3. Change the `Label` parameter to `Please enter 'Hello from OpenFlow!' below`. +3. Change the `Label` parameter to `Please enter 'Hello from OpenCore!' below`. ![Change Label](../../images/nodered_openflow_forms_change_label_textfield_form.png) 4. Click on the `API` tab and change the `Property Name` to `hello_from_openflow`.
@@ -25,7 +25,7 @@ In this section, users will learn how to create a Form in OpenFlow. Refer to the 5. Click the `Save` button, set the Form name as `hellofromopenflow`, and save it. ![Save Form](../../images/nodered_openflow_forms_set_name_and_save.png) -Congratulations! You have successfully configured a Form in OpenFlow. +Congratulations! You have successfully configured a Form in OpenCore. ## Configure Form in Node-RED @@ -76,10 +76,10 @@ In this section, users will learn how to invoke the Form just created using Node 5. Click the button inside the `inject` node to assign an instance of the Workflow to the `users` role. -6. Navigate to OpenFlow's home page to see the instance of the Workflow. - ![OpenFlow Homepage](../../images/nodered_openflow_forms_homepage.png) +6. Navigate to OpenCore's home page to see the instance of the Workflow. + ![OpenCore Homepage](../../images/nodered_openflow_forms_homepage.png) -7. Test the Form by entering `Hello from OpenFlow!` in the text field and clicking the **Submit** button. A debug message will appear in Node-RED. +7. Test the Form by entering `Hello from OpenCore!` in the text field and clicking the **Submit** button. A debug message will appear in Node-RED. ![Debug Message](../../images/nodered_openflow_forms_debug_message.png) This completes the process of invoking the Form using Node-RED. Users can now test and interact with the Form they have created. diff --git a/docs/nodered/index.md b/docs/nodered/index.md index dcf3211..2519c2a 100644 --- a/docs/nodered/index.md +++ b/docs/nodered/index.md @@ -7,13 +7,13 @@ has_children: true # What Is NodeRED -In OpenIAP flow you can start an agent with Node-RED preloaded with a lot of OpenIAP flow specific nodes. Node-RED is a visual programming tool used to automate Software API's and hardware devices (IoT). Its an Open Source and much more advanced version, of closed source platforms like zapier or n8n. 
+In OpenCore you can start an agent with Node-RED preloaded with a lot of OpenCore-specific nodes. Node-RED is a visual programming tool used to automate software APIs and hardware devices (IoT). It is an open-source and much more advanced alternative to closed-source platforms like Zapier or n8n. It provides an in-browser editor where you can connect flows using any nodes available. Each node represents a step that, when wired together with others, forms a meaningful task. It also follows a common pattern: input, processing and output. It is important to note that Node-RED functions as middleware for an information processing system. It simply connects the inputs to the workflows and allows them to process it. # Getting started -To get started, login to Openflow and then click Agents in the menu. +To get started, log in to OpenCore and then click Agents in the menu. Then click `Add agent` ![Add Agent](add-agent.png) @@ -22,16 +22,16 @@ Change the name if you like, then select NodeRED in the `image` dropdown menu. ![NodeRED image](nodered-image.png) -This will auto fill out the required envoriment variables for your new NodeRED instance. This is how we configure NodeRED for instance `nodered_id` tell NodeRED were to store your workflows and other information inside openflow. You can find these later in the `nodered` Collection under `Entities` +This will automatically fill out the required environment variables for your new NodeRED instance. This is how we configure NodeRED; for instance, `nodered_id` tells NodeRED where to store your workflows and other information inside OpenCore. You can find these later in the `nodered` Collection under `Entities`. -Now Click `Save` and OpenFlow will save and then start your new NodeRED instance. +Now click `Save` and OpenCore will save and then start your new NodeRED instance. ![Save](save.png) If this is the first time you start a NodeRED and you are on a local installation this might take a little time, while it downloads the NodeRED docker image.
But after a while it will say `Status` Running ![Nodered Status](nodered-status.png) -> Note: you can see CPU and Memory usage here too. If you are using the cloud based version of OpenIAP Flow, and these numbers get to high, it might be time to purche a bigger instance. +> Note: you can see CPU and Memory usage here too. If you are using the cloud-based version of OpenCore, and these numbers get too high, it might be time to purchase a bigger instance. Now click the last button here, to open a new tab with your NodeRED diff --git a/docs/openrpa/CommandLine.md index 00b3bb0..19169d8 100644 --- a/docs/openrpa/CommandLine.md +++ b/docs/openrpa/CommandLine.md @@ -38,7 +38,7 @@ OpenRPA.exe /workflowid "dev\add_to_notepad.xaml" -text "Hi mom" ``` ### Running using PowerShell -Using the PowerShell module you can also run openrpa and openflow workflow. You can pipe an object or hash table to the command to fill out workflow arguments and Invoke-OpenRPA will per default with for the workflow to complete and return an object with all out arguments. This allows for for much better control since you can now parse the result. +Using the PowerShell module you can also run OpenRPA and OpenCore workflows. You can pipe an object or hash table to the command to fill out workflow arguments, and Invoke-OpenRPA will by default wait for the workflow to complete and return an object with all out arguments. This allows for much better control since you can now parse the result. ![PowerShell1](commandline/PowerShell1.png) diff --git a/docs/openrpa/Debugging.md index 2b30be1..ea7f243 100644 --- a/docs/openrpa/Debugging.md +++ b/docs/openrpa/Debugging.md @@ -45,7 +45,7 @@ Now test contains "hi mom" and the yellow border has moved to the next activity. You do not need to single-step over every activity; by pressing F5 ( or the Play button ) the workflow will continue as normal, and/or stop next time it hits a breakpoint.
You can add and remove breakpoints even when the workflow is running or is idle. -Breakpoints also work, when workflows have been started remotely from OpenFlow, making debugging parameters from OpenFlow very easy. +Breakpoints also work when workflows have been started remotely from OpenCore, making it very easy to debug parameters coming from OpenCore. ![1559999293327](debugging/1559999293327.png) diff --git a/docs/openrpa/Detectors.md index f6202fb..1fd02ac 100644 --- a/docs/openrpa/Detectors.md +++ b/docs/openrpa/Detectors.md @@ -7,16 +7,16 @@ nav_order: 4 --- # Detectors -Workflows needs to be activated in some way. And opening the workflow and pressing Play is not always the best solution. Detectors is a way to make the robot serve as a detection probe for OpenFlow and/or as a way of activating Workflow locally on the robot machine. For instance, when implementing Assisted Robotics, where the robot will help a human at the PC, it would make a lot of sense to teach the users that pressing a certain key combination, for instance Ctrl+M will activate a robot sequence that does something, like copying the content of the select field, call an workflow in OpenFlow and insert data in the form, based on the result. Another common detector is the FileWatch detector. Setting op a job that monitors a folder for CSV files, and processes them, once a new one arrives. +Workflows need to be activated in some way, and opening the workflow and pressing Play is not always the best solution. Detectors are a way to make the robot serve as a detection probe for OpenCore and/or as a way of activating workflows locally on the robot machine.
For instance, when implementing Assisted Robotics, where the robot will help a human at the PC, it would make a lot of sense to teach the users that pressing a certain key combination, for instance Ctrl+M, will activate a robot sequence that does something, like copying the content of the selected field, calling a workflow in OpenCore and inserting data in the form based on the result. Another common detector is the FileWatch detector: setting up a job that monitors a folder for CSV files and processes them once a new one arrives. ## Setting up a detector Pressing Detectors in the toolbar will open a list of detectors you have access to; at the top will be a list of the currently installed detector plugins, and clicking one of these will add a new Detector. Select a detector and press the Delete key on your keyboard, to remove a detector. Depending on the detector you will get a list of properties and buttons. For instance, the 2 Windows Detectors have a Select button, and the option to fine-tune the selector by clicking Open Selector. -As soon as the detector is added, it becomes active, and you will be able to listen for Detector events inside OpenFlow, or use the detector in local workflows. +As soon as the detector is added, it becomes active, and you will be able to listen for Detector events inside OpenCore, or use the detector in local workflows. ## Using a detector -There are 2 main uses for detector. Listening for events from OpenFlow to react on robot computer events, or inside running workflows. +There are 2 main uses for detectors: listening for events from OpenCore to react to events on the robot computer, or using them inside running workflows. ### In OpenRPA Let's say you created a click detector that reacts on a user clicking the Help menu item in Notepad. Now create a new workflow and add a DoWhile activity and set Condition to true. We want the workflow to run forever, since the workflow will end once it reaches the end.
By keeping the detector and other code inside a never-ending loop, we can make sure the workflow keeps running, and triggers every time the detector triggers. @@ -31,7 +31,7 @@ Now press Play, and try clicking the Help menu item and enjoy how helpful this w ### In NodeRED -Once a detector have been added it will be visible inside OpenFlow. Just drag in a detector node, and select the new detector from the drop down list +Once a detector has been added, it will be visible inside OpenCore. Just drag in a detector node and select the new detector from the drop-down list ![1558865708406](detectors/1558865708406.png) @@ -137,7 +137,7 @@ Monitors for new files in a specified folder. **Parameters:** -- `Name`: Name of the detector in OpenFlow. +- `Name`: Name of the detector in OpenCore. - `Path`: Absolute path to the monitored folder. - `File filter`: Filters files with a specific extension. - `Sub Directories`: If checked, it also monitors subdirectories. diff --git a/docs/openrpa/Offline.md index 961a669..b4b3072 100644 --- a/docs/openrpa/Offline.md +++ b/docs/openrpa/Offline.md @@ -1,13 +1,13 @@ --- layout: default title: Using robot "offline" -description: How to configure the robot not to use OpenFlow ( "offline" mode ) +description: How to configure the robot not to use OpenCore ( "offline" mode ) parent: What Is OpenRPA nav_order: 6 --- ## "Offline" mode -OpenRPA was made to work in tandem with OpenFlow, but it can work in a standalone mode where it does not need to be connected to an OpenFlow instance, but then you loose all the benefits from OpenFlow. +OpenRPA was made to work in tandem with OpenCore, but it can work in a standalone mode where it does not need to be connected to an OpenCore instance; however, you then lose all the benefits of OpenCore.
Make sure the robot is not running, then open the file settings.json inside "Documents\OpenRPA" diff --git a/docs/openrpa/OpenRPA-Installer.md index a293296..59116b4 100644 --- a/docs/openrpa/OpenRPA-Installer.md +++ b/docs/openrpa/OpenRPA-Installer.md @@ -27,8 +27,8 @@ Next you need to select if you need more than the default OpenRPA extensions ![Custom Setup 3](OpenRPA-Installer/Custom-Setup-3.png) - **OpenRPA Core**: Is mandatory and contains OpenRPA itself. -- **Openflow Specific**: Activties for working with OpenFlow database. -- **PowerShel module**: Allow working with OpenRP and OpenFlow database from PowerShell +- **OpenCore Specific**: Activities for working with the OpenCore database. +- **PowerShell module**: Allows working with OpenRPA and the OpenCore database from PowerShell - **Microsoft Office**: This will be hidden if installer did not detect a supported version of Microsoft Office (2010+). Add ability to record, and activities to work with Microsoft Office directly. - **Forge Forms**: Add activities for doing user interaction using forms from inside OpenRPA workflows. - **Internet Explorer**: Add ability to record, and activities for working with Internet Explorer. @@ -43,7 +43,7 @@ Next you need to select if you need more than the default OpenRPA extensions - **File Watcher**: Add the file watcher detector. This allows the Robot to wait on Windows File System Notifications on specific folders/files. - **AviRecorder**: Add abilities to OpenRPA for doing automatic screen recordings, and adds activities for working manually with screen recordings. - **Invoice scanning with Rosum**: Add abilities to OpenRPA for integrating with Rosum AI for invoice processing. -- **High Density Robots**: Add tools and extensions to OpenRPA that allows installing a Windows Server on the machine to login in one or more RDP session on the local machine and keep OpenRPA running inside those.
This requires OpenRPA to be installed for all users, it requires OpenRPA is talking to OpenFlow. +- **High Density Robots**: Adds tools and extensions to OpenRPA that allow a Windows Server installation on the machine to log in to one or more RDP sessions on the local machine and keep OpenRPA running inside those. This requires OpenRPA to be installed for all users, and it requires that OpenRPA is talking to OpenCore. After completion, you can find OpenRPA in the start menu. diff --git a/docs/openrpa/OpenRPA-Settings.md index 605f684..c66290a 100644 --- a/docs/openrpa/OpenRPA-Settings.md +++ b/docs/openrpa/OpenRPA-Settings.md @@ -14,9 +14,9 @@ If you are in a corporate environment using [roaming profiles](https://learn.mic When you start **OpenRPA** it will load `layout.config`. This will store the user's layout preferences (panel sizes, order, location, etc.). This file is updated when you exit OpenRPA. To reset all UI elements to the defaults, simply close OpenRPA and delete this file. -OpenRPA will Create/Open a .db file store in the users OpenRPA folder. The name of the file will match the domain name of the OpenFlow it's connected to (or `offline` when running [in offline mode](Offline)). This way, OpenRPA does not hve to redownload all data whenever you are switching between different OpenFlow installations, or switching between online and offline modes. +OpenRPA will create/open a .db file stored in the user's OpenRPA folder. The name of the file will match the domain name of the OpenCore it's connected to (or `offline` when running [in offline mode](Offline)). This way, OpenRPA does not have to redownload all data whenever you are switching between different OpenCore installations, or switching between online and offline modes. If something goes wrong, you can always close OpenRPA and delete this file, then OpenRPA will re-create it and download all workflows it has access to next time you start it.
-> important note: do NOT delete this file, if you are in offline mode. This is where all data is stored. Often make backup's of this file, if you are in Offline mode. For users connection to OpenFlow, version control and backups are handling using OpenFlow. +> Important note: do NOT delete this file if you are in offline mode; this is where all data is stored. Make frequent backups of this file if you are in offline mode. For users connecting to OpenCore, version control and backups are handled by OpenCore. > If you are having frequent issues with the .db file, consider switching from .db local storage to file based local storage. See [further in the document](#local-storage-settings) for details. Editing either file (`settings.json` / `layout.config`) while **OpenRPA** is running will have no effect and, when the application is closed, any value changed will be lost/overwritten. Hence, if you desire to edit any setting, always make sure that **OpenRPA** is not running. @@ -37,28 +37,28 @@ Computer\HKEY_LOCAL_MACHINE\SOFTWARE\OpenRPA ## Common settings Below, some of the common settings are explained: -- **wsurl:** Url of the Openflow you want to connect OpenRPA to. use wss:// if your openflow is using ssl certificate, use ws:// if it does not. +- **wsurl:** URL of the OpenCore you want to connect OpenRPA to. Use wss:// if your OpenCore is using an SSL certificate, use ws:// if it does not. - **notify_on_workflow_end:** Show a Windows notification when a workflow has ended. -- **notify_on_workflow_remote_start:** Show a Windows notification if OpenFlow has requested a workflow to be run. +- **notify_on_workflow_remote_start:** Show a Windows notification if OpenCore has requested a workflow to be run. - **notify_on_workflow_remote_end:** Show a Windows notification when a workflow triggered remotely has completed. - **isagent:** If set to true, OpenRPA will start with the UI in a compact mode, without access to the Workflow Designer.
This mode is meant for Assisted Robotics, where you want a "slim" UI for the end users. -- **updatecheckinterval:** By default OpenRPA will register a watch in OpenFlow to detect changes in the database, but if you are connected to an older OpenFlow that does not support this, OpenRPA will pull for updates with this interval +- **updatecheckinterval:** By default OpenRPA will register a watch in OpenCore to detect changes in the database, but if you are connected to an older OpenCore that does not support this, OpenRPA will poll for updates at this interval. - **thread_lock_timeout_seconds:** If 2 threads try to work with the UI, each thread will wait this long to get exclusive access, and if not, it will throw an exception in the log window and continue. -- **skip_online_state:** By default OpenRPA will save state for running workflow, this means if you close OpenRPA or the machine restarts, when OpenRPA starts again, it will continue the workflow from last persist. This information is also save inside OpenFlow. To save bandwith or DB space you can disable saving the state online here. +- **skip_online_state:** By default OpenRPA will save state for running workflows; this means if you close OpenRPA or the machine restarts, when OpenRPA starts again it will continue the workflow from the last persist. This information is also saved inside OpenCore. To save bandwidth or DB space you can disable saving the state online here. - **disable_instance_store:** To completely disable saving state, set this to false. - **skip_child_session_check:** During startup OpenRPA will try and detect if it is running inside a Child Session ( sometimes also called Picture in Picture ). If OpenRPA detects it is running inside a child session it will not update the config file and it will not register watches and queues. This will also disable remote running except using the special PowerShell commands. It is the "host's" responsibility to handle these things.
Sometimes on VDI installations OpenRPA will detect all OpenRPA instances as running in a child session; if this happens to you, set this value to "true" - **showloadingscreen:** if "isagent" is set to false, you can disable the OpenRPA loading screen here. In version 1.4.55 the loading screen has been removed, but the setting has been kept for backward compatibility. - **username:** This field will get updated whenever you sign in. - **entropy:** key needed to decrypt jwt or password; only the Windows user that created this can decrypt it, so copying this config file to a different user will not work, and will make OpenRPA prompt the user to log in again. -- **jwt:** Encrypted JWT token issued by OpenFlow to the user. +- **jwt:** Encrypted JWT token issued by OpenCore to the user. - **password:** encrypted password. To set this, use **unsafepassword**; during the next start of OpenRPA the password will be encrypted and unsafepassword reset. -- **remote_allowed:** Disable to never allow OpenFlow to remotely run workflows on this robot. +- **remote_allowed:** Disable to never allow OpenCore to remotely run workflows on this robot. - **remote_allow_multiple_running:** If allowed, do we allow running more than one workflow at a time? Since RPA is about automating the UI we cannot allow multiple workflows to try and control the desktop. It is highly recommended to keep this set to false. You can still allow more than one workflow to run by marking certain workflows as "background" workflows; do this for workflows that will never interact with the UI. This way, those workflows will not "count" as running. - **remote_allow_multiple_running:** If multiple are allowed, how many?
-- **remote_allowed_killing_self** Do we allow OpenFlow to tell the robot to kill any running instances of the workflow it's trying to run (kill if running) -- **remote_allowed_killing_any** Do we allow OpenFlow to tell the robot to kill **any** workflow running or just the same workflow as it's requesting us to run. +- **remote_allowed_killing_self** Do we allow OpenCore to tell the robot to kill any running instances of the workflow it's trying to run (kill if running). +- **remote_allowed_killing_any** Do we allow OpenCore to tell the robot to kill **any** running workflow, or just the same workflow as it's requesting us to run. - **recording_add_to_designer:** By default, when you are recording, OpenRPA will add every single activity to the workflow right away. On low-end hardware this can have a big performance impact, therefore you can allow OpenRPA to collect each action and first add them to the workflow at the end. When using this be very careful not to work too fast, or OpenRPA cannot keep up and will "miss" certain actions. -- **querypagesize** When OpenRPA is requestiong data from OpenFlow how many items does it get at a time ? +- **querypagesize** When OpenRPA is requesting data from OpenCore, how many items does it get at a time? - **ocrlanguage** If the image recognition extension has been installed, what language we want to use for doing OCR on the screen. - **noweblogin** Disable requesting user to log in using the browser. This forces login to only work using `username` and `unsafepassword` - **max_workflows** limit the robot to only download this number of workflows ( to avoid crashing when signed in as an admin with access to everything ) @@ -77,11 +77,11 @@ Below is some of the common settings explained Before changing these settings, take note that: - Only one storage option should be enabled at the same time. Keeping more than one enabled is not supported and may lead to unexpected desynchronizations of saved data.
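The common settings listed above all live in `settings.json`. A minimal sketch of what a few of them can look like — values are illustrative only; keep the other keys in your file intact, and only edit it while OpenRPA is closed:

```json
{
  "wsurl": "wss://app.openiap.io",
  "notify_on_workflow_end": true,
  "notify_on_workflow_remote_start": true,
  "isagent": false,
  "remote_allowed": true,
  "remote_allow_multiple_running": false,
  "querypagesize": 100
}
```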
-- Storage systems **do not** automatically synchronize with each other, but all of them do synchronize with OpenFlow if connected. Therefore, if you're working in online mode, your work will synchronize automatically, but if working in offline mode you need to export/import accordingly when switching. +- Storage systems **do not** automatically synchronize with each other, but all of them do synchronize with OpenCore if connected. Therefore, if you're working in online mode, your work will synchronize automatically, but if working in offline mode you need to export/import accordingly when switching. When using `StorageLiteDB` **OpenRPA** will behave the same as with previous versions. -When using `StorageFileSystem` **OpenRPA** will create subfolders in your OpenRPA folder, named after OpenFlow instances you connect to (or offline), and further subfolders for used elements (projects, workflows and so on). +When using `StorageFileSystem` **OpenRPA** will create subfolders in your OpenRPA folder, named after OpenCore instances you connect to (or offline), and further subfolders for used elements (projects, workflows and so on). `StorageFileSystem_strict` is recommended to be left enabled. It will make **OpenRPA** validate certain operations (update, delete etc.) and display errors if storage state is unexpected, for example if you're trying to update a workflow that isn't in the storage. If you are using `StorageFileSystem`, take note that: - Running from `Documents/OpenRPA` and having OneDrive or similar synchronization software enabled may result in synchronization loops on `openrpa_instances` subfolder (which keeps track of all started and completed workflows). 
If that is an issue for you, consider moving to `%appdata%` instead \ No newline at end of file diff --git a/docs/openrpa/OpenRPA-State.md index 04b7f9b..6f17f40 100644 --- a/docs/openrpa/OpenRPA-State.md +++ b/docs/openrpa/OpenRPA-State.md @@ -14,19 +14,19 @@ Here are some limitations users may face when dealing with workflows containing ### Saving workflow states -When **OpenRPA** is connected to an **OpenFlow**, workflow states are saved automatically in **OpenFlow** whenever specific activities¹ are reached. These states exist to indicate the current situation relevant to a workflow instance, such as the designer layout (activities, sequences) being run, the variables and arguments and their current values, etc. Hence, if the workflow contains non-serializable objects such as a `DataTable`, the state cannot be saved. +When **OpenRPA** is connected to an **OpenCore** instance, workflow states are saved automatically in **OpenCore** whenever specific activities¹ are reached. These states exist to indicate the current situation relevant to a workflow instance, such as the designer layout (activities, sequences) being run, the variables and arguments and their current values, etc. Hence, if the workflow contains non-serializable objects such as a `DataTable`, the state cannot be saved. *¹ - All the activities that "go idle" (for instance `Detector`, `Delay`, `Persist` and all the `Invoke` activities)* *Workaround*: Split complex workflows into smaller workflows, leaving the smaller workflows to manage the non-serializable objects. That way if an unexpected interruption occurs, you will not lose all the data. ### Remote OpenRPA / invoking with non-serializable arguments -### Invoke Openflow / Node-RED invoke and data return +### Invoke OpenCore / Node-RED invoke and data return If you wish to use one of the `Invoke` activities, be aware that the workflow being invoked may not use non-serializable arguments.
As the destination computer / **OpenRPA** is different from the source **OpenRPA**, the non-serializable arguments cannot be passed around and thus the invoking will fail. -Similarly, when using the `Invoke Openflow` activity to invoke a flow in **Node-RED**, non-serializable objects are not supported as arguments. An exception is the `DataTable` type; **OpenRPA** will attempt to convert these to `JArray Objects` before contacting **Node-RED** and convert it back to `DataTable` when the data is returned / node `workflow out` is used. +Similarly, when using the `Invoke OpenCore` activity to invoke a flow in **Node-RED**, non-serializable objects are not supported as arguments. An exception is the `DataTable` type; **OpenRPA** will attempt to convert these to `JArray Objects` before contacting **Node-RED** and convert them back to `DataTable` when the data is returned / the `workflow out` node is used. -*Workaround*: Use the activities from `OpenRPA.OpenFlowDB` toolbox to upload files/update data/create entities and collections in **OpenFlow** MongoDB. Then, on the destination computer, access the data by querying **OpenFlow** MongoDB. This way, the entire `Data/DataSet/DataTable` (any non-serializable object) is stored within MongoDB and the parameter passed during the invoke can be the `_id` or some other identifier. +*Workaround*: Use the activities from the `OpenRPA.OpenFlowDB` toolbox to upload files/update data/create entities and collections in **OpenCore** MongoDB. Then, on the destination computer, access the data by querying **OpenCore** MongoDB. This way, the entire `Data/DataSet/DataTable` (any non-serializable object) is stored within MongoDB and the parameter passed during the invoke can be the `_id` or some other identifier. *Workaround 2*: Convert the non-serializable object into a serializable one, like a Base64 string, then pass it as a parameter and convert it back if needed.
diff --git a/docs/openrpa/OpenRPA-UI.md index ced6c01..041321f 100644 --- a/docs/openrpa/OpenRPA-UI.md +++ b/docs/openrpa/OpenRPA-UI.md @@ -71,7 +71,7 @@ This bar has three sections: `Logging`, `Output`, and `Workflow Instances`. ### Connection Bar -- Shows the connection status with the **OpenFlow** web service and the status of `NM` and `SAP` plugins. +- Shows the connection status with the **OpenCore** web service and the status of `NM` and `SAP` plugins. ![OpenRPA Connection Bar](../../images/openrpa_connection_bar.png) diff --git a/docs/openrpa/Requirements.md index b66afff..25f74fc 100644 --- a/docs/openrpa/Requirements.md +++ b/docs/openrpa/Requirements.md @@ -14,9 +14,9 @@ If a workflow goes through a list of something, consider using work items and wor If you enable a click or element detector the robot will be monitoring every mouse click on the computer, and on very low-end computers this can give a few milliseconds delay on clicks and show a little CPU usage on the robot. But on most PCs this is a non-issue. -The robot, if not configured on offline mode, will require a working network connection to start. Once in sync with [OpenFlow](https://github.com/open-rpa/openflow) it will function even if the network goes a wait for a short periods of time. -The robot uses WebSocket's to connect with [OpenFlow](https://github.com/open-rpa/openflow), if using a firewall that does layer 4 inspection or if you are using a proxy server, be sure to check this supports WebSocket's. +The robot, if not configured in offline mode, will require a working network connection to start. Once in sync with [OpenCore](https://github.com/open-rpa/openflow) it will function even if the network goes away for short periods of time.
+The robot uses WebSockets to connect with [OpenCore](https://github.com/open-rpa/openflow); if you are using a firewall that does layer 4 inspection, or a proxy server, be sure to check that it supports WebSockets. If you are using [app.openiap.io](https://app.openiap.io) be aware that this is running in Google Cloud, so for people in certain countries, this can be an issue. -If you are hit by any of the last two issues, either use the robot in offline mode, or create your own installation of [OpenFlow](https://github.com/open-rpa/openflow), in another network or On-premise. The are a guides on how to create demo setups on [OpenFlow](https://github.com/open-rpa/openflow)'s GitHub page \ No newline at end of file +If you are hit by any of the last two issues, either use the robot in offline mode, or create your own installation of [OpenCore](https://github.com/open-rpa/openflow) in another network or on-premises. There are guides on how to create demo setups on [OpenCore](https://github.com/open-rpa/openflow)'s GitHub page \ No newline at end of file diff --git a/docs/openrpa/SAP-Knowledge.md index 0513f46..bad13e1 100644 --- a/docs/openrpa/SAP-Knowledge.md +++ b/docs/openrpa/SAP-Knowledge.md @@ -69,7 +69,7 @@ Get a good start with SAP in OpenRPA [![Get a good start with SAP in OpenRPA](https://img.youtube.com/vi/nDLKHrX3SxE/0.jpg)](https://www.youtube.com/watch?v=nDLKHrX3SxE) -Running OpenFlow offline and some MQTT stuff +Running OpenCore offline and some MQTT stuff [![Recording in SAP with OpenRPA](https://img.youtube.com/vi/4VJ2Q4mPWnk/0.jpg)](https://www.youtube.com/watch?v=4VJ2Q4mPWnk) diff --git a/index.md index f8fa7fa..cae97a5 100644 --- a/index.md +++ b/index.md @@ -3,15 +3,15 @@ title: OpenIAP Documentation layout: home nav_order: 1 --- -**OpenIAP Flow** is a versatile framework designed to simplify the creation, deployment, and management of distributed code.
At its core, OpenIAP Flow excels in orchestrating a variety of agents and workflows. Let's explore some of its standout features: +**OpenCore** is a versatile framework designed to simplify the creation, deployment, and management of distributed code. At its core, OpenCore excels in orchestrating a variety of agents and workflows. Let's explore some of its standout features: - **Managing, invoking, and configuring your robots and workflows**: Seamlessly control and customize your automation processes. - **Managing users and their permission levels**: Keep your system secure by managing user access efficiently. - **Creating forms for human interaction**: Simplify human input in processes with easy form creation and track pending workflows. -- **A central repository**: OpenFlow serves as a one-stop repository for workflows, package code, credentials, entities, and any unstructured data. +- **A central repository**: OpenCore serves as a one-stop repository for workflows, package code, credentials, entities, and any unstructured data. - **Managing data**: Access data effortlessly through the API or web interface. Features like on-the-fly encryption, built-in version control, and a centralized backup point enhance data security and management. -Isn't this exciting? With OpenFlow, streamlining and enhancing your business processes becomes a breeze! +Isn't this exciting? With OpenCore, streamlining and enhancing your business processes becomes a breeze! ## **community help** Join the 🤷💻🤦 [community forum](https://discourse.openiap.io/)