From e836df9f4dbcf24e5b0abf58485285e2f4d8f190 Mon Sep 17 00:00:00 2001 From: Carl Anderson Date: Mon, 26 Feb 2024 17:14:25 +0000 Subject: [PATCH 1/7] updates to the README and gitignore --- .gitignore | 7 +++++++ README.md | 52 ++++++++++++++++++++++++++++++++-------------------- 2 files changed, 39 insertions(+), 20 deletions(-) diff --git a/.gitignore b/.gitignore index ea9c809..b278a8a 100644 --- a/.gitignore +++ b/.gitignore @@ -8,6 +8,13 @@ node_modules /template.tests /template +############################## +## sushi +############################## +_gencontinuous.* +_genonce.* +_updatePublisher.* + ############################## ## IntelliJ ############################## diff --git a/README.md b/README.md index fee7b20..f88da10 100644 --- a/README.md +++ b/README.md @@ -15,21 +15,29 @@ including configuration for the menu. This is a Sushi project and can use HL7 IG Publisher to build locally: 1. Clone this repository - 2. Run `./scripts/_updatePublisher.sh` to get the latest IG publisher - 3. Run `./scripts/_genonce.sh` to generate the IG - 4. Run `open output/index.html` to view the IG website + 1. Run `./scripts/_updatePublisher.sh` to get the latest IG publisher + 1. Install `sushi` if you don't have it already with: `npm install -g fsh-sushi` + 1. Run `./scripts/_genonce.sh` to generate the IG + 1. Run `open output/index.html` to view the IG website + 1. Alternatively, you may run a local `http-server` to view the built content: +``` +npm i http-server +cd output +http-server +``` Building tests, see [test README](tests/README.md) ## Testing Implementation -Specification contains set of tests in `/tests` directory. -Tests are set of test case files, each case covers one aspect of implementation. -Test case is represented as JSON document. It has title and description attributes, -set of fixtures (FHIR resources) as `resources` attribute, and array of test objects. 
+This specification contains a set of tests in the `/tests` directory, +which are a set of test case files, each covering one aspect of the implementation. +A test case is represented as a JSON document with `title` and `description` attributes, +a set of fixtures (FHIR resources) as the `resources` attribute, and an array of test objects. -Test object has unique `title`, ViewDefinition at `view` attribute, and expected set of resulting -rows in `expect` attribute. +[TODO]: # (link to the ViewDefinition resource here.) +A test object has a unique `title`, a ViewDefinition as the `view` attribute, and an expected set of resulting +rows in the `expect` attribute. ## Tests Overview @@ -38,17 +46,17 @@ directory. Each test case file is structured to include a combination of attributes that define the scope and expectations of the test. The main components of a test case file are: -- **Title**: A brief, descriptive title that summarizes the aspect of the +- **Title** (`title` attribute): A brief, descriptive title that summarizes the aspect of the implementation being tested. -- **Description**: A detailed explanation of what the test case aims to +- **Description** (`description` attribute): A detailed explanation of what the test case aims to validate, including any relevant context or specifications that the test is based on. - **Fixtures** (`resources` attribute): A set of FHIR resources that serve as input data for the test. These fixtures are essential for setting up the test environment and conditions. -- **Test Objects**: An array of objects, each representing a unique test +- **Test Objects** (`tests` attribute): An array of objects, each representing a unique test scenario within the case. Every test object includes: - - **Title**: A unique, descriptive title for the test object, differentiating + - **Title** (`title` attribute): A unique, descriptive title for the test object, differentiating it from others in the same test case. 
- **ViewDefinition** (`view` attribute): Specifies the ViewDefinition being tested. This attribute outlines the expected data view or transformation @@ -94,7 +102,11 @@ Below is an abstract representation of what a test case file might look like: To ensure comprehensive validation and interoperability, it is recommended for implementers to integrate the test suite contained in this repository directly into their projects. This can be achieved efficiently by adding this repository -as a git submodule to your project. Furthermore, implementers are advised to +as a git submodule to your project. + +[TODO]: # (provide instructions on how to link this repo as a submodule in a
collapse) + +Furthermore, implementers are advised to develop a test runner based on the following guidelines to execute the test cases and generate a test report. This process is essential for verifying the implementation against the specified test cases. @@ -117,16 +129,16 @@ runner: - Execute each test: - For every test object within the tests array of a testcase, evaluate the view against the loaded fixtures by calling a function like - evaluate(test.view, testcase.resources). + `evaluate(test.view, testcase.resources)`. - Compare the result of the evaluation with the expected results specified in the `expect` attribute of the test object. ### Generating the Test Report -The test runner should produce a test_report.json +The test runner should produce a `test_report.json` file containing the results of the test executions. The structure of the test report should mirror that of the original test cases, with an additional -attribute result added to each test object. This attribute should contain the +attribute `result` added to each test object. This attribute should contain the set of rows returned by the implementation when evaluating the test. Ensure the result accurately reflects the output of your implementation for each test, facilitating a straightforward comparison between expected and actual outcomes. ### Reporting Your Test Results After running the test suite and generating a `test_report.json` file with the -outcomes of your implementation's test runs, the next step is to make these +outcomes of your implementation's test runs, the next step is to make these results accessible for review and validation. Publishing your test report to a publicly accessible HTTP server enables broader visibility and verification of -your implementation's compliance with the specifications. This guide outlines +your implementation's compliance with the specifications. 
This guide outlines the process of publishing your test report and registering your implementation. ## Publishing the Test Report @@ -233,7 +245,7 @@ discovery and comparison. - If you're working on a fork or a branch, submit a pull request to the main repository to merge your changes. By following these steps, you'll not only make your test results publicly -available but also contribute to a collective resource that benefits the entire +available, you'll also contribute to a collective resource that benefits the entire FHIR implementation community. Your participation helps in demonstrating interoperability and compliance with the specifications, fostering trust and collaboration among developers and organizations. From 2f68ba3ef72e5662fed1564382fe1ebc24068ac2 Mon Sep 17 00:00:00 2001 From: Carl Anderson Date: Mon, 26 Feb 2024 17:31:26 +0000 Subject: [PATCH 2/7] added a link reference for ViewDefinition --- README.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index f88da10..417044f 100644 --- a/README.md +++ b/README.md @@ -4,6 +4,9 @@ This project provides the source for the SQL on FHIR v2.0 Implementation Guide [**Read the specification →**](https://build.fhir.org/ig/FHIR/sql-on-fhir-v2/) +[//]: # (Links used in this document) +[ViewDefinition]: https://build.fhir.org/ig/FHIR/sql-on-fhir-v2/StructureDefinition-ViewDefinition.html "ViewDefinition" ## Content Content as markdown is now found in [input/pagecontent](input/pagecontent). @@ -35,8 +38,7 @@ which are a set of test case files, each covering one aspect of the implementation. A test case is represented as a JSON document with `title` and `description` attributes, a set of fixtures (FHIR resources) as the `resources` attribute, and an array of test objects. -[TODO]: # (link to the ViewDefinition resource here.) 
-A test object has a unique `title`, a ViewDefinition as the `view` attribute, and an expected set of resulting +A test object has a unique `title`, a [ViewDefinition][] as the `view` attribute, and an expected set of resulting rows in the `expect` attribute. ## Tests Overview @@ -58,7 +60,7 @@ components of a test case file are: scenario within the case. Every test object includes: - **Title** (`title` attribute): A unique, descriptive title for the test object, differentiating it from others in the same test case. - - **ViewDefinition** (`view` attribute): Specifies the ViewDefinition being + - **ViewDefinition** (`view` attribute): Specifies the [ViewDefinition][] being tested. This attribute outlines the expected data view or transformation applied to the input fixtures. - **Expected Result** (`expect` attribute): An array of rows that represent From 5ed166bf8098af288df44772f9588d19135696e5 Mon Sep 17 00:00:00 2001 From: Carl Anderson Date: Mon, 26 Feb 2024 17:41:10 +0000 Subject: [PATCH 3/7] Indented the http-server instructions and reworded --- README.md | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index 417044f..c842e71 100644 --- a/README.md +++ b/README.md @@ -22,12 +22,17 @@ This is a Sushi project and can use HL7 IG Publisher to build locally: 1. Install `sushi` if you don't have it already with: `npm install -g fsh-sushi` 1. Run `./scripts/_genonce.sh` to generate the IG 1. Run `open output/index.html` to view the IG website - 1. Alternatively, you may run a local `http-server` to view the built content: -``` -npm i http-server -cd output -http-server -```
Instructions for viewing the IG in a local http-server... + + ```sh + npm i -g http-server + cd output + http-server -o # Serves the content and opens it in a browser tab. + ``` + +
+ Building tests, see [test README](tests/README.md) From 77f3bb4c94c8d7f7b9a5f5d37fa5c32f4c1769c6 Mon Sep 17 00:00:00 2001 From: Carl Anderson Date: Mon, 26 Feb 2024 17:50:19 +0000 Subject: [PATCH 4/7] nit: typo - pt1 should be "pt-1" --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index c842e71..d32b5f3 100644 --- a/README.md +++ b/README.md @@ -81,7 +81,7 @@ Below is an abstract representation of what a test case file might look like: description: '...', // fixtures resources: [ - {resourceType: 'Patient', id: 'pt1'}, + {resourceType: 'Patient', id: 'pt-1'}, {resourceType: 'Patient', id: 'pt-2'} ], tests: [ From 6d94b4387cb84ac331839a0a2485adbab3ae2786 Mon Sep 17 00:00:00 2001 From: Carl Anderson Date: Mon, 26 Feb 2024 17:54:19 +0000 Subject: [PATCH 5/7] fix: prevent GH rendering comments as ugly red lines --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index d32b5f3..2579dbd 100644 --- a/README.md +++ b/README.md @@ -151,7 +151,7 @@ the result accurately reflects the output of your implementation for each test, facilitating a straightforward comparison between expected and actual outcomes. -```json +```js //example test_report.json { "title": "Example Test Case", From fbb34b957cb9f2cb3be15981920643e889816181 Mon Sep 17 00:00:00 2001 From: Carl Anderson Date: Mon, 26 Feb 2024 17:55:41 +0000 Subject: [PATCH 6/7] nit: unmatched bracket --- README.md | 1 - 1 file changed, 1 deletion(-) diff --git a/README.md b/README.md index 2579dbd..a2e540e 100644 --- a/README.md +++ b/README.md @@ -176,7 +176,6 @@ outcomes. 
} ] } -] ``` ### Reporting Your Test Results From 8b0538b298ace603b5caadcc5ae8122faba45401 Mon Sep 17 00:00:00 2001 From: Carl Anderson Date: Mon, 26 Feb 2024 17:57:40 +0000 Subject: [PATCH 7/7] fix: broken rendering for this block in GH --- README.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index a2e540e..17a80fb 100644 --- a/README.md +++ b/README.md @@ -237,12 +237,12 @@ discovery and comparison. - Clone or fork the repository containing the `implementations.json` if necessary. - Add an entry for your implementation in the format: ```json - { - "name": "YourImplName", - "description": "", - "url": "", - "testResultsUrl": "" - }, + { + "name": "YourImplName", + "description": "", + "url": "", + "testResultsUrl": "" + }, ``` - Ensure that the URL is directly accessible and points to the latest version of your test report.