diff --git a/.gitignore b/.gitignore
index ea9c809..b278a8a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -8,6 +8,13 @@ node_modules
/template.tests
/template
+##############################
+## sushi
+##############################
+_gencontinuous.*
+_genonce.*
+_updatePublisher.*
+
##############################
## IntelliJ
##############################
diff --git a/README.md b/README.md
index fee7b20..17a80fb 100644
--- a/README.md
+++ b/README.md
@@ -4,6 +4,9 @@ This project provides the source for the SQL on FHIR v2.0 Implementation Guide
[**Read the specification →**](https://build.fhir.org/ig/FHIR/sql-on-fhir-v2/)
+[//]: # (Links used in this document)
+[ViewDefinition]: https://build.fhir.org/ig/FHIR/sql-on-fhir-v2/StructureDefinition-ViewDefinition.html "ViewDefinition"
+
## Content
Content as markdown is now found in [input/pagecontent](input/pagecontent).
@@ -15,21 +18,33 @@ including configuration for the menu.
This is a Sushi project and can use HL7 IG Publisher to build locally:
1. Clone this respository
- 2. Run `./scripts/_updatePublisher.sh` to get the latest IG publisher
- 3. Run `./scripts/_genonce.sh` to generate the IG
- 4. Run `open output/index.html` to view the IG website
+ 1. Run `./scripts/_updatePublisher.sh` to get the latest IG publisher
+ 1. Install `sushi` if you don't have it already with: `npm install -g fsh-sushi`
+ 1. Run `./scripts/_genonce.sh` to generate the IG
+ 1. Run `open output/index.html` to view the IG website
+
+ Instructions for viewing the IG with a local HTTP server:
+
+ ```sh
+ npm i -g http-server
+ cd output
+ http-server -o # Serve the content and open it in a browser tab.
+ ```
+
+
+ To build the tests, see the [test README](tests/README.md)
## Testing Implementation
-Specification contains set of tests in `/tests` directory.
-Tests are set of test case files, each case covers one aspect of implementation.
-Test case is represented as JSON document. It has title and description attributes,
-set of fixtures (FHIR resources) as `resources` attribute, and array of test objects.
+This specification contains a set of tests in the `/tests` directory,
+organized as test case files, each covering one aspect of the implementation.
+A test case is represented as a JSON document with `title` and `description` attributes,
+a set of fixtures (FHIR resources) as the `resources` attribute, and an array of test objects.
-Test object has unique `title`, ViewDefinition at `view` attribute, and expected set of resulting
-rows in `expect` attribute.
+A test object has a unique `title`, a [ViewDefinition][] as the `view` attribute, and an expected set of resulting
+rows in the `expect` attribute.
## Tests Overview
@@ -38,19 +53,19 @@ directory. Each test case file is structured to include a combination of
attributes that define the scope and expectations of the test. The main
components of a test case file are:
-- **Title**: A brief, descriptive title that summarizes the aspect of the
+- **Title** (`title` attribute): A brief, descriptive title that summarizes the aspect of the
implementation being tested.
-- **Description**: A detailed explanation of what the test case aims to
+- **Description** (`description` attribute): A detailed explanation of what the test case aims to
validate, including any relevant context or specifications that the test is
based on.
- **Fixtures** (`resources` attribute): A set of FHIR resources that serve as
input data for the test. These fixtures are essential for setting up the test
environment and conditions.
-- **Test Objects**: An array of objects, each representing a unique test +- **Test Objects** (`tests` attribute): An array of objects, each representing a unique test scenario within the case. Every test object includes: - - **Title**: A unique, descriptive title for the test object, differentiating + - **Title** (`title` attribute): A unique, descriptive title for the test object, differentiating it from others in the same test case. - - **ViewDefinition** (`view` attribute): Specifies the ViewDefinition being + - **ViewDefinition** (`view` attribute): Specifies the [ViewDefinition][] being tested. This attribute outlines the expected data view or transformation applied to the input fixtures. - **Expected Result** (`expect` attribute): An array of rows that represent @@ -66,7 +81,7 @@ Below is an abstract representation of what a test case file might look like: description: '...', // fixtures resources: [ - {resourceType: 'Patient', id: 'pt1'}, + {resourceType: 'Patient', id: 'pt-1'}, {resourceType: 'Patient', id: 'pt-2'} ] tests: [ @@ -94,7 +109,11 @@ Below is an abstract representation of what a test case file might look like: To ensure comprehensive validation and interoperability, it is recommended for implementers to integrate the test suite contained in this repository directly into their projects. This can be achieved efficiently by adding this repository -as a git submodule to your project. Furthermore, implementers are advised to +as a git submodule to your project. + +[TODO]: # (provide instructions on how to link this repo as a submodule in a
collapse)
+
+Furthermore, implementers are advised to
develop a test runner based on the following guidelines to execute the test
cases and generate a test report. This process is essential for verifying the
implementation against the specified test cases.
@@ -117,22 +136,22 @@ runner:
- Execute each test:
  - For every test object within the tests array of a testcase, evaluate the
    view against the loaded fixtures by calling a function like
-    evaluate(test.view, testcase.resources).
+    `evaluate(test.view, testcase.resources)`.
  - Compare the result of the evaluation with the expected results specified
    in the `expect` attribute of the test object.

### Generating the Test Report

-The test runner should produce a test_report.json
+The test runner should produce a `test_report.json`
file containing the results of the test executions. The structure of the
test report should mirror that of the original test cases, with an additional
-attribute result added to each test object. This attribute should contain the
+attribute `result` added to each test object. This attribute should contain the
set of rows returned by the implementation when evaluating the test. Ensure
the result accurately reflects the output of your implementation for each
test, facilitating a straightforward comparison between expected and actual
outcomes.

-```json
+```js
//example test_report.json
{
  "title": "Example Test Case",
@@ -157,16 +176,15 @@ outcomes.
      }
    ]
  }
-]
```

### Reporting Your Test Results

After running the test suite and generating a `test_report.json` file with the
-outcomes of your implementation's test runs, the next step is to make these
+outcomes of your implementation's test runs, the next step is to make these
results accessible for review and validation. Publishing your test report to a
publicly accessible HTTP server enables broader visibility and verification of
-your implementation's compliance with the specifications. 
This guide outlines
+your implementation's compliance with the specifications. This guide outlines
the process of publishing your test report and registering your implementation.

## Publishing the Test Report

@@ -219,12 +237,12 @@ discovery and comparison.
- Clone or fork the repository containing the `implementations.json` if necessary.
- Add an entry for your implementation in the format:
```json
-  {
-    "name": "YourImplName",
-    "description": "",
-    "url": "",
-    "testResultsUrl": ""
-  },
+    {
+      "name": "YourImplName",
+      "description": "",
+      "url": "",
+      "testResultsUrl": ""
+    },
```
- Ensure that the URL is directly accessible and points to the latest version
  of your test report.
@@ -233,7 +251,7 @@ discovery and comparison.
- If you're working on a fork or a branch, submit a pull request to the main
  repository to merge your changes.
By following these steps, you'll not only make your test results publicly
-available but also contribute to a collective resource that benefits the entire
+available, but you'll also contribute to a collective resource that benefits the entire
FHIR implementation community. Your participation helps in demonstrating
interoperability and compliance with the specifications, fostering trust and
collaboration among developers and organizations.