From 79bc8b4e829764109af5e5bd9e6f4a7655802752 Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Mon, 9 Sep 2024 11:32:40 -0400 Subject: [PATCH 01/22] Adding stub files for vector indexes --- docs/source/index.rst | 6 ++++-- docs/source/indexes/bruteforce.rst | 24 +++++++++++++++++++++++ docs/source/indexes/cagra.rst | 31 ++++++++++++++++++++++++++++++ docs/source/indexes/indexes.rst | 3 +++ docs/source/indexes/ivfflat.rst | 28 +++++++++++++++++++++++++++ docs/source/indexes/ivfpq.rst | 25 ++++++++++++++++++++++++ 6 files changed, 115 insertions(+), 2 deletions(-) create mode 100644 docs/source/indexes/bruteforce.rst create mode 100644 docs/source/indexes/cagra.rst create mode 100644 docs/source/indexes/indexes.rst create mode 100644 docs/source/indexes/ivfflat.rst create mode 100644 docs/source/indexes/ivfpq.rst diff --git a/docs/source/index.rst b/docs/source/index.rst index 88f361243..86ccf8c45 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -6,7 +6,8 @@ Useful Resources .. _cuvs_reference: https://docs.rapids.ai/api/cuvs/stable/ -- `Example Notebooks `_: Self-contained Code examples +- `Example Notebooks `_: Example notebooks +- `Code Examples `_: Self-contained code examples - `RAPIDS Community `_: Get help, contribute, and collaborate. - `GitHub repository `_: Download the cuVS source code. - `Issue tracker `_: Report issues or request features. @@ -22,7 +23,8 @@ cuVS is a library for vector search and clustering on the GPU. :caption: Contents: getting_started.rst - build.md + indexes/indexes.rst + build.rst integrations.rst api_docs.rst contributing.md diff --git a/docs/source/indexes/bruteforce.rst b/docs/source/indexes/bruteforce.rst new file mode 100644 index 000000000..23aa5a10d --- /dev/null +++ b/docs/source/indexes/bruteforce.rst @@ -0,0 +1,24 @@ +Brute-force +=========== + +Brute-force, or flat index, is the most simple index type, as it ultimately boils down to an exhaustive matrix multiplication. + +While it scales with O(N^2*D), brute-force can be a great choice when + +1. exact nearest neighbors are required, and +2. when the number of vectors is relatively small (a few thousand to one million) + + +[ C API | C++ API | Python API | Rust API ] + +Configuration parameters +------------------------ + + + +Tuning Considerations +--------------------- + +Memory footprint +---------------- + diff --git a/docs/source/indexes/cagra.rst b/docs/source/indexes/cagra.rst new file mode 100644 index 000000000..2c1c42f2f --- /dev/null +++ b/docs/source/indexes/cagra.rst @@ -0,0 +1,31 @@ +CAGRA +===== + +CAGRA is a graph-based index that is based loosely on the navigable small-world graph (NSG) algorithm, but which has been +built from the ground-up specifically for the GPU. CAGRA constructs a flat graph representation by first building a kNN graph +of the training points and then removing redundant paths between neighbors. + +The CAGRA algorithm has two basic steps- +1. Construct a kNN graph +2. Prune redundant routes from the kNN graph. + +Brute-force could be used to construct the initial kNN graph. This would yield the most accurate graph but would be very slow and +we find that in practice the kNN graph does not need to be very accurate since the pruning step helps to boost the overall recall of +the index. cuVS provides IVF-PQ and NN-Descent strategies for building the initial kNN graph and these can be selected in index + params object during index construction. 
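+
+As a brief, illustrative sketch, that selection might look as follows from Python. This assumes the `cuvs.neighbors.cagra` module and a randomly generated dataset; exact parameter and function names can differ between releases, so treat it as a sketch rather than a reference and consult the Python API documentation.
+
+.. code-block:: python
+
+    import numpy as np
+    from cuvs.neighbors import cagra
+
+    # Hypothetical dataset: 100,000 vectors of 96 dimensions (float32).
+    dataset = np.random.random_sample((100_000, 96)).astype(np.float32)
+
+    # The strategy for building the initial kNN graph is chosen through the
+    # index (build) parameters.
+    build_params = cagra.IndexParams(
+        graph_degree=32,
+        intermediate_graph_degree=64,
+        build_algo="nn_descent",  # or "ivf_pq"
+    )
+
+    index = cagra.build(build_params, dataset)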
+ +Interoperability with HNSW +-------------------------- + +[ C API | C++ API | Python API | Rust API ] + + +Configuration parameters +------------------------ + +Tuning Considerations +--------------------- + +Memory footprint +---------------- + diff --git a/docs/source/indexes/indexes.rst b/docs/source/indexes/indexes.rst new file mode 100644 index 000000000..62747db11 --- /dev/null +++ b/docs/source/indexes/indexes.rst @@ -0,0 +1,3 @@ +Nearest Neighbor Indexes +======================== + diff --git a/docs/source/indexes/ivfflat.rst b/docs/source/indexes/ivfflat.rst new file mode 100644 index 000000000..9cf1af519 --- /dev/null +++ b/docs/source/indexes/ivfflat.rst @@ -0,0 +1,28 @@ +IVF-Flat +======== + +IVF-Flat is an inverted file index (IVF) algorithm, which in the context of nearest neighbors means that data points are +partitioned into clusters. At search time, brute-force is performed only in a (user-defined) subset of the closest clusters. +In practice, this algorithm can search the index much faster than brute-force and oftem still maintain an acceptable +recall, though this comes with the drawback that the index itself copies the original training vectors into a memory layout +that is optimized for fast memory reads and adds some additional memory storage overheads. Once the index is trained, +this algorithm no longer requires the original raw training vectors. + +IVF-Flat tends to be a great choice when + +1. like brute-force, there is enough device memory available to fit all of the vectors +in the index, and +2. exact recall is not needed. as with the other index types, the tuning parameters are used to trade-off recall for search latency / throughput. + +[ C API | C++ API | Python API | Rust API ] + +Configuration parameters +------------------------ + + +Tuning Considerations +--------------------- + +Memory footprint +---------------- + diff --git a/docs/source/indexes/ivfpq.rst b/docs/source/indexes/ivfpq.rst new file mode 100644 index 000000000..143b57f6f --- /dev/null +++ b/docs/source/indexes/ivfpq.rst @@ -0,0 +1,25 @@ +IVF-PQ +====== + +IVF-PQ is an inverted file index (IVF) algorithm, which is an extension to the IVF-Flat algorithm (e.g. data points are first +partitioned into clusters) where product quantization is performed within each cluster in order to shrink the memory footprint +of the index. Product quantization is a lossy compression method and it is capable of storing larger number of vectors +on the GPU by offloading the original vectors to main memory, however higher compression levels often lead to reduced recall. +Often a strategy called refinement reranking is employed to make up for the lost recall by querying the IVF-PQ index for a larger +`k` than desired and performing a reordering and reduction to `k` based on the distances from the unquantized vectors. Unfortunately, +this does mean that the unquantized raw vectors need to be available and often this can be done efficiently using multiple CPU threads. + +[ C API | C++ API | Python API | Rust API ] + + +Configuration parameters +------------------------ + + + +Tuning Considerations +--------------------- + +Memory footprint +---------------- + From 0db2eaef10f6aadad3928dd3fe845b1ade40b62d Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Mon, 23 Sep 2024 11:46:01 -0400 Subject: [PATCH 02/22] Checking inx! 
--- docs/source/indexes/bruteforce.rst | 47 +++++++++++++++- docs/source/indexes/cagra.rst | 88 +++++++++++++++++++++++++++--- docs/source/indexes/indexes.rst | 16 ++++++ docs/source/indexes/ivfflat.rst | 51 ++++++++++++++++- 4 files changed, 192 insertions(+), 10 deletions(-) diff --git a/docs/source/indexes/bruteforce.rst b/docs/source/indexes/bruteforce.rst index 23aa5a10d..b43e532cb 100644 --- a/docs/source/indexes/bruteforce.rst +++ b/docs/source/indexes/bruteforce.rst @@ -6,19 +6,62 @@ Brute-force, or flat index, is the most simple index type, as it ultimately boil While it scales with O(N^2*D), brute-force can be a great choice when 1. exact nearest neighbors are required, and -2. when the number of vectors is relatively small (a few thousand to one million) +2. when the number of vectors is relatively small (a few thousand to a few million) +Brute-force can also be a good choice for heavily filtered queries where other algorithms might struggle returning the expected results. For example, +when filtering out 90%-95% of the vectors from a search, the IVF methods could struggle to return anything at all with smaller number of probes and +graph-based algorithms with limited hash table memory could end up skipping over important unfiltered entries. -[ C API | C++ API | Python API | Rust API ] + +[ `C API <../c_api.rst>` | `C++ API <../cpp_api.rst` | `Python API <../python_api.rst` | `Rust API <../rust_api/index.rst` ] + +Filtering considerations +------------------------ + +Because it is exhaustive, brute-force can quickly become the slowest, albeit most accurate form of search. However, even +when the number of vectors in an index are very large, brute-force can still be used to search vectors efficiently with a filter. + +This is especially true for cases where the filter is excluding 90%-99% of the vectors in the index where the partitioning + inherent in other approximate algorithms would simply not include expected vectors in the results. In the case of pre-filtered + brute-force, the computation is inverted so distances are only computed between vectors that pass the filter, significantly reducing + the amount of computation required. Configuration parameters ------------------------ +Build parameters +~~~~~~~~~~~~~~~~ + +None + +Search Parameters +~~~~~~~~~~~~~~~~~ + +None Tuning Considerations --------------------- +Brute-force is exact but that doesn't always mean it's deterministic. For example, when there are many nearest neighbors with +the same distances it's possible they might be ordered differently across different runs. This especially becomes apparent in +cases where there are points with the same distance right near the cutoff of `k`, which can cause the final list of neighbors +to differ from ground truth. This is not often a problem in practice and can usually be mitigated by increasing `k`. + + Memory footprint ---------------- +`precision` is the number of bytes in each element of each vector (e.g. 
32-bit = 4-bytes)
+
+
+Index footprint
+~~~~~~~~~~~~~~~
+
+Raw vectors: :math:`n\_vectors * n\_dimensions * precision`
+Vector norms (for distances which require them): :math:`n\_vectors * precision`
+
+Search footprint
+~~~~~~~~~~~~~~~~
+
+TBD
\ No newline at end of file
diff --git a/docs/source/indexes/cagra.rst b/docs/source/indexes/cagra.rst
index 2c1c42f2f..2e6f5ed9f 100644
--- a/docs/source/indexes/cagra.rst
+++ b/docs/source/indexes/cagra.rst
@@ -1,31 +1,105 @@
 CAGRA
 =====
 
-CAGRA is a graph-based index that is based loosely on the navigable small-world graph (NSG) algorithm, but which has been
+CAGRA, or (C)UDA (A)NN (GRA)ph-based, is a graph-based index that is based loosely on the popular navigable small-world graph (NSG) algorithm, but which has been
 built from the ground-up specifically for the GPU. CAGRA constructs a flat graph representation by first building a kNN graph
 of the training points and then removing redundant paths between neighbors.
 
 The CAGRA algorithm has two basic steps-
-1. Construct a kNN graph
-2. Prune redundant routes from the kNN graph.
+#. Construct a kNN graph
+#. Prune redundant routes from the kNN graph.
 
-Brute-force could be used to construct the initial kNN graph. This would yield the most accurate graph but would be very slow and
+Brute-force could be used to construct the initial kNN graph. This would yield the most accurate graph but would be very slow and
 we find that in practice the kNN graph does not need to be very accurate since the pruning step helps to boost the overall recall of
-the index. cuVS provides IVF-PQ and NN-Descent strategies for building the initial kNN graph and these can be selected in index
- params object during index construction. 
+the index. cuVS provides IVF-PQ and NN-Descent strategies for building the initial kNN graph and these can be selected in the index params object during index construction.
 
 Interoperability with HNSW
 --------------------------
 
-[ C API | C++ API | Python API | Rust API ]
+cuVS provides the capability to convert a CAGRA graph to an HNSW graph, which enables the GPU to be used only for building the index
+while the CPU can be leveraged for search.
+
+.. TODO: Add code example for this conversion
+
+[ :ref:`C API <../c_api.rst>` | :ref:`C++ API <../cpp_api.rst>` | :ref:`Python API <../python_api.rst>` | :ref:`Rust API <../rust_api/index.rst>` ]
+
+Filtering considerations
+------------------------
+
+CAGRA supports filtered search, which can work well for moderately small filters, such as filtering out only a small percentage of the vectors in the index (e.g. <<50%).
+
+When a filter is expected to remove 80%-99% of the vectors in the index, it is preferred to use brute-force with pre-filtering instead, as that will compute only those distances
+between the vectors not being filtered out. By default, CAGRA will pass the filter to the pre-filtered brute-force when the number of vectors being filtered out is >90% of the vectors in the index.
 
 Configuration parameters
 ------------------------
 
+Build parameters
+~~~~~~~~~~~~~~~~
+
+.. list-table::
+   :widths: 25 25 50
+   :header-rows: 1
+
+   * - Name
+     - Default
+     - Description
+   * - compression
+     - None
+     - For large datasets, the raw vectors can be compressed using product quantization so they can be placed on device. This comes at the cost of lowering recall, though a refinement reranking step can be used to make up the lost recall after search. 
+ * - graph_build_algo + - 'IVF_PQ' + - The graph build algorithm to use for building + * - graph_build_params + - None + - Specify explicit build parameters for the corresponding graph build algorithms + * - graph_degree + - 32 + - The degree of the final CAGRA graph. All vertices in the graph will have this degree. During search, a larger graph degree allows for more exploration of the search space and improves recall but at the expense of searching more vertices. + * - intermediate_graph_degree + - 64 + - The degree of the initial knn graph before it is optimized into the final CAGRA graph. A larger value increases connectivity of the initial graph so that it performs better once pruned. Larger values come at the cost of increased device memory usage and increases the time of initial knn graph construction. + * - guarantee_connectivity + - False + - Uses a degree-constrained minimum spanning tree to guarantee the initial knn graph is connected. This can improve recall on some datasets. + * - attach_data_on_build + - True + - Should the dataset be attached to the index after the index is built? Setting this to `False` can improve memory usage and performance, for example if the graph is being serialized to disk or converted to HNSW right after building it. + +Search parameters +~~~~~~~~~~~~~~~~~ + +.. list-table:: + :widths: 25 25 50 + :header-rows: 1 + + * - Name + - Default + - Description + * - itopk_size + - 64 + - Number of intermediate search results retained during search. This value needs to be >=k. This is the main knob to tweak search performance. + * - max_iterations + - 0 + - The maximum number of iterations during search. Default is to auto-select. + * - max_queries + - 0 + - Max number of search queries to perform concurrently (batch size). Default is to auto-select. + * - team_size + - 0 + - Number of CUDA threads for calculating each distance. Can be 4, 8, 16, or 32. Default is to auto-select. + * - search_width + - 1 + - Number of vertices to select as the starting point for the search in each iteration. + * - min_iterations + - 0 + - Minimum number of search iterations to perform + Tuning Considerations --------------------- +The 3 hyper-parameters that are most often tuned are `graph_degree`, `intermediate_graph_degree`, and `itopk_size`. + Memory footprint ---------------- diff --git a/docs/source/indexes/indexes.rst b/docs/source/indexes/indexes.rst index 62747db11..a2fb1434a 100644 --- a/docs/source/indexes/indexes.rst +++ b/docs/source/indexes/indexes.rst @@ -1,3 +1,19 @@ Nearest Neighbor Indexes ======================== +.. toctree:: + :maxdepth: 3 + :caption: Contents: + + bruteforce.rst + cagra.rst + ivfflat.rst + ivfpq.rst + + +Indices and tables +================== + +* :ref:`genindex` +* :ref:`modindex` +* :ref:`search` \ No newline at end of file diff --git a/docs/source/indexes/ivfflat.rst b/docs/source/indexes/ivfflat.rst index 9cf1af519..be918027d 100644 --- a/docs/source/indexes/ivfflat.rst +++ b/docs/source/indexes/ivfflat.rst @@ -3,7 +3,7 @@ IVF-Flat IVF-Flat is an inverted file index (IVF) algorithm, which in the context of nearest neighbors means that data points are partitioned into clusters. At search time, brute-force is performed only in a (user-defined) subset of the closest clusters. 
-In practice, this algorithm can search the index much faster than brute-force and oftem still maintain an acceptable +In practice, this algorithm can search the index much faster than brute-force and often still maintain an acceptable recall, though this comes with the drawback that the index itself copies the original training vectors into a memory layout that is optimized for fast memory reads and adds some additional memory storage overheads. Once the index is trained, this algorithm no longer requires the original raw training vectors. @@ -16,13 +16,62 @@ in the index, and [ C API | C++ API | Python API | Rust API ] +Filtering considerations +------------------------ + Configuration parameters ------------------------ +Build parameters +~~~~~~~~~~~~~~~~ + +.. list-table:: + :widths: 25 25 50 + :header-rows: 1 + + * - Name + - Default + - Description + * - n_lists + - sqrt(n) + - Number of coarse clusters used to partition the index. A good heuristic for this value is sqrt(n_vectors_in_index) + * - add_data_on_build + - True + - Should the training points be added to the index after the index is built? + * - kmeans_train_iters + - 20 + - Max number of iterations for k-means training before convergence is assumed. Note that convergence could happen before this number of iterations. + * - kmeans_trainset_fraction + - 0.5 + - Fraction of points that should be subsampled from the original dataset to train the k-means clusters. Default is 1/2 the training dataset. This can often be reduced for very large datasets to improve both cluster quality and the build time. + * - adaptive_centers + - false + - Should the existing trained centroids adapt to new points that are added to the index? This provides a trade-off between improving recall at the expense of having to compute new centroids for clusters when new points are added. When points are added in large batches, the performance cost may not be noticeable. + * - conservative_memory_allocation + - false + - To support dynamic indexes, where points are expected to be added later, the individual IVF lists can be imtentionally overallocated up front to reduce the amount and impact of increasing list sizes, which requires allocating more memory and copying the old list to the new, larger, list. + + +Search parameters +~~~~~~~~~~~~~~~~~ + +.. list-table:: + :widths: 25 25 50 + :header-rows: 1 + + * - Name + - Default + - Description + * - n_probes + - 20 + - Number of closest IVF lists to scan for each query point. Tuning Considerations --------------------- +IVF methods can be + + Memory footprint ---------------- From f6731bc53aface87d3330aaf8ef9b45828ff7bb6 Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Mon, 23 Sep 2024 16:55:51 -0400 Subject: [PATCH 03/22] Adding more content --- docs/source/getting_started.rst | 8 +- docs/source/indexes/cagra.rst | 43 ++++++ docs/source/indexes/ivfflat.rst | 33 ++++- docs/source/indexes/ivfpq.rst | 109 +++++++++++++++ .../vector_databases_vs_vector_search.rst | 127 ++++++++++++++++++ 5 files changed, 317 insertions(+), 3 deletions(-) create mode 100644 docs/source/vector_databases_vs_vector_search.rst diff --git a/docs/source/getting_started.rst b/docs/source/getting_started.rst index 79b35c2d5..b474a05ac 100644 --- a/docs/source/getting_started.rst +++ b/docs/source/getting_started.rst @@ -1,7 +1,7 @@ Getting Started =============== -This guide provides an initial starting point of the basic concepts and using the various APIs in the cuVS software development kit. 
+This guide provides an initial starting point of the basic concepts and using the various APIs in the cuVS library. .. toctree:: :maxdepth: 1 @@ -9,4 +9,8 @@ This guide provides an initial starting point of the basic concepts and using th basics.rst interoperability.rst - working_with_ann_indexes.rst \ No newline at end of file + working_with_ann_indexes.rst + +Welcome to cuVS, the premier library for GPU-accelerated vector search and clustering! cuVS provides several core building blocks for constructing new algorithms, as well as end-to-end vector search and clustering algorithms for use either standalone or through a growing list of integrations. + +If you are unfamiliar with the basics of vector search or how vector search differs from vector databases, then this guide should provide some good insight. \ No newline at end of file diff --git a/docs/source/indexes/cagra.rst b/docs/source/indexes/cagra.rst index 2e6f5ed9f..15a418703 100644 --- a/docs/source/indexes/cagra.rst +++ b/docs/source/indexes/cagra.rst @@ -103,3 +103,46 @@ The 3 hyper-parameters that are most often tuned are `graph_degree`, `intermedia Memory footprint ---------------- +CAGRA builds a graph that ultimately ends up on the host while it needs to keep the original dataset around (can be on host or device). + +IVFPQ or NN-DESCENT can be used to build the graph (additions to the peak memory usage calculated as in the respective build algo above). + +Dataset on device (graph on host): +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Index memory footprint (device): :math: `n\_index\_vectors * n\_dims * sizeof(T)` +Index memory footprint (host): :math: `graph\_degree * n\_index\_vectors * sizeof(T)`` + +Dataset on host (graph on host): +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Index memory footprint (host): :math: `n\_index\_vectors * n\_dims * sizeof(T) + graph\_degree * n\_index\_vectors * sizeof(T)` + +Build peak memory usage: +~~~~~~~~~~~~~~~~~~~~~~~~ + +When built using NN-descent / IVF-PQ, the build process consists of two phases: (1) building an initial/(intermediate) graph and then (2) optimizing the graph. Key input parameters are n_vectors, intermediate_graph_degree, graph_degree. +The memory usage in the first phase (building) depends on the chosen method. The biggest allocation is the graph (n_vectors*intermediate_graph_degree), but it’s stored in the host memory. +Usually, the second phase (optimize) uses the most device memory. The peak memory usage is achieved during the pruning step (graph_core.cuh/optimize) +Optimize: formula for peak memory usage (device): :math: `n\_vectors * (4 + (sizeof(IdxT) + 1) * intermediate\_degree)`` + +Build with out-of-core IVF-PQ peak memory usage: +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Out-of-core CAGA build consists of IVF-PQ build, IVF-PQ search, CAGRA optimization. Note that these steps are performed sequentially, so they are not additive. + +IVF-PQ Build: + +.. math:: + n_vectors / train_set_ratio * dim * sizeof(float) // trainset, may be in managed mem + + n_vectors / train_set_ratio * sizeof(uint32_t) // labels, may be in managed mem + + n_clusters * n_dim * sizeof(float) // cluster centers + +IVF-PQ Search (max batch size 1024 vectors on device at a time): + +.. 
math:: + [n_vectors * (pq_dim * pq_bits / 8 + sizeof(int64_t)) + O(n_clusters)] + + [batch_size * n_dim * sizeof(float)] + [batch_size * intermediate_degree * sizeof(uint32_t)] + + [batch_size * intermediate_degree * sizeof(float)] + + diff --git a/docs/source/indexes/ivfflat.rst b/docs/source/indexes/ivfflat.rst index be918027d..e1d313950 100644 --- a/docs/source/indexes/ivfflat.rst +++ b/docs/source/indexes/ivfflat.rst @@ -69,9 +69,40 @@ Search parameters Tuning Considerations --------------------- -IVF methods can be +Since IVF methods use clustering to establish spatial locality and partition data points into individual lists, there's an inherent +assumption that the number of lists, and thus the max size of the data in the index is known up front. For some use-cases, this +might not matter. For example, most vector databases build many smaller physical approximate nearest neighbors indexes, each from +fixed-size or maximum-sized immutable segments and so the number of lists can be tuned based on the number of vectors in the indexes. + +Empirically, we've found `sqrt(n_index_vectors)` to be a good starting point for the `n_lists` hyper-parameter. Remember, having more +lists means less points to search within each list, but it could also mean more `n_probes` are needed at search time to reach an acceptable +recall. Memory footprint ---------------- +Each cluster is padded to at least 32 vectors (but potentially up to 1024). Assuming uniform random distribution of vectors/list, we would have +:math: `cluster\_overhead = (conservative\_memory\_allocation ? 16 : 512 ) * dim * sizeof(T)` + +Note that each cluster is allocated as a separate allocation. If we use a `cuda_memory_resource`, that would grab memory in 1 MiB chunks, so on average we might have 0.5 MiB overhead per cluster. If we us 10s of thousands of clusters, it becomes essential to use pool allocator to avoid this overhead. + +:math: `cluster\_overhead = 0.5 MiB` // if we do not use pool allocator + + +Index (device memory): +~~~~~~~~~~~~~~~~~~~~~~ + +.. math:: + + n\_vectors * n\_dimensions * sizeof(T) + // interleaved form + n\_vectors * sizeof(int_type) + // list indices + n\_clusters * n\_dimensions * sizeof(T) + // cluster centers + n\_clusters * cluster\_overhead` + + +Peak device memory usage for index build: +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +:math: `workspace = min(1GB, n\_queries * [(n\_lists + 1 + n\_probes*(k+1))*sizeof(T) + n\_probes*k*sizeof(Idx)])` +:math: `index\_size + workspace` + diff --git a/docs/source/indexes/ivfpq.rst b/docs/source/indexes/ivfpq.rst index 143b57f6f..112607aec 100644 --- a/docs/source/indexes/ivfpq.rst +++ b/docs/source/indexes/ivfpq.rst @@ -15,11 +15,120 @@ this does mean that the unquantized raw vectors need to be available and often t Configuration parameters ------------------------ +Build parameters +~~~~~~~~~~~~~~~~ +.. list-table:: + :widths: 25 25 50 + :header-rows: 1 + + * - Name + - Default + - Description + * - n_lists + - sqrt(n) + - Number of coarse clusters used to partition the index. A good heuristic for this value is sqrt(n_vectors_in_index) + * - kmeans_n_iters + - 20 + - The number of iterations when searching for k-means centers + * - kmeans_trainset_fraction + - 0.5 + - The fraction of training data to use for iterative k-means building + - pq_bits + - 8 + - The bit length of each vector element after compressing with PQ. Possible values are any integer between 4 and 8. + * - pq_dim + - 0 + - The dimensionality of each vector after compressing with PQ. 
When 0, the dim is set heuristically.
+   * - codebook_kind
+     - per_subspace
+     - How the PQ codebooks are created. `per_subspace` trains a separate codebook for each group of sub-dimensions (shared across all clusters), while `per_cluster` trains a separate codebook for each IVF cluster (shared across all sub-dimension groups).
+   * - force_random_rotation
+     - false
+     - Apply a random rotation matrix on the input data and queries even if `dim % pq_dim == 0`
+   * - conservative_memory_allocation
+     - false
+     - To support dynamic indexes, where points are expected to be added later, the individual IVF lists can be intentionally overallocated up front to reduce the amount and impact of increasing list sizes, which requires allocating more memory and copying the old list to the new, larger, list.
+   * - add_data_on_build
+     - True
+     - Should the training points be added to the index after the index is built?
+   * - max_train_points_per_pq_code
+     - 256
+     - The max number of data points to use per PQ code during PQ codebook training.
+
+
+Search parameters
+~~~~~~~~~~~~~~~~~
+
+.. list-table::
+   :widths: 25 25 50
+   :header-rows: 1
+
+   * - Name
+     - Default
+     - Description
+   * - n_probes
+     - 20
+     - Number of closest IVF lists to scan for each query point.
+   * - lut_dtype
+     - cuda_r_32f
+     - Datatype to store the pq lookup tables. Can also use cuda_r_16f for half-precision and cuda_r_8u for 8-bit precision. Smaller lookup tables can fit into shared memory and significantly improve search times.
+   * - internal_distance_dtype
+     - cuda_r_32f
+     - Storage data type for distance/similarity computed at search time. Can also use cuda_r_16f for half-precision.
+   * - preferred_smem_carveout
+     - 1.0
+     - Preferred fraction of SM's unified memory / L1 cache to be used as shared memory. Default is 100%.
 
 Tuning Considerations
 ---------------------
 
+IVF-PQ has similar tuning considerations to IVF-Flat, though the PQ compression ratio adds an additional variable to trade off index size against search quality.
+
+It's important to note that IVF-PQ becomes very lossy very quickly, and so refinement reranking is often needed to get a reasonable recall. This step usually consists of searching initially for more neighbors than needed and then reducing the resulting neighborhoods down to `k` by computing exact distances. This step can be performed efficiently on CPU or GPU and generally has only a marginal impact on search latency.
+
 Memory footprint
 ----------------
 
+Index (device memory):
+~~~~~~~~~~~~~~~~~~~~~~
+
+Simple approximate formula: :math:`n\_vectors * (pq\_dim * pq\_bits / 8 + sizeof(IdxT)) + O(n\_clusters)`
+
+The IVF lists end up being represented by a sparse data structure that stores the pointers to each list, an indices array that contains the indexes of each vector in each list, and an array with the encoded (and interleaved) data for each list.
+
+IVF list pointers: :math:`n\_clusters * sizeof(uint32_t)`
+Indices: :math:`n\_vectors * sizeof(IdxT)`
+Encoded data (interleaved): :math:`n\_vectors * pq\_dim * pq\_bits / 8`
+Codebooks:
+.. 
math:: + 4 * pq_dim * pq_len * 2^pq_bits // per-subspace (default) + 4 * n_clusters * pq_len * 2^pq_bits // per-cluster + +Extras: :math: `n\_clusters * (20 + 8 * dim)` + +Index (host memory): +~~~~~~~~~~~~~~~~~~~~ + +When refinement is used with the dataset on host, the original raw vectors are needed: :math: `n\_vectors * n\_dims * sizeof(T)` + +Search peak memory usage (device); +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Total usage: :math: `Index + Queries + Output indices + Output distances + workspace` +Workspace size is not trivial, a heuristic controls the batch size to make sure the workspace fits the resource::get_workspace_free_bytes(res). + +Build peak memory usage (device): +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. math:: + n_vectors / trainset_ratio * dim * sizeof(float) // trainset, may be in managed mem + + n_vectors / trainset_ratio * sizeof(uint32_t) // labels, may be in managed mem + + n_clusters * dim * sizeof(float) // cluster centers + +Note, if there’s not enough space left in the workspace memory resource, IVF-PQ build automatically switches to the managed memory for the training set and labels. + + + + + diff --git a/docs/source/vector_databases_vs_vector_search.rst b/docs/source/vector_databases_vs_vector_search.rst new file mode 100644 index 000000000..8ba3c29e0 --- /dev/null +++ b/docs/source/vector_databases_vs_vector_search.rst @@ -0,0 +1,127 @@ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +A brief primer on vector databases and how they relate to vector search +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +One of the primary differences between vector database indexes and traditional database indexes is that vector search often uses approximations to trade-off accuracy of the results for speed. Because of this, while many mature databases offer mechanisms to tune their indexes and achieve better performance, vector database indexes can return completely garbage results if they aren’t tuned for a reasonable level of search quality in addition to performance tuning. This is because vector database indexes are more closely related to machine learning models than they are to traditional database indexes. + +Of course, if the number of vectors is very small, such as less than 100 thousand vectors, it could be fast enough to use a brute-force (also known as a flat index), which exhaustively searches all possible neighbors. +Objectives + +This primer addresses the challenge of configuring vector database indexes, but its primary goal is to get a user up and running quickly with acceptable enough results for a good choice of index type and a small and manageable tuning knob, rather than providing a comprehensive guide to tuning each and every hyper-parameter. + +For this reason, we focus on 4 primary data sizes: +#. Tiny datasets (< 100 thousand vectors) +#. Small datasets where GPU might not be needed (< 1 million vectors) +#. Large datasets (> 1 million vectors), goal is fast index creation at the expense of search quality +#. High quality at the expense of fast index creation + +Like other machine learning algorithms, vector search indexes generally have a training step – which means building the index – and an inference – or search step. The hyperparameters also tend to be broken down into build and search parameters. + +While not always the case, a general trend is often observed where the search speed decreases as the quality increases. 
This also tends to be the case with the index build performance, though different algorithms have different relationships between build time, quality, and search time. It’s important to understand that there’s no free lunch so there will always be trade-offs for each index type. + +Definition of quality +===================== + +What do we mean when we say quality of an index? In machine learning terminology, we measure this using recall, which is sometimes used interchangeably to mean accuracy, even though the two are slightly different measures. Recall, when used in vector search, essentially means “out of all of my results, which results would have been included in the exact results?” In vector search, the objective is to find some number of vectors that are closest to a given query vector so recall tends to be more relaxed than accuracy, discriminating only on set inclusion, rather than on exact ordered list matching, which would be closer to an accuracy measure. + + +Differences between vector databases and vector search +====================================================== + +As mentioned above, vector search in and of itself refers to the objective of finding the closest vectors in an index around a given set of query vectors. At the lowest level, vector search indexes are just machine learning models, which have a build, search, and recall performance that can be traded off, depending on the algorithm and various hyperparameters. + +Vector search indexes alone are considered primitives that enable, but are not considered by themselves, a fully-fledged vector database. Vector databases provide more production-level features that often use vector search algorithms in concert with other popular database design techniques to add important capabilities like durability, fault tolerance, vertical scalability, partition tolerance, and horizontal scalability. + +In the world of vector databases, there are special purpose-built databases that focus primarily on vector search but might also provide some small capability of more general-purpose databases, like being able to perform a hybrid search across both vectors and metadata. Many general-purpose databases, both relational and nosql / document databases for example, are beginning to add first-class vector types also. + +So what does all this mean to you? Sometimes a simple standalone vector search index is enough. Usually they can be trained and serialized to a file for later use, and often provide a capability to filter out specific vectors during search. Sometimes they even provide a mechanism to scale up to utilize multiple GPUs, for example, but they generally stop there- and suggest either using your own distributed system (like Spark or Dask) or a fully-fledged vector database to scale out. + +FAISS and cuVS are examples of standalone vector search libraries, which again are more closely related to machine learning libraries than to fully-fledged databases. Milvus is an example of a special-purpose vector database and Elastic, MongoDB, and OpenSearch are examples of general-purpose databases that have added vector search capabilities. + +How is vector search used by vector databases? +============================================== + +Within the context of vector databases, there are two primary ways in which vector search indexes are used and it’s important to understand which you are working with because it can have an effect on the behavior of the parameters with respect to the data. 
+ +Many vector search algorithms improve scalability while reducing the number of distances by partitioning the vector space into smaller pieces, often through the use of clustering, hashing, trees, and other techniques. Another popular technique is to reduce the width or dimensionality of the space in order to decrease the cost of computing each distance. In contrast, databases often partition the data, but may only do so to improve things like io performance, partition tolerance, or scale, without regards to the underlying data distributions which are ultimately going to be used for vector search. + +This leads us to two core architectural designs that we encounter in vector databases: + +Locally partitioned vector search indexes: most databases follow this design, and vectors are often first written to a write-ahead log for durability. After some number of vectors are written, the write-ahead logs become immutable and may be merged with other write-ahead logs before eventually being converted to a new vector search index. + +The search is generally done over each locally partitioned index and the results combined. When setting hyperparameters, only the local vector search indexes need to be considered, though the same hyperparameters are going to be used across all of the local partitions. So, for example, if you’ve ingested 100M vectors but each partition only contains about 10M vectors, the size of the index only needs to consider its local 10M vectors. Details like number of vectors in the index are important, for example, when setting the number of clusters in an IVF-based (inverted file index) method, as I’ll cover below. + + +Globally partitioned vector search indexes: some special-purpose vector databases follow this design, such as Yahoo’s Vespa and Google’s Spanner. A global index is trained to partition the entire database’s vectors up front as soon as there are enough vectors to do so (usually these databases are at a large enough scale that a significant number of vectors are bootstrapped initially and so it avoids the cold start problem). Ingested vectors are first run through the global index (clustering, for example, but tree- and graph-based methods have also been used) to determine which partition they belong to and the vectors are then (sent to, and) written directly to that partition. The individual partitions can contain a graph, tree, or a simple IVF list. These types of indexes have been able to scale to hundreds of billions to trillions of vectors, and since the partitions are themselves often implicitly based on neighborhoods, rather than being based on uniformly random distributed vectors like the locally partitioned architectures, the partitions can be grouped together or intentionally separated to support localized searches or load balancing, depending upon the needs of the system. + +The challenge when setting hyperparameters for these types of indexes is that the indexes need to account for the entire set of vectors, and thus the hyperparameters of the global index generally account for all of the vectors in the database, rather than any local partition. + + +Of course, the two approaches outlined above can also be used together (e.g. training a global “coarse” index and then creating localized vector search indexes within each of the global indexes) but to my knowledge, no such architecture has implemented this pattern. 
+ +A challenge with GPUs in vector databases today is that the resulting vector indexes are expected to fit into the memory of available GPUs for fast search. That is to say, there doesn’t exist today an efficient mechanism for offloading or swapping GPU indexes so they can be cached from disk or host memory, for example. We are working on mechanisms to do this, and to also utilize technologies like GPUDirect Storage and GPUDirect RDMA to improve the IO performance further. + +Configuring localized vector search indexes +=========================================== + +Since most vector databases use localized partitioning, we’ll focus on that in this document. If global partitioning becomes more widely used, we can add more details at a later date. + +Tiny datasets (< 100 thousand vectors) +These datasets are very small and it’s questionable whether or not the GPU would provide any value at all. If the dimensionality is also relatively small (< 1024), you could just use brute-force or HNSW on the CPU and get great performance. If the dimensionality is relatively large (1536, 2048, 4096), you should consider using HNSW. If build time performance is critical, you should consider using CAGRA to build the graph and convert it to an HNSW graph for search (this capability exists today in the standalone cuVS/RAFT libraries and will soon be added to Milvus). An IVF flat index can also be a great candidate here, as it can improve the search performance over brute-force by partitioning the vector space and thus reducing the search space. + +You could even use FAISS or cuVS directly if you don’t need the additional features in a fully-fledged database. +Small datasets where GPU might not be needed (< 1 million vectors) +For smaller dimensionality, such as 1024 or below, you could consider using a brute-force (aka flat) index on GPU and get very good search performance with exact results. You could also use a graph-based index like HNSW on the CPU or CAGRA on the GPU. If build time is critical, you could even build a CAGRA graph on the GPU and convert it to HNSW graph on the CPU. + +For larger dimensionality (1536, 2048, 4096), you will start to see lower build-time performance with HNSW for higher quality search settings, and so it becomes more clear that building a CAGRA graph can be useful instead. +Large datasets (> 1 million vectors), goal is fast index creation at the expense of search quality + +For fast ingest where slightly lower search quality is acceptable (85% recall and above), the IVF (inverted file index) methods can be very useful, as they can be very fast to build and still have acceptable search performance. IVF-flat index will partition the vectors into some number of clusters (specified by the user as n_lists) and at search time, some number of closest clusters (defined by n_probes) will be searched with brute-force for each query vector. + +IVF-PQ is similar to IVF-flat with the major difference that the vectors are compressed using a lossy product quantized compression so the index can have a much smaller footprint on the GPU. In general, it’s advised to set n_lists = sqrt(n_vectors) and set n_probes to some percentage of n_lists (e.g. 1%, 2%, 4%, 8%, 16%). Because IVF-PQ is a lossy compression, a refinement step can be performed by initially increasing the number of neighbors (by some multiple factor) and using the raw vectors to compute the exact distances, ultimately reducing the neighborhoods down to size k. 
Even a refinement of 2x (which would query initially for k*2) can be quite effective in making up for recall lost by the PQ compression, but it does come at the expense of having to keep the raw vectors around (keeping in mind many databases store the raw vectors anyways). + +Large datasets (> 1 million vectors), goal is high quality search at the expense of fast index creation + +By trading off index creation performance, an extremely high quality search model can be built. Generally, all of the vector search index types have hyperparameters that have a direct correlation with the search accuracy and so they can be cranked up to yield better recall. Unfortunately, this can also significantly increase the index build time and reduce the search throughput. The trick here is to find the fastest build time that can achieve the best recall with the lowest latency or highest throughput possible. + +As for suggested index types, graph-based algorithms like HNSW and CAGRA tend to scale very well to larger datasets while having superior search performance with respect to quality. The challenge is that graph-based indexes require learning a graph and so, as the subtitle of this section suggests, have a tendency to be slower to build than other options. Using the CAGRA algorithm on the GPU can reduce the build time significantly over HNSW, while also having a superior throughput (and lower latency) than searching on the CPU. Currently, the downside to using CAGRA on the GPU is that it requires both the graph and the raw vectors to fit into GPU memory. A middle-ground can be reached by building a CAGRA graph on the GPU and converting it to an HNSW for high quality (and moderately fast) search on the CPU. + + +Tuning and hyperparameter optimization +====================================== + +Unfortunately, for large datasets, doing a hyperparameter optimization on the whole dataset is not always feasible and this is actually where the locally partitioned vector search indexes have an advantage because you can think of each smaller segment of the larger index as a uniform random sample of the total vectors in the dataset. This means that it is possible to perform a hyperparameter optimization on the smaller subsets and find reasonably acceptable parameters that should generalize fairly well to the entire dataset. Generally this hyperparameter optimization will require computing a ground truth on the subset with an exact method like brute-force and then using it to evaluate several searches on randomly sampled vectors. + +Full hyperparameter optimization may also not always be necessary- for example, once you have built a ground truth dataset on a subset, many times you can start by building an index with the default build parameters and then playing around with different search parameters until you get the desired quality and search performance. For massive indexes that might be multiple terabytes, you could also take this subsampling of, say, 10M vectors, train an index and then tune the search parameters from there. While there might be a small margin of error, the chosen build/search parameters should generalize fairly well for the databases that build locally partitioned indexes. 
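
A minimal sketch of that evaluation step is shown below. It is plain NumPy and assumes two hypothetical arrays: `true_neighbors`, the exact neighbor ids computed up front with brute-force on the subsample, and `ann_neighbors`, the ids returned by the index being tuned. Recall is simply the size of the set intersection between the two lists, averaged over all queries.

.. code-block:: python

    import numpy as np

    def recall(true_neighbors: np.ndarray, ann_neighbors: np.ndarray) -> float:
        """Average set-intersection recall over all queries.

        Both arrays have shape (n_queries, k) and contain vector ids.
        """
        n_queries, k = true_neighbors.shape
        hits = 0
        for exact, approx in zip(true_neighbors, ann_neighbors):
            # Only set inclusion matters, not the order within each neighbor list.
            hits += len(np.intersect1d(exact, approx))
        return hits / (n_queries * k)

A candidate set of build/search parameters can then be accepted or rejected based on whether this value lands in the desired recall range.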


Summary of vector search index types
====================================

.. list-table::
   :widths: 25 25 50
   :header-rows: 1

   * - Name
     - Trade-offs
     - Best to use with…
   * - Brute-force (aka flat)
     - Exact search but requires exhaustive distance computations
     - Tiny datasets (< 100k vectors)
   * - IVF-Flat
     - Partitions the vector space to reduce distance computations for brute-force search at the expense of recall
     - Small datasets (<1M vectors) or larger datasets (>1M vectors) where fast index build time is prioritized over quality.
   * - IVF-PQ
     - Adds product quantization to IVF-Flat to achieve scale at the expense of recall
     - Large datasets (>>1M vectors) where fast index build is prioritized over quality
   * - HNSW
     - Significantly reduces distance computations at the expense of longer build times
     - Small datasets (<1M vectors) or large datasets (>1M vectors) where quality and speed of search are prioritized over index build times
   * - CAGRA
     - Significantly reduces distance computations at the expense of longer build times (though build times improve over HNSW)
     - Large datasets (>>1M vectors) where quality and speed of search are prioritized over index build times but index build times are still important.
   * - CAGRA build + HNSW search (coming soon to Milvus)
     - Significantly reduces distance computations and improves build times at the expense of higher search latency / lower throughput.
     - Large datasets (>>1M vectors) where index build times and quality of search are important but GPU resources are limited and latency of search is not.


From 0e4c016134506700826a9066d650dff7f088b49e Mon Sep 17 00:00:00 2001
From: "Corey J. Nolet" 
Date: Mon, 23 Sep 2024 18:09:36 -0400
Subject: [PATCH 04/22] COntinuing to flesh out getting started materials

---
 docs/source/{basics.rst => api_basics.rst}    |   0
 ...erability.rst => api_interoperability.rst} |   0
 docs/source/comparing_indexes.rst             |  53 +++++++++++++++++++
 docs/source/getting_started.rst               |  39 ++++++++++----
 docs/source/tuning_guide.rst                  |  38 +++++++++++++
 .../vector_databases_vs_vector_search.rst     |   3 ++
 6 files changed, 123 insertions(+), 10 deletions(-)
 create mode 100644 docs/source/comparing_indexes.rst
 create mode 100644 docs/source/tuning_guide.rst
 rename docs/source/{basics.rst => api_basics.rst} (100%)
 rename docs/source/{interoperability.rst => api_interoperability.rst} (100%)

diff --git a/docs/source/basics.rst b/docs/source/api_basics.rst
similarity index 100%
rename from docs/source/basics.rst
rename to docs/source/api_basics.rst
diff --git a/docs/source/interoperability.rst b/docs/source/api_interoperability.rst
similarity index 100%
rename from docs/source/interoperability.rst
rename to docs/source/api_interoperability.rst
diff --git a/docs/source/comparing_indexes.rst b/docs/source/comparing_indexes.rst
new file mode 100644
index 000000000..10906c7da
--- /dev/null
+++ b/docs/source/comparing_indexes.rst
@@ -0,0 +1,53 @@
+.. _comparing_indexes:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Comparing performance of vector indexes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This document provides a brief overview of the methodology for comparing vector search indexes and models. For guidance on how to choose and configure an index type, please refer to this guide.
+
+Unlike traditional database indexes, which will generally return correct results even without performance tuning, vector search indexes are more closely related to ML models and they can return absolutely garbage results if they have not been tuned. 
+ +For this reason, it’s important to consider the parameters that an index is built upon, both for its potential quality and throughput/latency, when comparing two trained indexes. While easier to build an index on its default parameters than having to tune them, a well tuned index can have a significantly better search quality AND perform within search perf constraints like maximal throughput and minimal latency. + + +What is recall? +=============== + +Recall is a measure of model quality. Imagine for a particular vector, we know the exact nearest neighbors because we computed them already. The recall for a query result can be computed by taking the set intersection between the exact nearest neighbors and the actual nearest neighbors. The number of neighbors in that intersection list gets divided by k, the number of neighbors being requested. To really give a fair estimate of the recall of a model, we use several query vectors, all with ground truth computed, and we take the total neighbors across all intersected neighbor lists and divide by n_queries * k. + +Parameter settings dictate the quality of an index. The graph below shows eight indexes from the same data but with different tuning parameters. Generally speaking, the indexes with higher average recall took longer to build. Which index is fair to report? + + + +How do I compare models or indexing algorithms? +=============================================== + +In order to fairly compare the performance (e.g. latency and throughput) of an indexing algorithm or model against another, we always need to do so with respect to its potential recall. This is important and draws from the ML roots of vector search, but is often confusing to newcomers who might be more familiar with the database world. + +Best practice: Latency and throughput can only be compared at similar levels of recall. If you measure the performance of two indexes at different levels of recall, you are making an unfair comparison. + +Because recall levels can vary quite a bit across parameter settings, we tend to compare recall within a small set of potential buckets, so that parameter settings that perform within each bucket can be fairly compared. + +We suggest averaging performance within a range of recall. For general guidance, we tend to use the following buckets: + +#. 85% - 89% +#. 90% - 94% +#. 95% - 99% +#. >99% + +This allows us to say things like “okay at 95% recall level, model A can be built 3x faster than model B, but model B has 2x lower latency than model A” + +Another important detail is that we compare these models against their best-case search performance within each recall window. This means that we aim to find models that not only have great recall quality but also have either the highest throughput or lowest latency within the window of interest. These best-cases are most often computed by doing a parameter sweep in a grid search (or other types of search optimizers) and looking at the best cases for each level of recall. + +The resulting data points will construct a curve known as a Pareto optimum. Please note that this process is specifically for showing best-case across recall and throughput/latency, but when we care about finding the parameters that yield the best recall and search performance, we are essentially performing a hyperparameter optimization, which is common in machine learning. + + +How do I do this on large vector databases? 
+=========================================== + +It turns out that most vector databases, like Milvus for example, make many smaller vector search indexing models for a single “index”, and the distribution of the vectors across the smaller index models are assumed to be completely uniform. This means we can use subsampling to our benefit, and tune on smaller subsamples of the overall dataset. + +Please note, however, that there are often caps on the size of each of these smaller indexes, and that needs to be taken into consideration when choosing the size of the sub sample to tune. + +Please see the guide I wrote previously here for more information on the steps one would take to do this subsampling and tuning process. \ No newline at end of file diff --git a/docs/source/getting_started.rst b/docs/source/getting_started.rst index b474a05ac..cf4c2cb5a 100644 --- a/docs/source/getting_started.rst +++ b/docs/source/getting_started.rst @@ -1,16 +1,35 @@ Getting Started =============== -This guide provides an initial starting point of the basic concepts and using the various APIs in the cuVS library. +Welcome to cuVS, the premier library for GPU-accelerated vector search and clustering! cuVS provides several core building blocks for constructing new algorithms, as well as end-to-end vector search and clustering algorithms for use either standalone or through a growing list of :doc:`integrations `. -.. toctree:: - :maxdepth: 1 - :caption: Contents: - - basics.rst - interoperability.rst - working_with_ann_indexes.rst +There are several benefits to using cuVS and GPUs for vector search, including -Welcome to cuVS, the premier library for GPU-accelerated vector search and clustering! cuVS provides several core building blocks for constructing new algorithms, as well as end-to-end vector search and clustering algorithms for use either standalone or through a growing list of integrations. +#. Fast index build +#. Latency critical and high throughput search +#. Parameter tuning +#. Cost savings +#. Interoperability (build on GPU, deploy on CPU) +#. Multiple language support +#. Building blocks for composing new or accelerating existing algorithms + +New to vector search? +===================== + +If you are unfamiliar with the basics of vector search or how vector search differs from vector databases, then :doc:`this primer on vector search guide ` should provide some good insight. As outlined in the primer, vector search as used in vector databases is often closer to machine learning than to traditional databases. This means that while traditional databases can often be slow without any performance tuning, they will usually still yield the correct results. Unfortunately, vector search indexes, like other machine learning models, can yield garbage results of not tuned correctly. Fortunately, this opens up the whole world of hyperparamer optimization to improve vector search performance and quality. Please see our :doc:`index tuning guide ` for more information. + +When comparing the performance of vector search indexes, it is important that considerations are made with respect to three main dimensions: + +#. Build time +#. Search quality +#. Search performance + +Please see the :doc:`primer on comparing vector search index performance `` for more information on methodologies and how to make a fair apples-to-apples comparison during your evaluations. 
+
+Using cuVS APIs
+===============
+
+cuVS is a C++ library at its core, which is wrapped with a C library and exposed further through various other languages. cuVS currently provides APIs and documentation for :doc:`C `, :doc:`C++ `, :doc:`Python `, and :doc:`Rust ` with more languages in the works. Our :doc:`API basics ` guide provides some background and context about the important paradigms and vocabulary types you'll encounter when working with cuVS types.
+
+Please refer to the :doc:`guide on API interoperability ` for more information on how cuVS can work seamlessly with other libraries like NumPy, CuPy, TensorFlow, and PyTorch, even without having to copy device memory.

-If you are unfamiliar with the basics of vector search or how vector search differs from vector databases, then this guide should provide some good insight.
\ No newline at end of file
diff --git a/docs/source/tuning_guide.rst b/docs/source/tuning_guide.rst
new file mode 100644
index 000000000..adba53958
--- /dev/null
+++ b/docs/source/tuning_guide.rst
@@ -0,0 +1,38 @@
+.. _tuning_guide:
+
+~~~~~~~~~~~~
+Tuning Guide
+~~~~~~~~~~~~
+
+A method for tuning and evaluating vector search indexes at scale in locally indexed vector databases.
+
+Objective
+=========
+
+Give users an approach for tuning a vector search index, along with an evaluation of a vector search index “model” that measures recall in proportion to build time, penalizing recall when the build time is very high (ultimately optimizing for a lower build time and a higher recall).
+
+Output
+======
+
+An example notebook which can be released in cuVS as an example for tuning an index, especially for CAGRA.
+
+Background
+==========
+
+Vector databases 101: configuring vector search indexes.
+
+Many customers (specifically AWS and Google) have told us that >75% of their users will not be able to tune a vector database beyond one or two simple knobs. They suggest that an ideal “knob” would be to balance training time with search quality: the more time, the higher the quality. For the <25% who want to tune, they’ve asked for simple tools for tuning, as well as some simple guidelines for setting tuning parameters.
+
+Strategy
+========
+
+Ray Tune and our Python APIs could be an option to verify this. We could write a notebook that takes a small subsample from a dataset and does a parameter search on it. Then we evaluate random queries against the ground truth to test that the index params actually generalized well (I'm confident they will).
+
+See `Getting Started with Optuna and RAPIDS for HPO` in the RAPIDS Deployment documentation.
+
+Ray Tune / Optuna should allow us to plug in the cuVS Python API trivially; we then just specify a bunch of params to tune and let it run. This would ideally be done on a multi-node, multi-GPU setup where we can try tens of combinations at once, starting with "empirical heuristics" as defaults and iterating with something like a Bayesian optimizer to find the best params.
+
+#. Generate a dataset with a reasonable number of vectors (say 10Mx768)
+#. Subsample from the population uniformly, let's say 10% of that (1M vectors)
+#. Subsample from the population uniformly, let's say 1% of the 1M vectors from the prior step; this is a validation set.
+#. Compute ground truth on the vectors from the prior step against all 10M vectors
+#. Start the tuning process for the 1M vectors from step 2 using the vectors from step 3 as the query set
+#. Using the ideal params that provide the target objective (e.g.
build vs quality), ingest all 10M vectors into the database and create an index. +#. Query the vectors from the database and calculate the recall. Verify it's close to the recall from the model params chosen in 5 (within some small epsilon). . + diff --git a/docs/source/vector_databases_vs_vector_search.rst b/docs/source/vector_databases_vs_vector_search.rst index 8ba3c29e0..0d5a7c020 100644 --- a/docs/source/vector_databases_vs_vector_search.rst +++ b/docs/source/vector_databases_vs_vector_search.rst @@ -5,11 +5,14 @@ A brief primer on vector databases and how they relate to vector search One of the primary differences between vector database indexes and traditional database indexes is that vector search often uses approximations to trade-off accuracy of the results for speed. Because of this, while many mature databases offer mechanisms to tune their indexes and achieve better performance, vector database indexes can return completely garbage results if they aren’t tuned for a reasonable level of search quality in addition to performance tuning. This is because vector database indexes are more closely related to machine learning models than they are to traditional database indexes. Of course, if the number of vectors is very small, such as less than 100 thousand vectors, it could be fast enough to use a brute-force (also known as a flat index), which exhaustively searches all possible neighbors. + Objectives +========== This primer addresses the challenge of configuring vector database indexes, but its primary goal is to get a user up and running quickly with acceptable enough results for a good choice of index type and a small and manageable tuning knob, rather than providing a comprehensive guide to tuning each and every hyper-parameter. For this reason, we focus on 4 primary data sizes: + #. Tiny datasets (< 100 thousand vectors) #. Small datasets where GPU might not be needed (< 1 million vectors) #. Large datasets (> 1 million vectors), goal is fast index creation at the expense of search quality From c17e8427260d1c1852fca3208542d172ccfa292e Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Mon, 23 Sep 2024 19:19:27 -0400 Subject: [PATCH 05/22] Another updat3 --- docs/source/getting_started.rst | 28 ++++++- docs/source/index.rst | 5 +- docs/source/indexes/ivfflat.rst | 16 ++-- .../vector_databases_vs_vector_search.rst | 73 ++++++++++--------- 4 files changed, 75 insertions(+), 47 deletions(-) diff --git a/docs/source/getting_started.rst b/docs/source/getting_started.rst index cf4c2cb5a..0ca029e2e 100644 --- a/docs/source/getting_started.rst +++ b/docs/source/getting_started.rst @@ -1,5 +1,6 @@ +~~~~~~~~~~~~~~~ Getting Started -=============== +~~~~~~~~~~~~~~~ Welcome to cuVS, the premier library for GPU-accelerated vector search and clustering! cuVS provides several core building blocks for constructing new algorithms, as well as end-to-end vector search and clustering algorithms for use either standalone or through a growing list of :doc:`integrations `. @@ -16,7 +17,9 @@ There are several benefits to using cuVS and GPUs for vector search, including New to vector search? ===================== -If you are unfamiliar with the basics of vector search or how vector search differs from vector databases, then :doc:`this primer on vector search guide ` should provide some good insight. As outlined in the primer, vector search as used in vector databases is often closer to machine learning than to traditional databases. 
This means that while traditional databases can often be slow without any performance tuning, they will usually still yield the correct results. Unfortunately, vector search indexes, like other machine learning models, can yield garbage results of not tuned correctly. Fortunately, this opens up the whole world of hyperparamer optimization to improve vector search performance and quality. Please see our :doc:`index tuning guide ` for more information. +If you are unfamiliar with the basics of vector search or how vector search differs from vector databases, then :doc:`this primer on vector search guide ` should provide some good insight. As outlined in the primer, vector search as used in vector databases is often closer to machine learning than to traditional databases. This means that while traditional databases can often be slow without any performance tuning, they will usually still yield the correct results. Unfortunately, vector search indexes, like other machine learning models, can yield garbage results of not tuned correctly. + +Fortunately, this opens up the whole world of hyperparamer optimization to improve vector search performance and quality. Please see our :doc:`index tuning guide ` for more information. When comparing the performance of vector search indexes, it is important that considerations are made with respect to three main dimensions: @@ -24,7 +27,15 @@ When comparing the performance of vector search indexes, it is important that co #. Search quality #. Search performance -Please see the :doc:`primer on comparing vector search index performance `` for more information on methodologies and how to make a fair apples-to-apples comparison during your evaluations. +Please see the :doc:`primer on comparing vector search index performance ` for more information on methodologies and how to make a fair apples-to-apples comparison during your evaluations. + +Supported indexes +================= + +cuVS supports many of the standard index types with the list continuing to grow and stay current with the state-of-the-art. Please refer to our :doc:`vector search index guide ` for to learn more about each individual index type, when they can be useful on the GPU, the tuning knobs they offer to trade off performance and quality. + +The primary goal of cuVS is to enable speed, scale, and flexibility (in that order)- and one of the important value propositions is to enhance existing software deployments with extensible GPU capabilities to improve pain points while not interrupting parts of the system that work well today with CPU. + Using cuVS APIs =============== @@ -33,3 +44,14 @@ cuVS is a C++ library its core, which is wrapped with a C library and exposed fu Please refer to the :doc:`guide on API interoperability ` for more information on how cuVS can work seamlessly with other libraries like numpy, cupy, tensorflow, and pytorch, even without having to copy device memory. + +Where to next? +============== + +cuVS is free and open source software, licesed under Apache 2.0 Once you are familiar with and/or have used cuVS, you can access the developer community most easily through :doc:`Github `. Please open Github issues for any bugs, questions or feature requests. + +You can also access the RAPIDS community through :doc:`Slack `, :doc:`Stack Overflow ` and :doc:`X ` + +We frequently publish blogs on GPU-enabled vector search, which can provide great deep dives into various important topics and breakthroughs: + +#. 
:doc:`Accelerating Vector Search with cuVS IVF-PQ ` \ No newline at end of file diff --git a/docs/source/index.rst b/docs/source/index.rst index 86ccf8c45..e77e4d83a 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -19,12 +19,11 @@ What is cuVS? cuVS is a library for vector search and clustering on the GPU. .. toctree:: - :maxdepth: 1 + :maxdepth: 3 :caption: Contents: - getting_started.rst - indexes/indexes.rst build.rst + getting_started.rst integrations.rst api_docs.rst contributing.md diff --git a/docs/source/indexes/ivfflat.rst b/docs/source/indexes/ivfflat.rst index e1d313950..c206758b2 100644 --- a/docs/source/indexes/ivfflat.rst +++ b/docs/source/indexes/ivfflat.rst @@ -83,11 +83,11 @@ Memory footprint ---------------- Each cluster is padded to at least 32 vectors (but potentially up to 1024). Assuming uniform random distribution of vectors/list, we would have -:math: `cluster\_overhead = (conservative\_memory\_allocation ? 16 : 512 ) * dim * sizeof(T)` +:math:`cluster\_overhead = (conservative\_memory\_allocation ? 16 : 512 ) * dim * sizeof(T)` Note that each cluster is allocated as a separate allocation. If we use a `cuda_memory_resource`, that would grab memory in 1 MiB chunks, so on average we might have 0.5 MiB overhead per cluster. If we us 10s of thousands of clusters, it becomes essential to use pool allocator to avoid this overhead. -:math: `cluster\_overhead = 0.5 MiB` // if we do not use pool allocator +:math:`cluster\_overhead = 0.5 MiB` // if we do not use pool allocator Index (device memory): @@ -95,14 +95,14 @@ Index (device memory): .. math:: - n\_vectors * n\_dimensions * sizeof(T) + // interleaved form - n\_vectors * sizeof(int_type) + // list indices - n\_clusters * n\_dimensions * sizeof(T) + // cluster centers - n\_clusters * cluster\_overhead` + n_vectors * n_dimensions * sizeof(T) + // interleaved form + n_vectors * sizeof(int_type) + // list indices + n_clusters * n_dimensions * sizeof(T) + // cluster centers + n_clusters * cluster_overhead` Peak device memory usage for index build: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -:math: `workspace = min(1GB, n\_queries * [(n\_lists + 1 + n\_probes*(k+1))*sizeof(T) + n\_probes*k*sizeof(Idx)])` -:math: `index\_size + workspace` +:math:`workspace = min(1GB, n\_queries * [(n\_lists + 1 + n\_probes*(k+1))*sizeof(T) + n\_probes*k*sizeof(Idx)])` +:math:`index\_size + workspace` diff --git a/docs/source/vector_databases_vs_vector_search.rst b/docs/source/vector_databases_vs_vector_search.rst index 0d5a7c020..a1a579946 100644 --- a/docs/source/vector_databases_vs_vector_search.rst +++ b/docs/source/vector_databases_vs_vector_search.rst @@ -50,14 +50,21 @@ Many vector search algorithms improve scalability while reducing the number of d This leads us to two core architectural designs that we encounter in vector databases: -Locally partitioned vector search indexes: most databases follow this design, and vectors are often first written to a write-ahead log for durability. After some number of vectors are written, the write-ahead logs become immutable and may be merged with other write-ahead logs before eventually being converted to a new vector search index. +Locally partitioned vector search indexes +----------------------------------------- + +>ost databases follow this design, and vectors are often first written to a write-ahead log for durability. 
After some number of vectors are written, the write-ahead logs become immutable and may be merged with other write-ahead logs before eventually being converted to a new vector search index. The search is generally done over each locally partitioned index and the results combined. When setting hyperparameters, only the local vector search indexes need to be considered, though the same hyperparameters are going to be used across all of the local partitions. So, for example, if you’ve ingested 100M vectors but each partition only contains about 10M vectors, the size of the index only needs to consider its local 10M vectors. Details like number of vectors in the index are important, for example, when setting the number of clusters in an IVF-based (inverted file index) method, as I’ll cover below. -Globally partitioned vector search indexes: some special-purpose vector databases follow this design, such as Yahoo’s Vespa and Google’s Spanner. A global index is trained to partition the entire database’s vectors up front as soon as there are enough vectors to do so (usually these databases are at a large enough scale that a significant number of vectors are bootstrapped initially and so it avoids the cold start problem). Ingested vectors are first run through the global index (clustering, for example, but tree- and graph-based methods have also been used) to determine which partition they belong to and the vectors are then (sent to, and) written directly to that partition. The individual partitions can contain a graph, tree, or a simple IVF list. These types of indexes have been able to scale to hundreds of billions to trillions of vectors, and since the partitions are themselves often implicitly based on neighborhoods, rather than being based on uniformly random distributed vectors like the locally partitioned architectures, the partitions can be grouped together or intentionally separated to support localized searches or load balancing, depending upon the needs of the system. +Globally partitioned vector search indexes +------------------------------------------ + +Some special-purpose vector databases follow this design, such as Yahoo’s Vespa and Google’s Spanner. A global index is trained to partition the entire database’s vectors up front as soon as there are enough vectors to do so (usually these databases are at a large enough scale that a significant number of vectors are bootstrapped initially and so it avoids the cold start problem). Ingested vectors are first run through the global index (clustering, for example, but tree- and graph-based methods have also been used) to determine which partition they belong to and the vectors are then (sent to, and) written directly to that partition. The individual partitions can contain a graph, tree, or a simple IVF list. These types of indexes have been able to scale to hundreds of billions to trillions of vectors, and since the partitions are themselves often implicitly based on neighborhoods, rather than being based on uniformly random distributed vectors like the locally partitioned architectures, the partitions can be grouped together or intentionally separated to support localized searches or load balancing, depending upon the needs of the system. + +The challenge when setting hyper-parameters for globally partitioned indexes is that they need to account for the entire set of vectors, and thus the hyperparameters of the global index generally account for all of the vectors in the database, rather than any local partition. 
-The challenge when setting hyperparameters for these types of indexes is that the indexes need to account for the entire set of vectors, and thus the hyperparameters of the global index generally account for all of the vectors in the database, rather than any local partition. Of course, the two approaches outlined above can also be used together (e.g. training a global “coarse” index and then creating localized vector search indexes within each of the global indexes) but to my knowledge, no such architecture has implemented this pattern. @@ -72,7 +79,8 @@ Since most vector databases use localized partitioning, we’ll focus on that in Tiny datasets (< 100 thousand vectors) These datasets are very small and it’s questionable whether or not the GPU would provide any value at all. If the dimensionality is also relatively small (< 1024), you could just use brute-force or HNSW on the CPU and get great performance. If the dimensionality is relatively large (1536, 2048, 4096), you should consider using HNSW. If build time performance is critical, you should consider using CAGRA to build the graph and convert it to an HNSW graph for search (this capability exists today in the standalone cuVS/RAFT libraries and will soon be added to Milvus). An IVF flat index can also be a great candidate here, as it can improve the search performance over brute-force by partitioning the vector space and thus reducing the search space. -You could even use FAISS or cuVS directly if you don’t need the additional features in a fully-fledged database. +You could even use FAISS or cuVS standalone if you don’t need the additional features in a fully-fledged database. + Small datasets where GPU might not be needed (< 1 million vectors) For smaller dimensionality, such as 1024 or below, you could consider using a brute-force (aka flat) index on GPU and get very good search performance with exact results. You could also use a graph-based index like HNSW on the CPU or CAGRA on the GPU. If build time is critical, you could even build a CAGRA graph on the GPU and convert it to HNSW graph on the CPU. @@ -99,32 +107,31 @@ Full hyperparameter optimization may also not always be necessary- for example, Summary of vector search index types - - -Name -Trade-offs -Best to use with… -Brute-force (aka flat) -Exact search but requires exhaustive distance computations -Tiny datasets (< 100k vectors) -IVF-Flat -Partitions the vector space to reduce distance computations for brute-force search at the expense of recall -Small datasets (<1M vectors) or larger datasets (>1M vectors) where fast index build time is prioritized over quality. -IVF-PQ -Adds product quantization to IVF-Flat to achieve scale at the expense of recall -Large datasets (>>1M vectors) where fast index build is prioritized over quality -HNSW -Significantly reduces distance computations at the expense of longer build times -Small datasets (<1M vectors) or large datasets (>1M vectors) where quality and speed of search are prioritized over index build times -CAGRA -Significantly reduces distance computations at the expense of longer build times (though build times improve over HNSW) -Large datasets (>>1M vectors) where quality and speed of search are prioritized over index build times but index build times are still important. -CAGRA build +HNSW search -(coming soon to Milvus) -Significantly reduces distance computations and improves build times at the expense of higher search latency / lower throughput. 
-Large datasets (>>1M vectors) where index build times and quality of search is important but GPU resources are limited and latency of search is not. - - - - - +==================================== + +.. list-table:: + :widths: 25 25 50 + :header-rows: 1 + + * - Name + - Trade-offs + - Best to use with... + * - Brute-force (aka flat) + - Exact search but requires exhaustive distance computations + - Tiny datasets (< 100k vectors) + * - IVF-Flat + - Partitions the vector space to reduce distance computations for brute-force search at the expense of recall + - Small datasets (<1M vectors) or larger datasets (>1M vectors) where fast index build time is prioritized over quality. + * - IVF-PQ + - Adds product quantization to IVF-Flat to achieve scale at the expense of recall + - Large datasets (>>1M vectors) where fast index build is prioritized over quality + * - HNSW + - Significantly reduces distance computations at the expense of longer build times + - Small datasets (<1M vectors) or large datasets (>1M vectors) where quality and speed of search are prioritized over index build times + * - CAGRA + - Significantly reduces distance computations at the expense of longer build times (though build times improve over HNSW) + - Large datasets (>>1M vectors) where quality and speed of search are prioritized over index build times but index build times are still important. + * - CAGRA build +HNSW search + - (coming soon to Milvus) + - Significantly reduces distance computations and improves build times at the expense of higher search latency / lower throughput. + Large datasets (>>1M vectors) where index build times and quality of search is important but GPU resources are limited and latency of search is not. From 47b03ef4497531b1b41795b77664e4a446c8af55 Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Mon, 23 Sep 2024 20:23:51 -0400 Subject: [PATCH 06/22] Updates --- docs/source/getting_started.rst | 66 +++++++++++++++++++++++++++++++-- 1 file changed, 63 insertions(+), 3 deletions(-) diff --git a/docs/source/getting_started.rst b/docs/source/getting_started.rst index 0ca029e2e..2d26cd0eb 100644 --- a/docs/source/getting_started.rst +++ b/docs/source/getting_started.rst @@ -2,6 +2,42 @@ Getting Started ~~~~~~~~~~~~~~~ +- `New to vector search?`_ + + * :doc:`Primer on vector search ` + + * :doc:`Index tuning guide ` + + * :doc:`Comparing vector search index performance ` + +- `Supported indexes`_ + + * :doc:`Vector search index guide ` + +- `Using cuVS APIs`_ + + * :doc:`C API Docs ` + + * :doc:`C++ API Docs ` + + * :doc:`Python API Docs ` + + * :doc:`Rust API Docs ` + + * :doc:`API basics ` + + * :doc:`API interoperability ` + +- `Where to next?`_ + + * `Social media`_ + + * `Blogs`_ + + * `Research`_ + + * `Get involved`_ + Welcome to cuVS, the premier library for GPU-accelerated vector search and clustering! cuVS provides several core building blocks for constructing new algorithms, as well as end-to-end vector search and clustering algorithms for use either standalone or through a growing list of :doc:`integrations `. There are several benefits to using cuVS and GPUs for vector search, including @@ -48,10 +84,34 @@ Please refer to the :doc:`guide on API interoperability ` Where to next? ============== -cuVS is free and open source software, licesed under Apache 2.0 Once you are familiar with and/or have used cuVS, you can access the developer community most easily through :doc:`Github `. Please open Github issues for any bugs, questions or feature requests. 
+cuVS is free and open source software, licesed under Apache 2.0 Once you are familiar with and/or have used cuVS, you can access the developer community most easily through `Github `_. Please open Github issues for any bugs, questions or feature requests. + +Social media +------------ + +You can access the RAPIDS community through `Slack `_ , `Stack Overflow `_ and `X `_ -You can also access the RAPIDS community through :doc:`Slack `, :doc:`Stack Overflow ` and :doc:`X ` +Blogs +----- We frequently publish blogs on GPU-enabled vector search, which can provide great deep dives into various important topics and breakthroughs: -#. :doc:`Accelerating Vector Search with cuVS IVF-PQ ` \ No newline at end of file +#. `Accelerated Vector Search: Approximating with cuVS IVF-Flat `_ +#. `Accelerating Vector Search with cuVS IVF-PQ `_ + +Research +-------- + +For the interested reader, many of the accelerated implementations in cuVS are also based on research papers which can provide a lot more background. We also ask you to please cite the corresponding algorithms by referencing them in your own research. + +#. `CAGRA: Highly Parallel Graph Construction and Approximate Nearest Neighbor Search `_ +#. `Top-K Algorithms on GPU: A Comprehensive Study and New Methods `_ +#. `Fast K-NN Graph Construction by GPU Based NN-Descent `_ +#. `cuSLINK: Single-linkage Agglomerative Clustering on the GPU `_ +#. `GPU Semiring Primitives for Sparse Neighborhood Methods `_ + + +Get involved +------------ + +We always welcome patches for new features and bug fixes. Please read our `contributing guide `_ for more information on contributing patches to cuVS. From 6598ff8bf889dcd9d9ac6afec6b0d989d3eb8a1b Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Mon, 23 Sep 2024 21:02:49 -0400 Subject: [PATCH 07/22] Migrating the C++ tutorial --- docs/source/cpp_tutorial.rst | 448 +++++++++++++++++++++++++++++++++++ 1 file changed, 448 insertions(+) create mode 100644 docs/source/cpp_tutorial.rst diff --git a/docs/source/cpp_tutorial.rst b/docs/source/cpp_tutorial.rst new file mode 100644 index 000000000..4643253e6 --- /dev/null +++ b/docs/source/cpp_tutorial.rst @@ -0,0 +1,448 @@ +======================== +C++ Walkthrough Tutorial +======================== + +Table of Contents +================= + +- `Step 1: Starting off with cuVS`_ + +- `Step 2: Generate some data`_ + +- `Step 3: Using brute-force indexes`_ + +- `Step 4: Using the ANN indexes`_ + +- `Step 5: Evaluate neighborhood quality`_ + +- `Advanced Features`_ + + * `Serialization`_ + + * `Filtering`_ + + * `Stream Pools`_ + + * `Device Resources Manager`_ + + * `Device Memory Resources`_ + + * `Workspace Memory Resource`_ + +cuVS has several important algorithms for performing vector search on the GPU and this tutorial walks through the primary vector search APIs from start to finish to provide a reference for quick setup and C++ API usage. + +This tutorial assumes cuVS has been installed and/or added to your build so that you are able to compile and run RAFT code. If not done already, please follow the [build and install instructions](build.md) and consider taking a look at the [example c++ template project](https://github.com/rapidsai/raft/tree/HEAD/cpp/template) for ready-to-go examples that you can immediately build and start playing with. Also take a look at RAFT's library of [reproducible vector search benchmarks](raft_ann_benchmarks.md) to run benchmarks that compare cuVS against other state-of-the-art nearest neighbors algorithms at scale. 
+ +For more information about the various APIs demonstrated in this tutorial, along with comprehensive usage examples of all the APIs offered by RAFT, please refer to the [cuVS C++ API Documentation](https://docs.rapids.ai/api/cuvs/nightly/cpp_api/). + +Step 1: Starting off with cuVS +============================== + +CUDA Development? +----------------- + +If you are reading this tuturial then you probably know about CUDA and its relationship to general-purpose GPU computing (GPGPU). You probably also know about Nvidia GPUs but might not necessarily be familiar with the programming model nor GPU computing. The good news is that extensive knowledge of CUDA and GPUs are not needed in order to get started with or build applications with RAFT. RAFT hides away most of the complexities behind simple single-threaded stateless functions that are inherently asynchronous, meaning the result of a computation isn't necessarily read to be used when the function executes and control is given back to the user. The functions are, however, allowed to be chained together in a sequence of calls that don't need to wait for subsequent computations to complete in order to continue execution. In fact, the only time you need to wait for the computation to complete is when you are ready to use the result. + +A common structure you will encounter when using RAFT is a `raft::device_resources` object. This object is a container for important resources for a single GPU that might be needed during computation. If communicating with multiple GPUs, multiple `device_resources` might be needed, one for each GPU. `device_resources` contains several methods for managing its state but most commonly, you'll call the `sync_stream()` to guarantee all recently submitted computation has completed (as mentioned above.) + +A simple example of using `raft::device_resources` in RAFT: + +.. code-block:: c++ + + #include + + raft::device_resources res; + // Call a bunch of RAFT functions in sequence... + res.sync_stream() + +Host vs Device Memory +--------------------- + +We differentiate between two different types of memory. `host` memory is your traditional RAM memory that is primarily accessible by applications on the CPU. `device` memory, on the other hand, is what we call the special memory on the GPU, which is not accessible from the CPU. In order to access host memory from the GPU, it needs to be explicitly copied to the GPU and in order to access device memory by the CPU, it needs to be explicitly copied there. We have several mechanisms available for allocating and managing the lifetime of device memory on the stack so that we don't need to explicitly allocate and free pointers on the heap. For example, instead of a `std::vector` for host memory, we can use `rmm::device_uvector` on the device. The following function will copy an array from host memory to device memory: + +.. code-block:: c++ + + #include + #include + #include + + raft::device_resources res; + + std::vector my_host_vector = {0, 1, 2, 3, 4}; + rmm::device_uvector my_device_vector(my_host_vector.size(), res.get_stream()); + + raft::copy(my_device_vector.data(), my_host_vector.data(), my_host_vector.size(), res.get_stream()); + +Since a stream is involved in the copy operation above, RAFT functions can be invoked immediately so long as the same `device_resources` instances is used (or, more specifically, the same main stream from the `devices_resources`.) 
As you might notice in the example above, `res.get_stream()` can be used to extract the main stream from a `device_resources` instance.

Multi-dimensional data representation
-------------------------------------

`rmm::device_uvector` is a great mechanism for allocating and managing a chunk of device memory. While it's possible to use a single array to represent objects in higher dimensions like matrices, it lacks the means to pass that information along. For example, in addition to knowing that we have a 2d structure, we would need to know the number of rows, the number of columns, and even whether we read the columns or rows first (referred to as column- or row-major respectively).

For this reason, RAFT relies on the `mdspan` standard, which was composed specifically for this purpose. What's more, `mdspan` itself doesn't actually allocate or own any data on host or device because it's just a view over existing memory on host or device. The `mdspan` simply gives us a way to represent multi-dimensional data so we can pass along the needed metadata to our APIs. Even more powerful is that we can design functions that only accept a matrix of `float` in device memory that is laid out in row-major format.

The memory-owning counterpart to the `mdspan` is the `mdarray`, and the `mdarray` can allocate memory on device or host and carry along with it the metadata about its shape and layout. An `mdspan` can be produced from an `mdarray` for invoking RAFT APIs with `mdarray.view()`. They also follow similar paradigms to the STL, where we represent an immutable `mdspan` of `int` using `mdspan<const int>` instead of `const mdspan<int>`, to ensure it's the type carried along by the `mdspan` that's not allowed to change.

Many RAFT functions require an `mdspan` over `const` elements to represent immutable input data, and since there's no implicit conversion between `mdspan<int>` and `mdspan<const int>`, we use `raft::make_const_mdspan()` to alleviate the pain of constructing a new `mdspan` to invoke these functions.

The following example demonstrates how to create `mdarray` matrices in both device and host memory, copy one to the other, and create mdspans out of them:

.. code-block:: c++

    #include <raft/core/device_resources.hpp>
    #include <raft/core/device_mdarray.hpp>
    #include <raft/core/host_mdarray.hpp>

    raft::device_resources res;

    int n_rows = 10;
    int n_cols = 10;

    auto device_matrix = raft::make_device_matrix<float>(res, n_rows, n_cols);
    auto host_matrix = raft::make_host_matrix<float>(res, n_rows, n_cols);

    // Set the diagonal to 1
    for(int i = 0; i < n_rows; i++) {
      host_matrix(i, i) = 1;
    }

    raft::copy(res, device_matrix.view(), host_matrix.view());

Step 2: Generate some data
==========================

Let's build upon the fundamentals from the prior section and actually invoke some of RAFT's computational APIs on the device. A good starting point is data generation.

.. code-block:: c++

    #include <raft/core/device_mdarray.hpp>
    #include <raft/random/make_blobs.cuh>

    raft::device_resources res;

    int n_rows = 10000;
    int n_cols = 10000;

    auto dataset = raft::make_device_matrix<float, int>(res, n_rows, n_cols);
    auto labels = raft::make_device_vector<int, int>(res, n_rows);

    raft::random::make_blobs(res, dataset.view(), labels.view());

That's it. We've now generated a random 10kx10k matrix with points that cleanly separate into Gaussian clusters, along with a vector of cluster labels for each of the data points. Notice the `cuh` extension in the header file include for `make_blobs`. This signifies to us that this file contains CUDA device functions like kernel code, so the CUDA compiler, `nvcc`, is needed in order to compile any code that uses it.
Generally, any source files that include headers with a `cuh` extension use the `.cu` extension instead of `.cpp`. The rule here is that `cpp` source files contain code which can be compiled with a C++ compiler like `g++` while `cu` files require the CUDA compiler. + +Since the `make_blobs` code generates the random dataset on the GPU device, we didn't need to do any host to device copies in this one. `make_blobs` is also asynchronous, so if we don't need to copy and use the data in host memory right away, we can continue calling RAFT functions with the `device_resources` instance and the data transformations will all be scheduled on the same stream. + +Step 3: Using brute-force indexes +================================= + +Build brute-force index +----------------------- + +Consider the `(10k, 10k)` shaped random matrix we generated in the previous step. We want to be able to find the k-nearest neighbors for all points of the matrix, or what we refer to as the all-neighbors graph, which means finding the neighbors of all data points within the same matrix. +.. code-block:: c++ + + #include + + raft::device_resources res; + + // set number of neighbors to search for + int const k = 64; + + auto bfknn_index = raft::neighbors::brute_force::build(res, + raft::make_const_mdspan(dataset.view())); + +Query brute-force index +----------------------- + +.. code-block:: c++ + + // using matrix `dataset` from previous example + auto search = raft::make_const_mdspan(dataset.view()); + + // Indices and Distances are of dimensions (n, k) + // where n is number of rows in the search matrix + auto reference_indices = raft::make_device_matrix(res, search.extent(0), k); // stores index of neighbors + auto reference_distances = raft::make_device_matrix(res, search.extent(0), k); // stores distance to neighbors + + raft::neighbors::brute_force::search(res, + bfknn_index, + search, + reference_indices.view(), + reference_distances.view()); + +We have established several things here by building a flat index. Now we know the exact 64 neighbors of all points in the matrix, and this algorithm can be generally useful in several ways: +1. Creating a baseline to compare against when building an approximate nearest neighbors index. +2. Directly using the brute-force algorithm when accuracy is more important than speed of computation. Don't worry, our implementation is still the best in-class and will provide not only significant speedups over other brute force methods, but also be quick relatively when the matrices are small! + + +Step 4: Using the ANN indexes +============================= + +Build a CAGRA index +------------------- + +Next we'll train an ANN index. We'll use our graph-based CAGRA algorithm for this example but the other index types use a very similar pattern. + +.. code-block:: c++ + + #include + + raft::device_resources res; + + // use default index parameters + raft::neighbors::cagra::index_params index_params; + + auto index = raft::neighbors::cagra::build(res, index_params, raft::make_const_mdspan(dataset.view())); + +Query the CAGRA index +--------------------- + +Now that we've trained a CAGRA index, we can query it by first allocating our output `mdarray` objects and passing the trained index model into the search function. + +.. 
code-block:: c++ + + // create output arrays + auto indices = raft::make_device_matrix(res, n_rows, k); + auto distances = raft::make_device_matrix(res, n_rows, k); + + // use default search parameters + raft::neighbors::cagra::search_params search_params; + + // search K nearest neighbors + raft::neighbors::cagra::search( + res, search_params, index, search, indices.view(), distances.view()); + +Step 5: Evaluate neighborhood quality +===================================== + +In step 3 we built a flat index and queried for exact neighbors while in step 4 we build an ANN index and queried for approximate neighbors. How do you quickly figure out the quality of our approximate neighbors and whether it's in an acceptable range based on your needs? Just compute the `neighborhood_recall` which gives a single value in the range [0, 1]. Closer the value to 1, higher the quality of the approximation. + +.. code-block:: c++ + + #include + + raft::device_resources res; + + // Assuming matrices as type raft::device_matrix_view and variables as + // indices : approximate neighbor indices + // reference_indices : exact neighbor indices + // distances : approximate neighbor distances + // reference_distances : exact neighbor distances + + // We want our `neighborhood_recall` value in host memory + float const recall_scalar = 0.0; + auto recall_value = raft::make_host_scalar(recall_scalar); + + raft::stats::neighborhood_recall(res, + raft::make_const_mdspan(indices.view()), + raft::make_const_mdspan(reference_indices.view()), + recall_value.view(), + raft::make_const_mdspan(distances.view()), + raft::make_const_mdspan(reference_distances.view())); + + res.sync_stream(); + +Notice we can run invoke the functions for index build and search for both algorithms, one right after the other, because we don't need to access any outputs from the algorithms in host memory. We will need to synchronize the stream on the `raft::device_resources` instance before we can read the result of the `neighborhood_recall` computation, though. + +Similar to a Numpy array, when we use a `host_scalar`, we are really using a multi-dimensional structure that contains only a single dimension, and further a single element. We can use element indexing to access the resulting element directly. +.. code-block:: c++ + std::cout << recall_value(0) << std::endl; + +While it may seem like unnecessary additional work to wrap the result in a `host_scalar` mdspan, this API choice is made intentionally to support the possibility of also receiving the result as a `device_scalar` so that it can be used directly on the device for follow-on computations without having to incur the synchronization or transfer cost of bringing the result to host. This pattern becomes even more important when the result is being computed in a loop, such as an iterative solver, and the cost of synchronization and device-to-host (d2h) transfer becomes very expensive. + +Advanced features +================= + +The following sections present some advanced features that we have found can be useful for squeezing more utilization out of GPU hardware. As you've seen in this tutorial, RAFT provides several very useful tools and building blocks for developing accelerated applications beyond vector search capabilities. + +Serialization +------------- + +Most of the indexes in `raft::neighbors` can be serialized to/from streams and files on disk. The index types that support this feature have include files with the naming convention `_serialize.cuh`. 
The serialization functions are similar across the different index types, with the primary difference being that some index types require a pointer to all the training data for search. Since the original training dataset can be quite large, the `serialize()` function for these index types includes an argument `include_dataset`, which allows the user to specify whether the dataset should be included in the serialized form. The index types that allow for this also include a method `update_datasets()` to allow for the dataset to be re-attached to the index after it is deserialized. + +The following example demonstrates serializing and deserializing a CAGRA index to and from a file. For index types that don't require the training data, you can remove the `include_dataset` and `update_dataset()` parts. We will assume the CAGRA index has been built using the code from [Step 4](#build-a-cagra-index) above: + +.. code-block:: c++ + + #include + #include + + using namespace raft::neighbors; + + raft::neighbors::cagra::serialize(res, "cagra_serialized.dat", index, false); + + auto index_deser = raft::neighbors::cagra::deserialize(res, "cagra_serialized.dat"); + index_deser.update_dataset(dataset); + +Filtering +--------- + +As of RAFT 23.10, support for pre-filtering of neighbors has been added to ANN index. This search feature can enable multiple use-cases, such as filtering a vector based on it's attributes (hybrid searches), the removal of vectors already added to the index, or the control of access in searches for security purposes. +The filtering is available through the `search_with_filtering()` function of the ANN index, and is done by applying a predicate function on the GPU, which usually have the signature `(uint32_t query_ix, uint32_t sample_ix) -> bool`. + +One of the most commonly used mechanism for filtering is the bitset: the bitset is a data structure that allows to test the presence of a value in a set through a fast lookup, and is implemented as a bit array so that every element contains a `0` or a `1` (respectively `false` and `true` in boolean logic). RAFT provides a `raft::core::bitset` class that can be used to create and manipulate bitsets on the GPU, and a `raft::core::bitset_view` class that can be used to pass bitsets to filtering functions. + +The following example demonstrates how to use the filtering API (assume the CAGRA index is built using the code from [Step 4](#build-a-cagra-index) above: + +.. code-block:: c++ + + #include + #include + + using namespace raft::neighbors; + + cagra::search_params search_params; + + // create a bitset to filter the search + auto removed_indices = raft::make_device_vector(res, n_removed_indices); + raft::core::bitset removed_indices_bitset( + res, removed_indices.view(), dataset.extent(0)); + + // ... Populate the bitset ... + + // search K nearest neighbours according to a bitset filter + auto neighbors = raft::make_device_matrix(res, n_queries, k); + auto distances = raft::make_device_matrix(res, n_queries, k); + cagra::search_with_filtering(res, search_params, index, queries, neighbors, distances, + filtering::bitset_filter(removed_indices_bitset.view())); + + +Stream pools +------------ + +Within each CPU thread, CUDA uses `streams` to submit asynchronous work. You can think of a stream as a queue. Each stream can submit work to the GPU independently of other streams but work submitted within each stream is queued and executed in the order in which it was submitted. 
Similar to how we can use thread pools to bound the parallelism of CPU threads, we can use CUDA stream pools to bound the amount of concurrent asynchronous work that can be scheduled on a GPU. Each instance of `device_resources` has a main stream, but can also create a stream pool. For a single CPU thread, multiple different instances of `device_resources` can be created with different main streams and used to invoke a series of RAFT functions concurrently on the same or different GPU devices, so long as the target devices have available resources to perform the work. Once a device is saturated, queued work on streams will be scheduled and wait for a chance to do more work. While the streams are waiting, the CPU thread will still continue its own execution asynchronously, unless `sync_stream_pool()` is called, causing the thread to block and wait for the work in the stream pool to complete.

Also, beware that before splitting GPU work onto multiple different concurrent streams, it can often be important to wait for the main stream in the `device_resources`. This can be done with `wait_stream_pool_on_stream()`.

To summarize, if we want to execute work on multiple different streams in parallel, we would often use a stream pool like this:

.. code-block:: c++

    #include <raft/core/device_resources.hpp>

    #include <rmm/cuda_stream.hpp>
    #include <rmm/cuda_stream_pool.hpp>

    #include <memory>

    int n_streams = 5;

    rmm::cuda_stream stream;
    auto stream_pool = std::make_shared<rmm::cuda_stream_pool>(n_streams);
    raft::device_resources res(stream.view(), stream_pool);

    // Submit some work on the main stream...

    res.wait_stream_pool_on_stream();
    for(int i = 0; i < n_streams; ++i) {
      rmm::cuda_stream_view stream_from_pool = res.get_next_usable_stream();
      raft::device_resources pool_res(stream_from_pool);
      // Submit some work with pool_res...
    }

    res.sync_stream_pool();

Device resources manager
------------------------

In multi-threaded applications, it is often useful to create a set of `raft::device_resources` objects on startup to avoid the overhead of re-initializing underlying resources every time a `raft::device_resources` object is needed. To help simplify this common initialization logic, RAFT provides a `raft::device_resources_manager` to handle this for downstream applications. On startup, the application can specify certain limits on the total resource consumption of the `raft::device_resources` objects that will be generated:

.. code-block:: c++

    #include <raft/core/device_resources_manager.hpp>

    void initialize_application() {
      // Set the total number of CUDA streams to use on each GPU across all CPU
      // threads. If this method is not called, the default stream per thread
      // will be used.
      raft::device_resources_manager::set_streams_per_device(16);

      // Create a memory pool with given max size in bytes. Passing std::nullopt will allow
      // the pool to grow to the available memory of the device.
      raft::device_resources_manager::set_max_mem_pool_size(std::nullopt);

      // Set the initial size of the memory pool in bytes.
      raft::device_resources_manager::set_init_mem_pool_size(16000000);

      // If neither of the above methods are called, no memory pool will be used
    }

While this example shows some commonly used settings, `raft::device_resources_manager` provides support for several other resource options and constraints, including options to initialize entire stream pools that can be used by an individual `raft::device_resources` object. After this initialization method is called, the following function could be called from any CPU thread:

..
code-block:: c++ + + void foo() { + raft::device_resources const& res = raft::device_resources_manager::get_device_resources(); + // Submit some work with res + res.sync_stream(); + } + +If any `raft::device_resources_manager` setters are called _after_ the first +call to `raft::device_resources_manager::get_device_resources()`, these new +settings are ignored, and a warning will be logged. If a thread calls +`raft::device_resources_manager::get_device_resources()` multiple times, it is +guaranteed to access the same underlying `raft::device_resources` object every +time. This can be useful for chaining work in different calls on the same +thread without keeping a persistent reference to the resources object. + +Device memory resources +----------------------- + +The RAPIDS software ecosystem makes heavy use of the [RAPIDS Memory Manager](https://github.com/rapidsai/rmm) (RMM) to enable zero-copy sharing of device memory across various GPU-enabled libraries such as PyTorch, Jax, Tensorflow, and FAISS. A really powerful feature of RMM is the ability to set a memory resource, such as a pooled memory resource that allocates a block of memory up front to speed up subsequent smaller allocations, and have all the libraries in the GPU ecosystem recognize and use that same memory resource for all of their memory allocations. + +As an example, the following code snippet creates a `pool_memory_resource` and sets it as the default memory resource, which means all other libraries that use RMM will now allocate their device memory from this same pool: + +.. code-block:: c++ + + #include + + rmm::mr::cuda_memory_resource cuda_mr; + // Construct a resource that uses a coalescing best-fit pool allocator + // set the initial size to half of the free device memory + auto init_size = rmm::percent_of_free_device_memory(50); + rmm::mr::pool_memory_resource pool_mr{&cuda_mr, init_size}; + rmm::mr::set_current_device_resource(&pool_mr); // Updates the current device resource pointer to `pool_mr` + +The `raft::device_resources` object will now also use the `rmm::current_device_resource`. This isn't limited to C++, however. Often a user will be interacting with PyTorch, RAPIDS, or Tensorflow through Python and so they can set and use RMM's `current_device_resource` [right in Python](https://github.com/rapidsai/rmm#using-rmm-in-python-code). + +Workspace memory resource +------------------------- + +As mentioned above, `raft::device_resources` will use `rmm::current_device_resource` by default for all memory allocations. However, there are times when a particular algorithm might benefit from using a different memory resource such as a `managed_memory_resource`, which creates a unified memory space between device and host memory, paging memory in and out of device as needed. Most of RAFT's algorithms allocate temporary memory as needed to perform their computations and we can control the memory resource used for these temporary allocations through the `workspace_resource` in the `raft::device_resources` instance. + +For some applications, the `managed_memory_resource`, can enable a memory space that is larger than the GPU, thus allowing a natural spilling to host memory when needed. This isn't always the best way to use managed memory, though, as it can quickly lead to thrashing and severely impact performance. Still, when it can be used, it provides a very powerful tool that can also avoid out of memory errors when enough host memory is available. 
+ +The following creates a managed memory allocator and set it as the `workspace_resource` of the `raft::device_resources` instance: + +.. code-block:: c++ + + #include + #include + + std::shared_ptr managed_resource; + raft::device_resource res(managed_resource);``` + +The `workspace_resource` uses an `rmm::mr::limiting_resource_adaptor`, which limits the total amount of allocation possible. This allows RAFT algorithms to work within the confines of the memory constraints imposed by the user so that things like batch sizes can be automatically set to reasonable values without exceeding the allotted memory. By default, this limit restricts the memory allocation space for temporary workspace buffers to the memory available on the device. + +The below example specifies the total number of bytes that RAFT can use for temporary workspace allocations to 3GB: + +.. code-block:: c++ + + #include + #include + + #include + + std::shared_ptr managed_resource; + raft::device_resource res(managed_resource, std::make_optional(3 * 1024^3)); From c619e2685d689ce3ba96fae955d5f38c858f8125 Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Tue, 24 Sep 2024 09:56:12 -0400 Subject: [PATCH 08/22] Adding images and lniks --- docs/source/comparing_indexes.rst | 13 +++++++++--- docs/source/cpp_tutorial.rst | 2 +- docs/source/getting_started.rst | 12 ----------- docs/source/images/build_benchmarks.png | Bin 0 -> 43332 bytes docs/source/images/index_recalls.png | Bin 0 -> 148929 bytes docs/source/images/recall_buckets.png | Bin 0 -> 70402 bytes docs/source/index.rst | 19 +++++++++++++----- .../vector_databases_vs_vector_search.rst | 15 +++++++++----- 8 files changed, 35 insertions(+), 26 deletions(-) create mode 100644 docs/source/images/build_benchmarks.png create mode 100644 docs/source/images/index_recalls.png create mode 100644 docs/source/images/recall_buckets.png diff --git a/docs/source/comparing_indexes.rst b/docs/source/comparing_indexes.rst index 10906c7da..62b362b1a 100644 --- a/docs/source/comparing_indexes.rst +++ b/docs/source/comparing_indexes.rst @@ -4,7 +4,7 @@ Comparing performance of vector indexes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -This document provides a brief overview methodology for comparing vector search indexes and models. For guidance on how to choose and configure an index type, please refer to this guide. +This document provides a brief overview methodology for comparing vector search indexes and models. For guidance on how to choose and configure an index type, please refer to :doc:`this ` guide. Unlike traditional database indexes, which will generally return correct results even without performance tuning, vector search indexes are more closely related to ML models and they can return absolutely garbage results if they have not been tuned. @@ -18,6 +18,7 @@ Recall is a measure of model quality. Imagine for a particular vector, we know t Parameter settings dictate the quality of an index. The graph below shows eight indexes from the same data but with different tuning parameters. Generally speaking, the indexes with higher average recall took longer to build. Which index is fair to report? +.. image:: images/index_recalls.png How do I compare models or indexing algorithms? @@ -36,8 +37,14 @@ We suggest averaging performance within a range of recall. For general guidance, #. 95% - 99% #. >99% +.. 
image:: images/recall_buckets.png + + This allows us to say things like “okay at 95% recall level, model A can be built 3x faster than model B, but model B has 2x lower latency than model A” +.. image:: images/build_benchmarks.png + + Another important detail is that we compare these models against their best-case search performance within each recall window. This means that we aim to find models that not only have great recall quality but also have either the highest throughput or lowest latency within the window of interest. These best-cases are most often computed by doing a parameter sweep in a grid search (or other types of search optimizers) and looking at the best cases for each level of recall. The resulting data points will construct a curve known as a Pareto optimum. Please note that this process is specifically for showing best-case across recall and throughput/latency, but when we care about finding the parameters that yield the best recall and search performance, we are essentially performing a hyperparameter optimization, which is common in machine learning. @@ -46,8 +53,8 @@ The resulting data points will construct a curve known as a Pareto optimum. Plea How do I do this on large vector databases? =========================================== -It turns out that most vector databases, like Milvus for example, make many smaller vector search indexing models for a single “index”, and the distribution of the vectors across the smaller index models are assumed to be completely uniform. This means we can use subsampling to our benefit, and tune on smaller subsamples of the overall dataset. +It turns out that most vector databases, like Milvus for example, make many smaller vector search indexing models for a single “index”, and the distribution of the vectors across the smaller index models are assumed to be completely uniform. This means we can use subsampling to our benefit, and tune on smaller sub-samples of the overall dataset. Please note, however, that there are often caps on the size of each of these smaller indexes, and that needs to be taken into consideration when choosing the size of the sub sample to tune. -Please see the guide I wrote previously here for more information on the steps one would take to do this subsampling and tuning process. \ No newline at end of file +Please see :doc:`this guide ` for more information on the steps one would take to do this subsampling and tuning process. \ No newline at end of file diff --git a/docs/source/cpp_tutorial.rst b/docs/source/cpp_tutorial.rst index 4643253e6..83d2e3335 100644 --- a/docs/source/cpp_tutorial.rst +++ b/docs/source/cpp_tutorial.rst @@ -33,7 +33,7 @@ cuVS has several important algorithms for performing vector search on the GPU an This tutorial assumes cuVS has been installed and/or added to your build so that you are able to compile and run RAFT code. If not done already, please follow the [build and install instructions](build.md) and consider taking a look at the [example c++ template project](https://github.com/rapidsai/raft/tree/HEAD/cpp/template) for ready-to-go examples that you can immediately build and start playing with. Also take a look at RAFT's library of [reproducible vector search benchmarks](raft_ann_benchmarks.md) to run benchmarks that compare cuVS against other state-of-the-art nearest neighbors algorithms at scale. 
diff --git a/docs/source/cpp_tutorial.rst b/docs/source/cpp_tutorial.rst
index 4643253e6..83d2e3335 100644
--- a/docs/source/cpp_tutorial.rst
+++ b/docs/source/cpp_tutorial.rst
@@ -33,7 +33,7 @@ cuVS has several important algorithms for performing vector search on the GPU an

 This tutorial assumes cuVS has been installed and/or added to your build so that you are able to compile and run RAFT code. If not done already, please follow the [build and install instructions](build.md) and consider taking a look at the [example c++ template project](https://github.com/rapidsai/raft/tree/HEAD/cpp/template) for ready-to-go examples that you can immediately build and start playing with. Also take a look at RAFT's library of [reproducible vector search benchmarks](raft_ann_benchmarks.md) to run benchmarks that compare cuVS against other state-of-the-art nearest neighbors algorithms at scale.

-For more information about the various APIs demonstrated in this tutorial, along with comprehensive usage examples of all the APIs offered by RAFT, please refer to the [cuVS C++ API Documentation](https://docs.rapids.ai/api/cuvs/nightly/cpp_api/).
+For more information about the various APIs demonstrated in this tutorial, along with comprehensive usage examples of all the APIs offered by RAFT, please refer to the `cuVS C++ API Documentation <https://docs.rapids.ai/api/cuvs/nightly/cpp_api/>`_.

 Step 1: Starting off with cuVS
 ==============================
diff --git a/docs/source/getting_started.rst b/docs/source/getting_started.rst
index 2d26cd0eb..902f2ca81 100644
--- a/docs/source/getting_started.rst
+++ b/docs/source/getting_started.rst
@@ -38,18 +38,6 @@ Getting Started

 * `Get involved`_

-Welcome to cuVS, the premier library for GPU-accelerated vector search and clustering! cuVS provides several core building blocks for constructing new algorithms, as well as end-to-end vector search and clustering algorithms for use either standalone or through a growing list of :doc:`integrations `.
-
-There are several benefits to using cuVS and GPUs for vector search, including
-
-#. Fast index build
-#. Latency critical and high throughput search
-#. Parameter tuning
-#. Cost savings
-#. Interoperability (build on GPU, deploy on CPU)
-#. Multiple language support
-#. Building blocks for composing new or accelerating existing algorithms
-

 New to vector search?
 =====================

diff --git a/docs/source/images/build_benchmarks.png b/docs/source/images/build_benchmarks.png
new file mode 100644
index 0000000000000000000000000000000000000000..e9596b4894e1ce5211bd35407ab3a8e7667e2f9a
GIT binary patch
literal 43332
[binary PNG data omitted]
ztewE8S~N)au{*{peM9hhW&eLex)1~b^n!sB{s(~{wFF5$y?DpP4(ML{RRZ5a^-S*O z%`K`icZRL=Aq0S*)S=l40|Wc@A~f_G5C^OVCV{sF?SeQWvx%TI!qr05%;&77aGEB8 z_ksXw4pbl}`$w4^2yt(u{-Ca~_TCoStWY%v;;B^6e0lv0^x{w%aD^ge`x_rT6QOeh z4g-7b@3);;HiSW7*K5SeO&OVeMC1msUB*ccg-xiv5w0iGYc1~DSYwlMlnf*OO=MgM zMIf>|oj2E!79!2SX97<^YzwJzd`ikbY^JM;ARTo)Jv=mo%w@1)bFIN;oM%rXUnHV8 zgmIv+(bfTZ%PTM+{Aa;pa=o$wAp>_{BC87Cf%uA$n6Ury#=!}xGJ-KNO$~fBJ3BjJ zdR(gMV*{1DmN`rasz>l1(9Ho&hlB`%NJs%Z_8lh@L_*WqtDIOD1mI{*`Pxda4bX*P zO28dS!Wuvc5rn~~PZz+0ph|P*gdfx=|8=r+wn7uF>8?-%AfR?~ax%y>(1S-3n{I&m z#lH!r7Si*;^u<^h+|gk?mHDvWDhONy6oDq5dyGoA=%mwNA!rjX_(Bt7oWgN+ zfV<%BAwUhTm{4T`TLttA(?F_SGPNkLu3q2RcmRoSR?a?T#+pz9LZ$`f8?$>g$hUyN z5yQvE$6;c{ywcj~FNM!FkIy?L-}8Tv30IROp9tA{LM;Yc?!ULRRKnj90@c_6YJd#r zf8=lh3+n(a2-OP&W9w;>iHuRWPLrO3E_PGI3$O?aA}X*8{LZ>q;z&GM5mBim*c4(1 z0fAlsF4djaAMQJTK^!SRp~m*x=>>aIcUxN(m8{SuHMK73CV5;#yl(nQH zbv{H(9GfnQ9B<+!gtP#Gh30B`2n%Xz5(CXgXz4%4x)0##O^EqITr1sjSO$nb57J)h zsf!7PtAF+;Lr3AcmlyC{KUBLG!t4KhDnL!)p0ROKYHH!2l@=<@gnMy<*Pjk`UJm6w zf1XIY!PN>_jqHbw0S7?wdt_wf)%*A4H5n7@|13_lSvFlL`M1`9uSM9LIK7KeIP$oB zx_GiPRl}fUzW?|!?{cmHB{K*i70iFQkIl%$bAW;un*Dlu;Y1An2+TXu48juvuXxZo z(WJK2gz*^st1|UF0f9z00IUS87qU$-pbj@r$d|wqy?@U-PtulwqW+{N%mnsA$d2@E zuJ^z&WKbRfok383)zuPPz(XMSGoKh8-^{6cfja;PM3x+)e z%*DMs>~#yCt7i%almOXD=*E|cZqA9y-@1jT#8Q-LY+91n&_MMu0US!GIz})QQ(&<0 z5y+RC%~(`L^T2!|t*-|vG@T*|1=>M{45x_8c8_v>iiw5A3!Y|nBC`Q>&visVBp4Gt zg)u-M$8tLA3LIDY&3|nnR(vESC!y?E(K zDJdIeJ01~18>YQf>AZFg@BKGD4U9qvn8!3$)g-Dc7*#6q;AQGeI!mpOq+COtsUEQ~`f4TLQ69gX#K9Kl9Xc2>V zLQXJU%hQ0bU<9>-q{QexLSq091Y6S-9oA<<3r$ENW$-dAObg`!*b7`E*hnN=0G(g4 zumNzCa6lJNPKP-U&8ootudS^`Bny*R$0q$z=!g5M3S|8K1TvS^>U1VFJCV{8BA*LB z4aBwr@SX4x_xD(!WCwebb`eXRE$hb8Tk09Ycep7^b7a{jDDM`shL@U z(OoGZYtYDGX7MsmN;nlP!6htQ2Q>ZL>%A3&#BvuHS+IJ9k~RW4A!-9mgLH{My$wv((t%^%D$%n&=|)D0-HFN_V#4q zbtejt&0q`&=P&XR@^^xd6IX8{kwn@AbPazi4KIl$+#(?`BLpjSO`zi48q8l=SxGz) zu2}x$Cod?+$y$|G@=AjV_$~bf>$+QW?$aUb!?IQceXx*#h~+6X9i5#fbjR*ThJ>fg zB1&Apb<6)t0jr}4kX23&-7SI520x?lg}3Xgd8+L8tM1Sg!#ATjdK5$xzHl|5^n^}1 zSY=clA*R~Ka}&xUh}B<4ZF#^V;e>EwQjgDy#qKATG@wReO(&^%;85E=cqVSF;>#e* zjPp|LV9h5l&{IZG>NHyHS|JZ5fVnjwXuK z5LHB}aANc&pG>LMg(y1PhChXW`bYiC{)|gEtrg*ah;7^d!h=#RtnnV6zXegz!>(uI+pdCbHWuuaH($8pv;a63b;>@0_kaKMc5BkGM-w9ods|&MBQ4F<4YNGsRW}3aGe`aE(!#u17Anb zXsBA~puzz-OVsdvP}#y6fj;P)Z_yaOCb~iiPZY6E`S*z_Ar$N1fraS=y(PNdB@s68=TiF zojHYsya8QNl$mmzvIiWE!X7}NxMgBP46%&z*Mm`B4tn~QQSX!V^s`5|gUiETC9(;al&w$-@t)5|QKEatj0Z$AsEeM}?nTrVwD3{X>b5SRp@9LB zDd3F*xM7_eGJ6oGkH=loQMv+M`>(WhOoIWn333Erk00tiHZX1wp7@;N@p))&^UWzz zBX{6vs8o}GkU6!06|p_S$7q|&){)U5{FPGO4wq^ zzL%Gmk?fcBrm1D|_+Ou{W=RE8jM^PA&=aEYN2JnL7X+YMZcrcDHXw`j1ldOq*`Oh6 z*&CM}^lTR)UH53Zm=x9hU+B3&e;}cp+cX;Y=8Y>>KOx)_NEoc(mZ0F^Tk`Uy#3e*o znovE(R@6Sl!j(hd-a(VYGQrg$j$-dAK|?=Y7^9-}k1#Cz`2(65~p%u0ZLIw)~Y;Z3Q>*E@U;NChrRsWWd17p^nV&|4~QLi}^Y;Eyv zi9JFn5fS&Ta5|K8IQ8XRqigh3J`DpT^7O z2BAuv5Xxf+nkYLh=ztHvT|^ZcSp>>ikfFt%Ky8rDU|-Q)0-6u!OkRo>c4Dy-dT(Gb zU_-#}5&X$NJ@_45bhsWk7%cOPO-`;(Svi7c(p7@ULamW#KiWABYc799SlR56G}dvk z(jOmANXGbzj-4~2TWp8?^_zCIrfSgsZ{(p`|35hL{C}heetlDkPV}-(mge{t{O86M L8S#|Mnos^OLj>tJ literal 0 HcmV?d00001 diff --git a/docs/source/images/index_recalls.png b/docs/source/images/index_recalls.png new file mode 100644 index 0000000000000000000000000000000000000000..5e1bbda55a2b85e670c9e98ab294a2616c01de37 GIT binary patch literal 148929 zcmeFYRaBc@)G%0u7AeKuihFT~BE_Y+LveT4lv1F0aknDDDFkfWF{^?bY0 zdn%v3a+j3XeDm}Rcw_PNsZH$hS=U3u*~-J))XnmlwUe`>C7Zjso28|byN$EQ@ykxJ zr%7!8CP}$jntIqeJ5g!cI$A!{^t7bn;-S*8bfMz<_*D5QD8$Vx#LY(~p-lBzTJy(@ z^V2b?p2LwEL(eMPA_zwKBlLSC|S`bYrc5P``+)k9bHZMkpbnrY0ok{6X%xy^;2l9|!d6j+*GrtJl@dw{E#B`scy!SAwlsxHBT4%;F^V 
zH`G7gOa3c9f4cT2{Pce|dy$IYRsOFLng753AB6tD3WxraC(PepCJCof!T{lY!m(>@ z`{fs!4!0jU#FxqO0pGppm8g2kKWwa!N&PB%gMNnjNUQ?ZNa_8=x7#Q2Y9SN>A!H)rpi{n5sZ?mFFii?L>0@m z)J1t4XDie?3(7Q^*I2?Lawbx{t@y zD-sO3-ax49UA(XBODP?5x!}EoY3q zLc|vHsH$8XFNK0w4cjS;9y&uKeo?q*V8Q$ny$%;9Is*M=q60E%2+beQ0(g{r_9$f* z*|XnXoKXp~#g|9yHWq29pvv-bY@7}}d@k3n+*x#f!{U{Fi+pC-EGJfSxD;|tr$&RKJ~EvGR8OrA?U2d3{c9-=p={%1 zMiW)#2PcrisO#1WH-n%|`;}bM8INztTlA|H?Xl?d&axD(7s;|iBz7l2Kc8SkZj-?cUi|252uCf7oWEwzuDW11XMP5^P+RAPrdZ^4wn$xnW zbWA2H)ovP)B!r)u0#LR(sj(F4XRc*mdG65l^rrvGo{|i zG|!Gw6tP-iO$V*&dS1%lZe{WoH=cd;Ei-lo0F#ERt~k;w6z+VPmM{6ByO&^#D-9a) zTY1ZsHR3JWTrDJNs)lhNxWt>_K>aEcJugX}J$wP=6Uhv>*Pw*^pjUFsQ(Y>eOz|2jv;76~IUy*J=XChQ+OnQ@dRYO7iBVm4-iYuoC0mtMi6SjwFX_1)ab|3{!Etkti_s*fFSq_3eeQfS3(`}60e z=?y#_mg_azG7O6YQcJ+R=_g9z`^#`NAXI*Gs4~E1OcFWvvczC47YZ~l!Npf{QsCJL z6CVnX^7?whq%v@nU7{z35qH4a;A}WF^U~k-`O`Pin>AL@fT_O{ozZcYL*8J=G3b{4uuEd)$_~IwK@`RjC~C7nDjqQF9(N zqPJMZZqif{sc|5;X}gB%ObB;RVA}QkE9P*-vsJHF7ojWxqbf=N{iNa0p50zJSlIsJ z;#Sr7@&;G6u3F{iXc_=!3;8;{ zfN41DF%j@d6&&v8emm<-bh~iZn~B_=+os;rWDFh4?kU=6)$7=@)(*U8O0=uD+=GsC z-VdC!FVDHVHw9c#<&NHF*r$8_WeL>~y0|TC11tZXuWaMm#3!JPEwX^O)Y(o)zX6~V z%M;Gmn!XfH0rY=q?B@s3t5J`meG@%o#tp|>Z~xr-=XL7lnyj!|zQr5Cp_&SV`h=I^ zd>`F5_oah`Q*SK;1v@Tf!!XH**DM&c>>7)u%7uIuS?Am*1v}2I`5tR?FIPq^yfWF_ zCP43m(w&+AQHm>sb_dTpDJMro8c2v(>zUAG+>Mif%BddoHOcyOn~zezJ6QQ#RY0CY z!7pRm3L;B7TFS)|BT{4npkHN?iHs1$vgVE)z*|4Q(%EcR?bw=_(fX4iI=p7gSW}{~(TFPRHU?CU>%9%Qo>q$eygga5{ZTASCin*c(%k+Xfd zi+(IH!X?TUb4E^K+DDkkNWLB-0i1%vLX2-z7{c|s@c$ImrcQ~0l;@!2D7yNP;%HzL!Em=;&E&bV5?i75QYol-+? zh6jiq%>Y(Tx4~Fjo~uae?2qf4n+<&<291;}Cyd3A4B6}cZ;Xx3^XGk*^32KM3HZZJ zckl4Dpj8yUCP{fmuBu%_CJY(Qm1nLj0@&x8+&{%+?n#s~HXQ=9k%=MEGrdgm+OifT#1(@cvp*P?S8C5)FPs8uk85`&EWaNp~ls|sn$E_5Dl>raq*5n ze&OSFmgXYcmGJdx0j^rNt^qH1#R+7on#fWuSN}etFVU!`<8tC!;LJyt$pAhF7&t1f zp;r-b3@yQw;qK@VOB22L76zO~2LL+7+r}8mI34Y^EFj-|9OeQu=${b0SGbh5+@3?T zjT=$296M95m`8)&Zj9R0`m1qlnTbY@g9v{yf{h{;o*)_yM+Jp}G*D`Z4Uh~eAvypI z+i8&SIRkitLAw$(2GaHo^K z-8%slpeVTq3MH?zXRl37lSTKDXn-Um7RN|mmP5;2^|UFOM0PITzAZEy8ecr)^IsU> z^t+Liu*) zS9d9|@#%(iDP4@CX2HRSLhod^PsD;wEgxz@qn^t?wR+Yq4n zRmAE@gRj!v<&}SH!dODNN3V&9vY9r!wmBN`o$Q8NdI_EuHnj3My<4G2Wf_g^jFmv? 
z9M%6~^)>dE8U_b>mWmEqxdL<-lYjixDilXU3` zw2z^RJ8N{W(lTVyhLevFh=?kOq9)RFTL#{93Xc8C&Iu)Ly1r40Psx&N84e~Omf%#~ zaAe|9R+ZCzTZVED$p%LbfiB@{uNLadKLlVzR@`ZCM zuAOxDY`EKcXSxJWp70&WY2P@yZ+POj>*4*T<(}t8RC;OKRX=S*ZuHi{gGsuV3+fi6 z825`vbn&78=~!kDBiy=q|HI8I?R9v>vD#rHn?_P5i}e*J;xZLs(I++&*!fe%lLXge zSk!2KezOT*UgzrkI28-;p)TofC7v;?*`j4xfD78>!G;@3DG}TqYUE1VNE215Xw3=l zaTW+*&$u{nsO9D!su6Z=)q?Hy-1^^<{>Ne7zm^|oI+BC(95vf3LJKMuWjqj zR*^1hBfDm!_*@UhsmDx|*Z0zC0=$n{3DuWaZ)|Au8y2sZ+i;bEt)~g`pv+jq)Mo060%l}wWi%(FA%rW5v z8&o(9%dLXzrSTfCxrMl(f+U2>Ree89bg5A4UYzPmD(pEndI}v1puhS>B4LXCgCr(P z#4U-%R!M)lc$#7Vbiy0aQs)o~{t%vK^VKlzu_ULKtXl8r`)25yEeZ_SU*ok=3X+1;e_#XFh@fVRaO@DI`*2 z&?+?ZBI*qj4o-sn=n#{OveRGc1_#kwLV~eR&0*v=+V!-AkOR}pTDXIF5X#Cp7f+Q| z@XE|LH%kVP#h5>a&nC9mt*O0n97rX6$bpXWH$4QCSmjdvIiod9o8ysl^iw*0Ao0WB z9$m8eiZs#S>RE~r@k87%84Aggav% z?RDRjilv*}S*?HGrFB5ollbWD3bAU>p=X`~!z^>hkCU$jeE7nqqLsK366fo0g=M1s zqS8G0ll47{jYeHONQgu$lj6!E!c`z+mboBP2$MP#Dz6w}zOv99opHg$H1@svPrc7; zKhNWzdbxcE5zbux2q&CG?!iI1t)=)}U)RAmSH?~4vlrqUW}_7m(T_K%?8Y$++}zns zJLXC`0ts>2lzl)=-j+JY?+<=tMzxxF1R*&rce7`g-=}-&fWOk?!SFqGw}4Z|(8bDR38UQ=I593UsR`x4unM6>wt&w8G37O7qN(hZK+S?BhQLh@@-L~WnQMI z6Cyld>MBO^R5aX)b#*@PkhR|l{p;ZSG1g~Wv0HPlLuQiyNh3fcMm7E;em;K2`4K6E z)8-=FAzd_uKvBGRU@Kw_{(*@)wvH9GkEOwSowSGPD6{cboYh(J=K=Pl?D~uX)W&g; z5ltk*C%>7Tmabo;*p}3fBv22}UE^b3y3Mrz$CoqJUuX72F?OM`-Y<|Jcd{%BY2%a1 zdRB%rd+X{t|9CwlvF!I*(@>fp6@Fr+$2yqb?PZ`f|C99fW=@``T$@YW(`P#DzFhvT z*osTMEe(Y=Y}1U$lSFTKsL;H?KCa5)#?`i)RjKc1G)8ycN+16*ra5v+{(@L2egfjr zeHPBbzH76LFf9wJ|Cv1ySKu24J72np$TT?Nx75C$P+JB;>iCNaL-17KuT+yDvP`CG>=!INR+Dr27sL__(sOlr@vf%6DoDn3;#mqmGb1$y3P6}cf7 zuB!e7T>nrEh8rZI4kf`q3^tj_sRD1R$B9L4XOBHQ(s`2^{2ICyu~IpVH~+$U6p||Y z&a0Kb_JONy2=M)&ilu5T#{6Or2QU4+9mDi;yNjX-2i0X3t^vtdz-Sug)vIY2$8LDq z$u1XtoKH%Li;3B94mVDr*NK{Zw&|jLrXhM=3&xYHj9h#0Vcw=(c~RPLq(-xO=PSR^2|SH#}N1{ z*?*J=;PBpPaaQ_RW#exjC`+{|vd7oE-B}`e4_c(-sQPo62JM|x?Nuy@W*q2~ABcAO_gZ%5L(leN|Q&R3o*JO9=@ zPQ5W516~|2;3O(+-y_fsn=?Mo+p+n=N8{2ETj)P-(G2u=RJAxuuy=rAO>>p< zee}GpPh-q*x}urn2dDPDe-dZbB%D$tDaJ!nW4r^E$Cm+G8s=H$-NO(M^sOt^3O^MHZ_*lnyajz&C&M zXPjJ~_F3>Oxpt~GLM8NNnA6jncQ8>Z#DeBVWXR@;Y3Ym1&aQe^_KPc}lRhvAXD8KR z7&@I?oORDyF*Pylkuz~YapaXDeP3(USi81k4JMni-%UI?tM4dVwK>3aM<`a^ zekCQz#p2%Izx++fTPPnR$YBGGH;j=CvpvsrvV5<^=0a?*Vn^_S=6x?)#^R zi>t{{jPTHK2x{$?$2+NhmF;<_7$-C(Je@1{0_$AhTu^s^n{qiqXJb%2d$v+yd-a{x zLxRx%#Ian(;{~(VJIv0|^A*w)0vc?@H2T}d>fN!RRAqQVY^iDPvNwJ+WLG(=`Bp&K z6qC;VF?-ejXv^24PX;-%uP0nKCotR6-uutixE0LT(!KjUBi3zNpX;ujGYP8!$@UrV zKLM}~hqHEarHVZYl(C>c3a?a(!o0RPvVXx9IzYcYEk30Wj|Fl|6Pw%bxj>`cnC;k$!vb4 zkhzpxsI{9Gstd})a&8w7R~!<&pEoNqT!3(6H53j*on4slzc_iZrO7}SF4K4b!hnl3 z_Cr01yR8^v0WWs>tG`fMBY{sVsF^VB@V{d9d;F%m9RfIz!{5&5VHW{gp6o@)Auv+P z0%-Vk9fWm$b&$Nd)%nS3eeA$;vMF^3Rzy5qrd6zv%v|RL4PGl)F&mCo1lXrXse2tz zFYuMC=58}2tT){~`7Fnw;5er_Quw7L#PU4sgy58fx-4z6!wm<&N zEG(b+w?KB^V@g9Ew)d$HeLLa>GQ3k!N#QCNEMa}p#le{N31D&|^z5^h0vRwich$}f z0Y>S6V9C zA6;f0M0Mg?^?+A!u;p@AX+RsCD`tQ8Z{h5-zWI=0-#S$tScX`fyN=c$Ab`$b?&eW6tHI>4Oi5%4jsvhDU^ z_QF>8!(-YooR$}Xfonq1YpvK}c380N+vcft#4C3Gu5x5C7xI$F!|w)aRNUFa&l-6x zo*Spm@6-V}#oN24klkjG;Y1XLb0V*}lZx|acPCD^kZ$lZahN_os*xzl(rB zFQfPa2ZIwiJxXvC&WZ{gmV(|Z4lUGGQAVf%3TQ+a4w?679J_VPf^Z&1009XHCDVhBpCXH?%gx(f7 z`|3}Xp9rTn$wi6{beLSbv(1L8*1WOfuT=h%y6dOYv3|cse($%++Q%|eAipdpQe~1J z0LRPZJM72PjGu9|Gk-u9O9N+DW>O@mwRYx<&{sH!AxAyiMB`=587MWtYmn_M=Z7-h z-$C$6pZgiBF^4PXmEMSQ!70bdu-bgs9QXQsiz|Bi%tw7b`=6<@`JbDNjf6QFu50$Y z^lCjjrYcr2+V8w4w{<>ARdWyt?p2G+WvirK=PcpqZWS7vLl;Jy?t5%ub=bx-O#uOf z266opbz126hVlnXEg^FlgwL^)Z5Q%Lv}pv$cgf~J$%)$uHM%`E3$=XD#TphPlQPr0 z;>rC5$w|Y<@IDnS@J6tbN2}?a#JDU#1)H2H2UJWXfgrUOrPp>daepymUjg~qBtI;8|# 
zfl&Xw526n^{tZVALNuft$1ZmKrSd~&jkDNAgUXTC`!Kd%<6(NLCIMa&o)1%t=U250 z+yWimSh8b%tNqgHXw#)e1*|eXF}0;j9()eEaZrYYnM<(y>CL+(}TxIKt5 zaiWk_TYS1{)ZFeqN;C<`si|z}_C;{ncZgh&*lvE?`RM@p;9(1G`b0gwN-^d-O(|ag zOQ+<050;`OJBVs-iF$_&+^C@Z_EJgG7~KL`cAe{7%1V$)a=A<5IbbKqJvBv*#Z*2m z8wfg7&zBF%7FWaPU!3|A2T*7CHqNoP>L`wuN!+`h5lK#P*I2*M`&1sGqZ?iYNbchTk;ub)#OduowS;iZ?>U5!HSt~E1G1! zAO`K^R9z-fK3$0bzJXgv(g2Vr7fLNz0(==@VF=P1Wxg#(HO^97B?g+ns`Qhv)Ve)e zOQ1R$Nl?$b>{@8TRFUEYGjFGeg5cr2-NyBCX&H4ecoC|Ru0(HH<5T`KQh9p7{<_p~ zAjZ&ejI90+m|kWkomM+P0=D2{j+!nj<3Xe@_=xB&4^CFDkCCaH`_+AU4l{llWlEgT zK-|14Z-|T0oXyE1^6~Bl=FoNa_m1MBsIv>?%mY7=sd|~IQm8$OcxXSdOEC;0D^C86<`T4_X7H7_uzk9yo zKbE~~v3!0~Irk%Fn&zh%C`8J-dWmRlSHZkzGTCBY(st@ke+)&d|5w$RwMtIhajl?l zkPef-XGP*?9 zbI8K?`<0FKDi8szAuBf+5VXp4PW$I9I_%rSs>WCYJMQ|9}AdB6|E({d`IYzyvV>Dr*r=AmXpJEI@YW^ z;!5T_kkb+Et(ed+&veV1uVdb<1_E(FVXf^h2zt7ZKeoziMna%B`*yrBa-LwjZ*}4r z2YXm>L1i!ibmAUl>Xx4J|ZF^6<9>Nl22EUfWf>>wE*A3WaeHy%^7b0)69xIn3t zbc$hXjzjx?UW?p%-0oyA!;~diX0zle14qHhEH_u>r&DiLS1%bT)(W1YwXM&K1Q5fL zIRIB$9nlXyfnZkcCr7%s$-d%VX>uEn#VbNwZV9Hz0<6;|TrAGZZ4Hr=-trNXmMr|J zt#b#644L5rXZ9V&n#YpzdFb-H*s8n8p{JCNkxXvg1~)pBQ6gnxwg?UxG054nbu|zK zvb$Qm0QiyJ4iYFm?xAWwB_NdqW9prrwi+sSxT^ZwVQ8!bCDSpe6JIRcu^QAzCJ%RR z?N({&Jt7U-M7H(T5!Sx^1}GLl*#HyTEUEY`<6Mo!x~-^}j1 zxLp3V#WC!N&fv>#fk!zx8Lan71#koKAGCxx@6NAFDZWZoOPmupo^W?`l_vriYulWQ zme|2OfCMc17^=UJ7fmPbMrJ+c=JJ!NYpukIw(qy1-jx{lgB!>L1^n*ctEB$T#)bUm z5a0Tez;O(@{rUW`+P^Yb^P*UQ?Kyu@t z;3P~AW!-dU9%txxK4lV(QbVH77zLF`!Tt-?Ei>ko9OtdkJat#`3n9!R{VzZ_I+W|9 z(TaJK$x=&CFv8YB_m+G>9hrVIN{C@+^t`}W_M9GbFs8MCr`dka>z-@vvPbvv(u40H z4!DmFP2(%(53-^miKLIQTel2|#DcB8# z^iN5SjJ@!HZ7(?y0%OT7j$!SHTq#s6H#W9xd+Teiqk8$QrmIC`A`(pfI_fE%fnah} z`DR$TkHd8}nNXtqK)4X@-km>);#lc;O`^gj2vc}HS zN22~iys7I$#)offh$eVGL2=2{(u0r`y#pQ_-U*=aYxj%V#NyFRY%}g(2?NreH zyF>xZlX+J7W6sN6HRZ(L!c?ULVLqke6cMOCS2aQRb>d+vs_o#AOP0LOD2`$9;rjhh z&+CP=)$W(l477=Uml(OKB#ui?8T*O&2il1D|-~ZB_Oy`*)Z>KOXM*(BTeTp~^d>bx8Y6|hmKJnzO z5Dl(c?8`FgU-20VPlp{EV%ltC<*{Ez>zR&nSn_*6dY~L}Qrnxm08X1_a9_QD%>{C+ zK}p#@bk;r@MRmp@_SvSy(DcKRg&1~02_$q+D$$?YzDlhe5Y7ca9*~_M9q*|sP=tZ8 zM{7VzG0U4S7?1rz$|*?O)SKxG`#OYttGCEukJm{e?X`M!r~A;5tq(y|q4`~Z@6mjJ zFtkPhTApwrtW zHB6AnTZ-Lj+|l?U$UYz8+&hLQB|X2X=-!LPHp7sJ+W6%0T`K;8=L+$Depteg;xCo9 z{27|DpQ>9Q#?snsXX-h4Upm)COb@4i`$mNm$8I@SVw)8mOD((@3y^jcGi-_!JjMWH z2>>>5NmN`pSWdTR60TPEZ3v%eu=kOB66%}Ftru|P)kCeXb#ruq6Q$F7> zbAuYKG&yx6ZEV_Ay~bUd6(W5vo%Ad|J5w>TV(crM_!XFQ#P1BQLMM>vm&Fe$eLwNu z`Lp}kKa*$b4AhuaJMNYcsSV(T5*|=jEQE+GHqsDmEF-Vh*LM3*qrA`Lkffqk;9V*@ z&WveyF|&(0XCL?|@%;_LAn6R*p8V}0YX_a7hP2{GYFj)1^d6zy&*>3>JvVXon9SF2 zI{D=nC^v}T$g%eu(LV&(fedbYgJdM10|zSYpEqZ}BU6wylq6$UB+Vfb%PG*21=$XH zWK`VzJoy=^S*D#xfaQ^AUcQyYhV3%Yb|9!%SV(WqHS^bsB>-WR-7Ou#rRD2)V?jZA zPaw}}E!c8QB+Os6xo9bVdb&u%WKc>L$H%H1a6qS?O+1o&CZcGC!RA9#!C~CjE zZ+KU443yQ`kyP-)QCI&0R=x)?y*uIcFgCtPSF(T?Cfa*{yQ~Hr|8td{KIT4U z7Yk#KSfOk0kD3rvCS6s^c5b7BEKc?L9_uo=tFM4wX;?xa4y=o#KG|XV+MsKB5|q#y z&ui?zq@ofr31Zi6_=84j%z&u(snCdGe!R)^k{*Zpbyp~%b>ncv*w>ACYm*}P#1csp ztM>4S7E7mBODeUlh44egjPzs~NqrIpUqTX$6$L0yv=zi)0mnhCRS4GS5VzkoJY=!(Qq3W*PmU@?5*>@6Q}OM0Pc ztcPx~0k%4CEmXyxnM@yM6TgR<&6MM2-5(21i(L0zQkN+ZhH+(A@Ck=kuG>3AF&> zOS%AMe$;-FjAA2*>ec=~&#Uwr?an==)#=+j>Ad(rhK-2fvGyQM*>pQcMh!QTeZp#> zcj?D>*%YV8_zg>~#&O^z2s$-7`-4yu8bNo|L3_)z z`)tm^*oP1&Sdh$X5~^_|7>9!$Pjo2mO2_xm;I231Dm}36c1KV4D3YU1YL$q(OCQr; z>HmJ$8@Q^~bLf?joF3yXr$=#M5Fgu~L&ZQ7!9C!`4NqQUUHFssT3o!pBzzygBPQ-i zeB`+hVnlcdkr8uN=(asG7aO0=8g)ok6|cb$ql~QwNDLEl zMEzkku^GkjnP2G!3%4cJG2VrvJMA5|xw(;=%S4A-g&_WzD1VQV{d0FAI0j_=9)R8; zyEm;nR#5Pxo`@c+3n9xEP=H~Ijn8RI@)j5uf9Mc?THJ}PV}&!ovE6kUYt}d{p`8-) zjca~^Mk7fXXFNqzRQ-$D}jPdPwia2I;C 
zW~#~-@1EFkcww*kyW%(|KOv<{y{a`lKRk$g17N=FY{LQYNqpE?khi7=p1t|c>!3QhdoE_uHg6GUtcG6R7)!zyeYM>&mzvcyW_S8}gH?qwqmutEoMNF0iwAP2}?q z#;sZ05d?cYWf7m9^Q4PfG~d;n4o4y!1QWln%glq+&CvS2Wco>NcCv?TD5v z7QXE}g6q6iR{b8y4xj7;}j@+I(Y-cy-K9%hpBPO`Fc`#MTkW5sHvRchRTDt>QJZ~lrzSsT-*3Shh znG|&u>!Yc-Ch$w|&drk|n=$g0f#|n1h@d;$EdIk-wcmdB#sRe~Z^QXcz=}P3ZI*Y+V zfG%IrQXy?GzRhr2nvK_eTN=L!$@OCALoZTi~6n96f;IOtgWJuc~J zH#s`d-IYLW;f_=0@Qv(!4>Ppfz;D8rv#x|8Ktk@Cg^?Xh0wJ+`j=z@^7=cb7#j936D~V8o-H;wCCCeJCEz29G)6#k%+4 z@#6{h!km?{VUAdfwb-8C{{BBl;i|s7F+&zxup85XPHB9SDfWTx*T-B=d#oy1;*h7y zHy{OaA*sh*+Ev(|G=a`bhD_@PkoaHmpNGP-PdS&(*PaaNH>aH&$1p}sy!a7CJGp@t z1)wc>>#+R({vIu$t=$pKkyqz;HVT<4+8kMAhX~Xg^H#hs9Ix0rY$i?aF_NPQP5bz# zv=%X~$%17B7GCu#t6m{4e9FM)y^msJ9(L7OFNBs^C)R>F+7k3@8TdGwhx5k{45}9R znW~@Mm1n1;EdKKum{UzUx!TlQA9-DeB;akvIU}3)cKs#K%YNjhK6z8MgMxy<1GBPv z4zRL)?v3f)@?CJHq1|xfaoLwlJ>DH&86&w-?I)+CTVaV_^_99dD3#>nsXx3!gFKtq zZtqukQg|^_c`C%wad%^*sI6T}s}VR*O2Zj}NNqMsw}a|DYFmalf}v)^ipP|B`d2|R zfu}M6eB%#>qxt~J=4A!CELfS4G`^17VXh`H@od3yV?>R45UPlYw5SBvu@ogeeYBFR z?c6zKJY-(CIs-dA}IJPYMG|RAi;}qsx?jNBT{XsP}-+~%92>$lu_#d2<8*6w~R%buq``IO9!vDxQcMY|Lo}s z=}!Bc*d|*-gx3a#G#68Xsz^SEF_%vl2LuKtSLasO`p#?4hZ#*rnn0Mt7b}z8%BSWE z_L4hdl5*p;5XJz?C&VW67FJ|%IVXa>U`g|iOM5$8f@O8yzx)szT8-JG3e9B%L1!(I z7uVVZnW<-&L;l4bFfp(dsEmFYrBoS}9p35eJKI~%0VA%EWXpLC6D(KdoW3xrF59TF z-D}(&ThgrGTjbaL_78nmPw49aFIs6?mNlrMH<)O8=@=_CF6O^O-dFBvcP7;8IXod( z6@j>A&YV`%vPW>#BQ@LHj8*p)L@F1iS@S>GHE5?MIt9%+)m!hPf;aPZL$Llv%``917v7z--wK0T3eC&Rw+v1-#}#6x zzgIG#)rBmk7idF#v|rz8YOdreAhNBgp5DUGM^7%`c@TU}`3|PKs!@(~e2K-YOw*)9 zY&YvHQC~~O8H2qS)4^oc4rJg9@DW7kz(d|K=i*{aQB60K8B8S5D^i+2|1onD+ zI&WGy@#gq{TKB5}jUdk*^xBhmvdBDwo=-VW#mZ<6ubOiPFGei)L;=Iv70xMC-ydbq zmX~wioUC%w$eqVX;=3m%Nv-t|@?z~Z?hi2@M=mX-mOytI2?%7STSiN?YItBd{4M@G z*`dXcB>j7Pqq_A$EUv_QNoDij-i?xc6|WDqs4kmg)WYC7x!{n<1%@ZA8&HUI=j}3T z70~s__^7vg*`5_c?R;J*!INE0CX4P=W*%CyqYWwT8XXj~)#+zKQ+u_FVdUEl1oVYw zQZjKA674d{AMv=3x=Fr7K9@4-S8K!R^bFG}nGIH>`A}ja*3`u#_IP54L5dVA%A92! 
zkJ>jyE4)7Mk;97o>w^5zZIN%fJ)mb;fZbWO2wscg6o{r*&@3Bxsl1O(u$lI?p?G4I}@}|WHw;MtbdKkskrr}BQyuSiDi^Zd}mamz9bG9Y$+x#nY_U6 zFodNklRm_?-@OBs2@#4vEF&)jxs}Yne%)!8$L)skrj$f*%9usZ(q5C8`<+iy^*_R25(oyjYs;$(!X(XtV;KwzuE*u?FRaDQ=T6NR z9`~dpn-5x6xGy);+fMq+HD?W?)4iC1Dx|&kfxk6VW2tSPWys{JF}PhrqkPPd+ynkt zh`u!fTJ*je6GaHz;^TXgnB_WEtu=3Ne{uUepEWIBo9q7HOM(cEp1=D*`}=0Z=KkjM zSlcSv2ox}?@p!I<_WdkCk}P{k>H9q3{w=q2U}z?#Z#D}9;0%Z5_dZ|8-K#{%1M|xL z%D_!rbvC;@?|(unqA;6E=)}fJAc>_+D_!mU&$r{nl>hxZ}M(Cf&fRPNtWH71kV@5u|Bk*M3;7B@srxBH7( z{)k3FN2OyaoOLoBQl!9BZ#kivQ!9OS^*>Jqoghn5wDnPj(Fu5V}$%gNUB`*FwbV_xAZL}_Al!{-UFdXhcXGB8WPTWPIzo2rslt2XM{JiQ#d}) z>jiGN2#WAv2*&=~ap%RZ$Nyv-Q~&rPL!t3sp21W97a5`Kf8+nmh@^Ox|ICSh4a-6( z(|?^$1@Zs>*Z*xgC)V}5pGVui6qS{g;r{$T-iN0Z+rA3vei_7(IhvpX#OkJaOC3ua zX1Xdc_PjGy1vq2{3jZ4S3Slseqyj(&{>ARYi8&83-E)j+m%9Af9MoLO~c&`raXMS?j@yu@Bc z71;RG&M*844L@$lRz?h*B3Z_$El9(86eWKiO|-d-c0P8RaBuz2b%7;4KLlL2u@0~J zCd*oh4rCl^;iX_~9z?OjH*INLqZ_P8>ys%@FQBR?GYQGG8YK8q^loE>9ACjFXiCaG z?w|^TGm<#bOP*mCdl%7gSEvfyeLHt(y!LN97@xW~en6S`^sJ)Qr=TjD(g^u_y}RL7 zEY;4m&)?m5J(N%n)J$Q7#f>Ng3EVAd5Pcz9Vae9i;Kf4j^0Q8dxes4k{kA7rjW1aQ zA8hBYH_xun$768X9B8t)ZJPC8LS$1VB@ZNE9%!zw+}#EFs*NNZ%lo^qX^3j}JJ))` z5(sOb4>lZZb_fFo#J5=KV1PIHmYiZfaO@!OS@Q!Md2a=XyT%A$%ap>;ms)r)<5c9> z#X|Avv+)x{{XM?ZDWgt!eQ3F@ij`2{(wF9+D~}I+Gj*FpLIDEC8g!kHqa41-0ydvd zRhBGIjcVVm%KD^B^8P>h_zni=D4qAm2EG6(w$&QfiKr!yb4S)?vyWf9Tu`n!HV+9Y zHoXBsX}1<$xvl?5&!-SKhSldb=;7+bq0FAt6}bl+t~sc*zBx8{90|*y3PHgHwt4?d z&AnmqUb>@IE;R%uEjh3#ngnJsm>5ZGIlc@6Q9~EH60h%L{V6g+^gjNi4)FuI&zCP< zm>`JrGif6hh+(8NN@ zkz{o4eBL!7^@--B1IWa0cN>Y-aDc9Nb`L(o9EgD)_cSgy`cd@U)gEPflh(EEEWQ>p zDM*bcy!C#au#xL-8Y_Ly8O6Edlw@~K_u;(%c%u5_3nvyu#)OaBL5Ul|rv)O{~1 zAzdQf-Q6LebP7m=G)PG^q_nisozgP&07FT~&_hT!Ln+cd{IBo(dq4g^%!m6tbDw+f zIrpr+_gZV8E0Bl>nf2Iw0$O70`&Q{UD@Pr8#I{?#{Ds0>JTOhQx&A}9#2d7{KAg3# ztonOe7jVRUeoK0n)FZd!f%HW9bo=G8tKkWEaPX~=T7!o;j z2Gn|oX{U$TK_hUU_VK^2rfLRg@1k)5*y<=dN2un&z4b>sWt^YP+MZbF4TsB*4#f2w*{`6T{zWPm!{=%p0 zdZzH}6#nY?c0mNwDSCcS|KR2Bd{;}!*()Q+*+>#Q;D{AtxFfdMCy#hgaG%;Ix?TvZ zsRtjx;<7%y=z3V`Pr2j*0yY}h=_kf5iMZgb(8d8X!sNtM!(0BGdu&aJam4ahxz(Oc zr-HnA82a=+@g_!--@fGg+5VQ>?6vJ&yFWHwQ4^jVLvPI=Q4raDDwq#!q9nbFakQ5` z@3QKV*&2z~56pRfN6MH9Vo^nUH4T;W;#^*OjeBkhxx$%d-<~|_K#yA=t$QokX~ERQ z#l4Y%T~6GlT&DAd!9d4KNT_MdDd>{g3{()A_C8(f$5%3h*|<=8zC7(z-@u4^iiTL! z$Q9!69ZR02zsDE8B-*6{P%9wKJB4|bIV;k z7v{8dP53U8|H|Mn)CjUInZUu(KLc=2G3dBG?rVjGd?7mT<+=<8%|SCx1_PB`Y^C`U zPjuj{tbgEarW3GQ_wK`^Ml8~#SKnI2=dJYy*NP-54OU6w1^V_Wsq)*9-l!Ux5l0%P*3DZFR!o{dYA@)dxzt72_VoTn z=Ql%yw?O0}1uOe{-)%{^-MQ7HpMPE37n!AyUI)uOq_*?fecEu7zdCu%4jTbYe45{K z75KeeBT&vdZMr}p$BG~K2RcZEj&$Kp{&12Lm3gs8MLo+@^pR9P?6}z2~%$Cp-xq}7rS4>e8d@G)e0;~-0WFWvR)=tYNw;%_J zWZ@3}4jX($=44A8t{d!M~ONv|MR#5^-9Jzw=5$QZ)F%I;lGK&kPASlbyj4lm(#jK_$KQ4zUdHe8ISh4gs5zQ-;n&k)oc_NpKEa(9S~9=iNQwx#TJDN~ z?M(95Enr3XMR(5`_?|*lbv&Lf8k;qZ57*SM$?rXYsTCX6yTcg$^K}Kyl2|0BS^WZ|9R$n6>)bB+j8|7Z53$U^I(BD>mzXO zASIE5e;%YXxhpq{sycfOS=5|dzhcrQxQywgO8Q>WWOONS)jNW>8Xc@&7`4fdDfnO5 zF1Jjz1+O2F0#isK3W*0hG9IwE9)lIjMNb08v&0i1tsAPoE=ZEw98bCqxoHS~?tic? 
zgUIuGFCJk(2r!{oUm_a7kKXY(btvOU%qg~9xh9^@^5jSAH)IAc zUx*wqJj;*_A||?<_^@f%7PSbN>+VlpJ@y~mZ{6%SnKRPkDAW1jHhhi+ONQkyBSnMR zxOb-o25STUY~4RJHBtJ^>a*gIe>axitqJzeD%#@S>4CFsZ5vfAE-YSpNW2a*3*V`c z>SyqHe;VQX_<6s0L|J(5o0HSh8`>;QL3?Fhe!2t&v$QLObgu_iRm*|Iynur}o4S{f zAc{SCp#h4O#_iKC$&IAT{FVywKP@MZbwiZh!N_|fwO-oU_S{JI*z3VCu5-3q{nHgH zNx!v~faQz{pG1ks0?u^$OWJ>n;e(Al^I0lS-ni97SC3bX%nPuFXjdGTof+)J>aMUD z?lS@kxWT%RJUzz>cx;Lq-WRXC?;LHvWgocYc?}m5PkYEKEOr3YqvAvI(;}TOc{b~D z9Ab}Qz@!UgKrRNSt5zx`P3HWXa)&Bm*0+OIKFaMBrxnqC&_=mDemoTlRmw+i$pgbMN0Z)MH!<2h4f*Q{>l zj4t)$-^AYX{m!5ZI3IrsnQK1sXtOCU69Ai=JpD=)VOdA|-^$1P&$4o0-~-(vN1>YV za6J=N#Vl4Pzc*<@#W!`4dh2dEZ^5QN7hWs1q@ph3y?#Ep9>+WHIL#ZsvP+Fe6Dc;y z{J3-{^ZsExRT@<&~B*96z z%87#gtl&^Q&F0&fZK|_8T+q{0KdsY(G^13RJ@sJviVJq&4o%p)7ifc+Lsu^)b!Lsa zAohqT)_7|o9JsYvY5^ia>a=qVv)Q<&d@`vxQh*&fkd7OMy z*mLczPkOE@KH8TS_|g&$;4$qTJ$`Ln5ImGI8X3%P-UvU<;B}+#IF-xoOjd=|T2I|M zRXNgj)zWMY#M2TU1teuxLQawBFncHU>EOQx3OI*Rg~!2shy=O9ucufBda|UL5f~A2 zGxPHtZNtw(o-8=MSH8_EC5Ts;5{w@hP+&sH!UZV@>2ftV*qppfS*R0}RZ8p~9QH5I z14CYaw&t&3bcPEK!(hcFC2tpmf!TpL8yOq7Nja;-G0?)EKSlZI=-76i4Sr8j>OhDY zfqUPtsk|fqN*z7~gFVxN`gV*s646p8KAW7JEco#QH>#JB@CDQU-IczMPI5h3NpLVz zvN9702X^jiYMX0`5tyeb7^Lo`_=9f69MxqV}A!jQBwu}YH-Yw2$IDm~{GDvq3SMD_> z=JehfssynIOYnV<;da#JN-fOB$oDK09kia=Y_DN6IRy&(!#CQ#TNQUWkAdo8E3h89XH|bg>WsA`W>~Jc%4Ky`b-JtP8UJSyd~152CaBHd%x;%Z-#CSYBzfm zTrJDQcFBl@Q-CFjmL`1<7lL_xZoJ3yZ`7IbI4x%>CeGH62HB_tb9lx`dpb3i5)#*T zzm=JnO}HW4hS>%up&+r~wqpY7fbSU=^TC+~6%9x6iXh~}MUK}(Dy2+4jpJ@tpFEEY ztXF>E(}l8pp%=^5N)Ss`^4owNC=KtAU%Ru0c&C6nmDje9Hi68*vmCeN|KRB!4otK_ zcDdC>8grznXkPa-w4(=9k!}vn_DECV5DsqfsCt7Z&E~F+iz>*!ryM^z*BKRdX8(@+ zJMPJxJRO(XFtALJeO&E#TD@H3^xY8}&lb=4(xli7ht>J6KCQp$yFE*DYttp~0 z`mNv+97IQ>ADDL%q24Zycy^ocNQuaIm+a^fei;~30bU${CCUs(=yXOkgI>nY&wlEe z@9eHf9<(?lKY?|QraeC3xCXaGPQ85lix8>&U$k{g`8OSNrD&B|LTDo3-L!0)($WR% zzP>H7!;~$y1hlCnF3yz&rJL^-DGHQ=KWpXG^7yF_Tm2VW1jcITaGV^5Kb@X$MgMY+ z8p0ePCV+t92f2Bpb3CT1Sy=`>TGC?cpl{nf@9w#Sm$6j5+Yld?t$S<=gh=x)|MoUT<$7 z-^0Vh1*n%}*$sE3h1R=w(>wEfnQEnhtI>xx3sdbmKid4cb!0z1vGBj5cyBtt^o+Os z1F7@ioS?dmJj%Uu`8wWV2&C?uJ&Z21w89CD-k9sChaeMWoc}iByaxrJlitW%LbHHq zU<$Rvz0C0BWWcqR(WQDNlc;#u^dgEGkG*Cr;VOOP8YQX{tX)!dwOzJ@mwEWnbaM7V z?3W`;I&1^9Dx0cT>cHONegC^WrfQwWClw@`;o)BkL*mFk zBQJux@OD@AO9s8f4=dkKB`E=u?|(jmW!a|Qemm5`Y1EQ)9DGHQvaf9P&f!*}cyePs z`vU_A6zfgG-9?JoSbBebons0r0(sARg0xy0vBV+&Z-$|H2>GpkY5XBh=EX$zc# z0}6XVM_%0TA4g$g3_QY_vAdQDLWFPa07n1`F4uJk<|mWIk7Yrxp>2PaA)izKC~7Jbwfcm+}{1(<#V4D-ffz_v^ zdsNmir8*bNd+~VskexW3w`kaDKSl>Uvb;P$SV>PZBUpA4P(^h1RbK&OKc2=$XdNF= zsq$V9-Gak%)Jl{y*(5dXCWJFfo214(Rszi_7o122c_$^tuV;RqNhVbg?|Tx?!+EPU z)?OF?mRK-J^JK;mJ$fjRWyn<70Od;L5;3c-mu9;hoD+}Y#V^m=2Si!`vBsdqg5>NF znLFo%P|D<}%qx|YjJHw&E6BSB*MDax&v#w|f+S&d__Vzl_>f)M9Nu_|+ge%v>GhpE zZDrnOWUod2!jEZ;Jcu(#d;l3{gc1k<@_$suV36ZRO$n+i3(DpL@zz@;Qcoeun^JM1sp3$4<5R}m z+I{U@Utv2b zO%eA~3hc0gPoio-Thkm#y>m8@w{c?DuW|oNE2f3-gI8UrZ_UtBIutxfw4h&ZPnVtDw*_ z2Xd4_X5g>CGinb*OOs#46XFi?Hd2nMi;0tHgAa@L#Zi{JF(O0A-&Kr9WFVk@McSSn zXAo5t6;{B$eBNxr?V|;Y50|5R8uEm~du}qTYU$8p9(Ibn?Bb(8=2u(UZ`@79L=sJV zidTCEzSTSAzR~O#RR1lW8$=qUr&{`Ysp-2oUgh+agf#ovI-b6ECbE0k5vf)4OsYci zmla{W!V6%?AP``D)pT@-AkIYb3QHUh3HO0oUMBZUg$Z-OOx0Lax>K;j93GKCd(Q~< zf#(k_-AS*Gz}Ba!s-@Rd{4ejl@Y{@DUAyW*+v5Jxe7&)GqpGTUmn3+{&TxM!dtv|O zUsgO!JbM_OUAlmre5+TcVn%O{JCl9aMT*}kx_*M1^ZBeR7zU~szF@MGm&6ry!T zm)j1xnMxd&rBPEWTT_pzQbgj~bwMJe&PEG6z+1IBxn)+s`4j7rF>;^_iOBWJuBfRg z1A`M57mcR(Ave|RgGE?$36w!m{Djf?iG$JomZAyr-02LdD%$4ent_2*LPA173QMz& zlcXr`N8};=5RTxe*|`~U1i5Wp%rNTHOiR9?gQm-1_V6&M!JTL<_tObxXJ`Md=fGZu zaMAkn7LnYC!TcEHUa-f~<9+DHvyowD3WE+yuI`xR4Cx?zs4f3aQk5@qh32d++tqS= 
zvXik=q``8zVN$RAg_Ma0vF5dV)!hVE7SpcW)I$jWRy~atAw;%8Y{TQ#-|67BaaWfG zjkm`E-Yy2iySh2~$R*`)^=H*iq_6Ttze?CWhCP}+NHg7#IHqy27z-v;Fx;{Qs%Q~N zWJ;EWE=rLHX4y{WEcI3C&AX}91eb`4yo_HV*q|(P9&%sKN!a1S9}UyMDO__i@VQ?Le!E^tqU&_e~+}D}IsjK6F)@ zeI#H0ARTYmx;`)=9|8JZ%;dDhO#IPU({zzQ?p$&pUS!pGYhyL~&-R;5YzkckVQs~o zT`FsLlC1=ZWLG5Y6)Oy@k?Y%LAL(Ql&oJ~Oky#)k3Akg<8(;!lmb6WB@i$;y(EIfWa{3lM{ehiLAK@I zi%2QUt-S_q81#74p|JcyexJq@Od7LQE`#UpCQz!O0yP}OIa!T4bv*Sig50w?!9!e6 zeG+{!pSlW@kr?Co(2o1auA{6mwJzEdpbAt$INr!M>6RShDLOhk|2{=Ne;V>^)s=l$ z>_*m{6Gl%G+awdtuig;&=8cDkhd)`}k4p<_7hligK#%0Uqt28FmY_|@JuNzWm`O#u z@W$jeQ2v+5HUbb|p{ru}1bg->eQioR7hN5mQgSSc`>(L?JUE0Z2~(t^rW6fB*gxmTR}`syN$vp%_@s$-oKg5mbLyky7tx1|TlF z9zaDnycT%8)h@I$R@%Auo%c%R?%2|d70ck4i=e(L)e&)uOvsjfv>;M0<^Qa}OWXqy zx7LG`iQ1As^7~R&sF9e8U_{81Qeg`9Ch+g4c#w%*Sl5GNEf&M&LU40CG2k-6C@^Z% z&<4k7Vs(K%kpMz{y*T>J@q$CNZY>P>UWUsTx551WNh=N7WLQ%P1$Kdtfo(rFvHK#b zx#jKttFLF0PAp14>%@rhkSg+9V!=P9d{_Ta{NgLJ&ysz+$`+vefS0Db27~m;C2%|t z$ZsOP(hLorwO@wvWvXpLUHq*^jz0$yn$81pjekP^3v~}(-eyR2x2@ zzQ_FoYx!dh92WeZ1@k_0bq0LQhP9eKv<*#@dFDYVUK;;?IN7!_mAGnFBoX$;5et25 z>3%qF?Od*f(HqMs7E;B@(KW={mUnQ%;ls0Ag|gb(TU=Y$TRM&dTfV|Qa6>n>koP3Qp$_)^%OnOHLHm)oJhJYXBPj zTezSTLnw_@fM$Ga)6t|^p{q4HUt12qYuNM>0V*y@7Y@K=_)Ay@JI3D1Up+M8>Tv^> zhwJn0@%tIU&?(l^lZE|H-oy>COzlgePF`P~l{{0KY`}>f@kJK)~S|tu>R{o|TiV1pN6Mt_3;$plspI1;gpI}_bnx7!EfeD}( zDI>mL{t3BPzN*Z-z9E>nKX=9?dwDGxc*cApd(pvJ(*ABg z04=Qhviw7kpv3Ea`sq>!$ON5sx2C?{bKU|e{IJS$@1R=rRFD=K(^!|cu1jy?Ksiv! zJOuEr9gke;nrE9X;4S?bJDr{=1x)byQpX+*Vr+yKRnUb zc11+AbkCer=h}w;{eqdR)B{$)1cj!J#h(5o`}A>(bj5*!qdo;WI;^HPaaWkP6 zJZ$8U7;0ANeG%PxaCRW_q|J%~4L49ur*?A(J34=&J?n9ML2u3K1cFYM=0m`M#XT_H zCUbgG*y8f~cNkkL;zn!5yz&rbcIinIqHxP=CKqQ!UhmQek*)E#mf%QT2}bBOH|fru zMHRtc9Q>73*wHsu27SY{M{w1%kC^y)4?m*SR z=)otc6-`{>$9dEVkbEhHtI+kCVfXg&ho&qJx(^pbWf=IqbZCxl&30Yin0-~pRhRK2 zTXMOLEnHL$*sR};5(qcXDTy{Qm~u4=<4X65Q5D@k-qeVdZQ@#b%d9S6s^%H_Fac=8 zcv$KksOcE&!Tt>qj-UTVojcgnj(%KVj+JUn8B;?T%l!CuwR6{Z=GKEx$3^&? 
zyeDNy47vn0fY2X79eawln>*6_GT7RrCUb^){UfrgoU=+JGc9`yBsFzGQL{}O1)na0n?K=-s6?8w#1~HWsDtE>piDO zrSzW`QDJGr#n`>8L_7P&56fM(_gd zqXOY>M}`!r$Y*9}zbP?(8Ql8*j?HOlVDf2-Vwnyh^`N2cm{<)p66Vj)-}E7ZyUy$=c#1 zk<~0svqx?0quj(&*`3XW`^yCXU(-oU_J>Yptr<74W zMq=FQcMTEEt7fl-&AQ8D2nBJx!KOpi&8y?kNJhFrYgksv=wo?O;TFp~(z>Z-Y}LF6 z0UzmmLL9L2&vS#$YbxnqGd=PlR?^BM@Pr>D;9hV%D>V;f~hi5 z^;lajQG9aacx~HYm^91?j5%ZSFhrZ74Pr%x0;Fu^UviZuN9`0hpQX)8%cIUJbo2ljje!(<6AG?}eC^UG8yTtzG9EJ5>}39-?ZDAIs3 z-I_o15-r|&X%DKwDfzjge_jAQpUPPYO^%R24vj#|GPFEj{!0;;|3qPz5Ea=?3F0h4 zmS8Bb`f@OsIa3WHWk3t-fp%$So51&dsHvBX(rdvM!do$&ei@f$U-D|AG(3k-kU55u zH{Zs{STy_hMX811eaoOjHY(w0GApzX4&o>E-6n3UCyeu*uzD(o_JO7!2IdpxWUqJ| z8=p{xNf(#$TlLV+@#HC5`O#=%wo}q@naa0YQnDma#Z@~3n)O+UM`?1k(b#nLi{(<4 z)w}XYZGmwsUC;}t?WBL7u11bzT=lW!#N{rhp7C5=D|i11OOD!cJ3cvyRiZ>@^utq5 z6G)ylUi@=2YOyrcDCzF-q8goKv*91J>Gdrec=j9Or<VhXvG(cN85=7rph^eo@;Nif^SAQ}pD}R^jaPqhtFb3Pv^QiKIYHx0L1b?*047G$RdF)hwqA+@w6^wfdw>mZut`3TyTB@Bft3C40Td zYHMEk^=k-VXT>}C=e=^xI{etOvG(!2gvS1~2dHtzh!!3vk{bK*(Y#-q*mSv@f2bbQ>eAn+ z%W}2H58z_(c*y%evxf(*CLEc>v{_-zHOTMFh#8MofId$vEoCN*e*R5~uNO;Z@aKK< zujza0Gsr{aD`68=2j8bUF*PNUuH+~JBYCdUdNd3&c#3xqHl&f$78XE8+R^peIdC!; zE_d0l*;eHxtAB;iN_}9jq52gVES4ZY?)8Zh-RQDQ6TJZkz^^iaib{lDo?RE!_j+zE zQ-$lR5kS$ebT6_pT$%MZPo~;XC9}O#kM|`e#P$~5&x*8Jn3)whB5&@zYiM2z$~J#E(9hldosyFB zcRdUbSnqh3f4!pfW%N*;d8)%v-s=JGHB4;9}`C@K6MUxD(t$d zCQ9iNjOE9xyIZ(*GXaAgUY=hqM~+B}CKNk4b#+nA&O5LYzw5*a4$6yZEIgdo*5{~} zcQf1gyK0_hQ)e}ZOaE;Md`47OUce)-Qr6&m7uF6%&P#Uw!9?55|JiuzNTwIiZMLNC zwQ{{r!-}_3hcoDwk{6}KSoDKi3l!|72B_`q6JZDEXQ!vo#&~cVLH7r(1-IiCn#x4g z!9(~({hQthdoS_4c~FY(F zHAqWWiAgvM0-O;*)GO)!$_aZ%ve(BOcj+%{!}}60n2JhT>r4Tww|BS+S3L9Q1tb`% zl7ur_y}$Y8mFx_m-LybAxDuoEi3C#2!P$9*VdzbU8>)HnOkU5yq8MjQZodFWSMTDY z{=eEKCd}$d(>!yd6!wW^4NSI3`rqKA{RJ-0n5?`L0@ck$ht&h76u@l3i68w4QPH~% zdHtl9SBa6(6qj&1RYQS2^&RyajU7UVAYhw>N0wKns?KoRxtXrdipDDr_QDLOJf%M= zS9a>>@EB8uty6Xit-i7XHJ6Ry$N^6m)i*$vK~+0bq^Q7-{0`;%UeATzwuBn}rbu`wBu(S$L+e!P**KAo1B#{GmcT zmnv$b;#aWD-nS7nrh=o5P@2a6MB>V%%JT;WML>{(-~`(j0ta+brs``G9{NS|m6A2c zi<(NltuE7F=G6BC>nv}X$C@MgTCFh{PGu%?)MYkSxO`5!FAd}NFKn1!1}EPuWlmdp zvKGzX;=F&U24^J2X8)Z-8adUeo4_7)Zlrfna)8d8W>a|Rm(K84JONH^8W^mbPLQzG zsFFsMQUllsWv<3%vj%s+nH`N8wFF+R(q62$&r3wZB(DDHhDX@ zBE;FNuHEW}wP7IE{c|-+9gcHLLh`OHkb-O9G&MHe>UVvj!v($W`HQWT%aR)BT9?_6 z?^+>G`#j)IKbFI(N!83|=*>*zlzbNIomtw1)r%_9E>iW*kM-l6gD>g1v$EtJu1UG6 zPM?=uw4vhU+KB|k<7FY093g3e?II*{U!F}*0<=+p%g@}Ll}GQGPzF%f0Z+RDh>8RF zs79)z_;MR-_?|7uTWkx)(gN99E%8=y|!C3w>u~_)XmhgMD>@ zAlpCkqra{A>dkY;XxhiJVcxv^n>>WVfm|Lfg##Y!#+?qFvoR+sl59eJO@AIr%A$v>3DVwe|;5)yiW#uN8r#a+9BX zJ*s4Se^EniYkE7f<-QbuEr}w{3#NN08f+*$(OX?{qnH5S2s3q`#Bt;(AOB^fe?Ppj zst$biajdw~3l$*$?2KWSgqgXV%Qq|oBxV|Y2Z1d<8egcZ6}`m0B;F`HL_LwsgI%`s zW?5*Pn8-;^UqXF`^z%E|4NSXh8%9}YXjF@SeEPxOqpr_ z{iv#bIG8hcYzy+`3YU{=^8z~%<5BjaB8UlUeeCp_8B79ZuSf-NrRH2}=t=+d^XVj% z1m}ldYM9DCzA(8;4z@#vzSR{G($Sj9XB*TgV%^pJ*-R$r%^8cB-^4?Rgtm+GHCU1<=hvU}o`FfGIe5{_*y)om~HgHZ7{gttS(q>ZaPkcHmo$7t; z3M9e6V{)p)Ut~q(7z91*J>}lqf|eP!Y$YzYZN7;_Lca zIJuZjIv6D&b`0hP)78hYNXcccc>fgQOBn-%NRhU%HGMkH!dGRp zq@$N;a=RKZ$Fd>iR)LaSP(`US4SO@;}lm zTtU@_k-!^RJr;jlC;b<#f4uK<$1@y%MNvFAKWn%^SqJY*Bf!=$?X9e=Y#Hy|zGqg@ zrC1+kWPQH&{?KU$@q%*hwz=(590GOh-%aZiZuqN69A|TG(C!bym95c zFZ%{KW+u+Qv?a4f$5|_Wq>DX#{7GSqj4j#iRr~VD!4wkxCKT1WVc{}HUc6B5Tn*si zYqZfCd4B$ex|9idN_fA1C~P_L%`xGg@7%S|hOjP{oj;&%Vb*MSP&GHy-8h?--wO|> zue@Fxs_Vv!7> z=bZ2b*je#%)HJ;}upJQ=wvB}29i00UN8 zRRuU_U(>!8Bwwl0Uf|#jhu=Nm;Z&$|F5a-1Eu8BVy?2e=%^%!O3TCh-p@zl*!u#uD zAK#~CscSs|%eLv~%s|U2HTpft^&8^cO6i_5X+iuzCzZ1%Fr{)ElSBlGYn+HLd^b5S z_N~@Xp+YKJw~_}62E0DeZ|I}MTmiHaoH)Crw4zOL&qH|m#jLH|H=_i}SGQ%BMmsP4 z^P5nmmjR}OM(}RahZe#Gj2$)otSL?PvO) 
zo@9sno!*y(sgMtwTLuk=QRK6Q?+>d79>u|5!-fjm`8USASK=*13t&kB`?GS_ma)f6 zfUUvD@N8B8-Q2DBRl^Q11-4UOQ={*Pa`&E{9e%qi0G?*3!|@df#&e}IxC*%dFqMge$|dgr z`rp(4w(~V@!H+5DEM^c0^v{j46e-hgtXE9?AI1nEHGWTDJxJ2nm+Lz^R!yHB;3@0r zDGzSBJptOGQ@`Um&)dq&KLZ$&-?>DCZne&LM2W5g&KAzSlx8dU9@8#Ffedg`ebMBY zJ>?8P#Lu@;^mrSe3>H9_HBEr^+lI=xm$#Nvb>V>m<2goDub`kHtE2B79*1rTO}qk7 zE8X7Sa?hdyl6dqRTmyxFxA*%9FrJ9#zMs=@ql~f_RFRjLZ#K;zcB0Z7-kB*At}pT? z@v87@w64U+wjoAWHXFnXsZ*Fe> zOFX7J_0n_{Rf12JdUVuXc7}$6G=uw-LGKkb2&@gaO z1oBUZJa5AjZ4B-`8b2;A%(bMv6w=h1g~pAj*ANK%>5!6-OkTGVM-6BZQ-WH2jK45M zdW!ek#S|L$9l0~7KVZGx_%^!a&Y_wW1dE(QkWorRC6ZGYh_#hG+y(aNh?Z@7pB@E8 zxz$AAM*(j?V1}0~#0TzHjai;Q^Lt@o2;gStge?xbdiq7NXpvJ7PF~+|{d??~3|7`;TK+F_bffqfFTOq>dOc)P zu`znKa0fJoXjZ<{Q;wLa{LX%OB`NuGd;!>1;?n5Q^tf%VZ8beZIE-vc)q$rI_UHcN zT;qzYKBZmwlQuTc<5ym7d6bDb01a0DhC=JABv<|3k z%jJFZL)xn)Wc9q>Qa#V9?b7cY>0N(*`jR*9aYd|{X*RQuj;^jX$9|W06uRWOnI@te zEJtkZV+_Wf{Q^smB0R@PG8Vz01T4$PFQwR$`sHuv#l*E=hB4c5tB;%P3J1iPRy4J* zoe(-UG}j58Jv|iP0ruwppBLcxSpkzT|CW4g27xhT{z+<%x1dm#+-@CguJ${pz{aqVafRthzQD!EKF|}tdw|Xz5!Zsf9+6SIf}xB zsm{kYi`1vAazUA|x*+@edUrQTDS-=$z6O9+-pY+1mxPK_TNEQojhJ|$OSRAgZIBz(rY zv*>$rQ)bxVf({jki73k-4{c56Z6kBZyv%LC9&Pxs1WnfZ9t}^+>m5}d+q-c5cmxW5 zrpXXFV1VRQfNYfXK#O>O39i|X3ZMJ0icm|;Tj+6@@Qb?o@%sUi}_ zBs=awkS4DrQlU&HJ1&dO4Buw=Dkr})ulMcO;uUh=*cvn}hc}a>BYyP8MKIs+w)|5s zW%>LR&Cgy={@H&IRs4M}sYzZ+IE>Mom|?)4YVYcb4lS~|6q&>6EsrYD=M@W_k@*yG z#^PIZ#`oUo&c~$2XS+YXqN~2`6PMLftUkRr07P}C69hfs-@{a!E69_RZ23xdS+?-a zKSHXc`Im`r_{zePK@&nG4s(>nR>RVL+lkqY{Y5E;YBR$(Y$1m6Jx-gY*2B{Ly3LN< z@Mm=GBmV8kgR`7Z!-(tC5*8~*%%%VepNCuVyujvo14cEWM`oUOXv{?L))Hr)1TN~Y z)<38&_d8TatRKUv!n|DD2M&RT0{IJqm9I7WE{)tva%py4@c)1M@`L! zRsJJ8Od~hpsi#w5HU+pXY7Lfu42ZC9aTc$BT*IGWsSJx(7{Iiwg@c&easUWMYawWZB|{>CJmN zd{<%Y{j9}0)7u0w;JnO67QU-E$84^t#EH8|Bbaev(s?MDcRigIzWgpcT|hH0&rG>T zS@Cb-PYT8Zt{r(-)>M^?$N3lhbiTq?&+K?P{FtgEMD{tmU~`*rCmyi1&Mn`MT!O6X zh&aeA{*g>Y-Tdi8%9eZxOi^#PSL2TP+ZW3|vfMV|n(b))X18wru7T88&4Q%G{@gAk zTW|+LIE_o{+pt@1{KQIk!{K7f-?O##;@I7!=6p#jy6ks69$6XI=c>D}_9Iu> zi4QP=Z1Kz!t9QmOplj__g8*{72^t^)fLf89WHq>o`t-Y!O|&q;MOcNs@n8hD?BRwWoFX7 zd!*wu#%*V{OGoTEuwW-JAf8)yY01eoz2kR$G4nFON}pRpIYB`oXNv3-dCM)#-1fwQ z)=^bn%Bhu2Lw3Kv7hje#~bc0Bjba%HjNP~1q z9`Yca2Y#FPj_>>3|Lz#~9ggvWM-J!N&wkckYpyxxYKqM-)idpreKV+2_qC{g?M9*U z_df6}VRIEu%5->cI87nwz>?MOjQqG}!+JUkeO?%2xf!IgrQ$mw_ zk1lihQfx8ect#|P=mx14JL$se1ylHxESxG%kUcLr-5A2Bj-erTRkzneA+4<&A8Ed? 
zclqOmU-BrzfwmX!9GwB(F0@hvo+ce*#|y zmbF)^DW&i_@xyIfxYq43dI-#Go$y*ckQ}x{F~Ti5Z0~v7wm-#GU}^vWm3bCTB5I8^?;Bjvg)t)Ktx_6*)NzTA%EW6M~ThLIWTI8(XZI# zg%MeYL=;$Dyq6dD|Ep7h^q3mnYIij`iLI%9EtY_x%i<27ufsCExG2M&mYCUmN1uqYfQkdrl|QO#eX7rOxL_jg?mdkQDO%GY0v)4;)?SAGvzTEY}{e&NFCpk!@X?ac{W`>6>-+dFk_Fj231(;7UQF@EZMiEMp@ zABp;w%AAWc_e8M!l9x#fh+oB+@j$np%>I72g{3Y7#;06If>nIX6#lK^pnIP}IVr#g zl@xxaSYpfz94Tcg){ts5jjDI?h3Za&xB|t0*Ar!NWE1rIzY?JtJ_F|0&h-J|R@2FB zR}z_OzZ#gC@;SC;5@7lnkQNgs57BHkZ63-zgy+d*&IQVOQRpQVGE}x#q2^ZFcz<-l z^Rc$&P+}y%b@^<+DG9e(aE2-(OlO9a7Pv*n)`>aFmrH6uYuBR4H*1Z~ibU;T%NEmpnwW--B{YT#7bT!Y~L zszyX?J%_t}kGZ18*6=73@gS=>{I^!T@7EG@ZuX{n*QI&5+zu=lx6rA+pB zlZy(GD%u+=E7!G`?q$|6J_j!>PHb{oJ>d))7p8pWB(nA4-|`v53(Gfh?T3kj&%f}n zh$;25u%Ws6IMa-zNCl)5uiE{4KW}`q`uKovS&w3s1)~<}Jd#W`dX>)80~Z9}BDmr4 zHDqqQJ5N-k^QGY2!QGNA`K6!TBS%sP%ie*8Y!Zv58xnNy8!U1E$MZ&XZZEI^c}#$w22+A( zIz5^7J?U52I6FFC$DJqtia^e}0cDMy=o~NL(t?fnbtwJE^%B`yp~{`fKu!TU$D8wy zvo7~xjRH_3p9eSV-;ZEs<0p2HagRupPXiO>dbW+xn@Gf+57)q85{959pd%i`I$Q(P zzjAcc)-3SKm2EwOw<$^PuwBfP?ILVH?WZC5jN+x5)N6Bc~3eze~zWi z|H)9*p02z;L9p>me<;yyMiv>2vKCm^Pl<1`oXA?6qqeMH44Dc-_;!p;X<2b{pjfOL zIGU1Zss#S;DHhnX4tMYel%Gg1dIck5m17W81A@ijpM^QGY-EbCn8AWEn#{)pt;Grz z2i>$Bym(@ly)&7Z!Zd%FGx%2|N}HCrlU?WzDK|%2@cC3&Glt7B?8d&`xZo%9{v!Z8q!iy6_i>+=JOoD847nKMB{ zn`cH#^J}Ddx`=VxcPxM5-*$Sf31~J`t%aFO5C^-h1^Pt9z&D3{QwR`>?|Zvu=DOkN zx}{#zuxUHHW4qqIEF`C(-~H$B0P@nSKzMo0FG(kir=&MNa_3$|P^VWE&dkooQt&CL z-YJf3gzT!DvOl_Zj%C6!AriA)XKkKimk*2TJ#=5>!JjWWyQ(z_y3UE*wd^+pt*4c> z+4dO6+s9qj7%I+7jLpSlo#SeYnk$}&D&6=-MFCdkXfq_tj^3|=SZI{VgT#_%s8r4) zd2`PK-i=qWlW*1qojF9di>ggdC@SKu< zKhB}{2ijYZ32On3{e}lwpHZB*@F)HZ$vGFJ>iIZd@i1}67v@!k@l}QF)D4jgl*@I)*Qyscg7+XC!wr$#G9O%hS z^e7V6{y-`h!w(qeI09^)WoYg=H@esI-6|kE`VXlC`komk97CD956MLlG58OSWUY)4f4a|1u(KyhrjnT*+} zl!2V?da9`|;y};Zc0!hsaa88Dhn%V1>~8@{m(9(cmN?tkHaABU*bYJr$BBqaEF z-BY%xwb`43y&z&bNQmv*ab(t?CG;@h|Upp@5o-BN}3 z!Km~P$rdN53{~;?{Eu-`Zhn~;M;P&yb>w?e3DPuT(cf~+HMNr^&A82?VDv1>|JGha zDf#HVy+9gLWZ!V*FU)L-Q~?eFcXrMGg}H{730N7$SoHeW7Gr{jC@3KLY!42yyC;Ws zUYM1+v&qpW(LsW4y77~sQRbx%V|_{gc3yREYs|qyr6*|SCWD3cRE1ldS67Q__%fTe zCDly_PA@uJQ|V;Q_CJ!ZpC{Oz{w3u(>DKRQ3PM}Z{u<-AF1q1Gy_o5_p3ruEIevW< zC9>mRC0|iXnrkZTS9?gqEH7b^K3FK?AS`kR0bFwHf36_you)<8-u@%#XXd;Tb(nqjaFn@<~@+;wdtDa2v=~7Gg z`S6$|K{Vj_fUDu;>(^JrpG!Z!6(4VQ=_K*B*(|;L_lV@T#f~|XP+o9l%1X}e~k zSDS?neP#t0z@x$F_N0&vcCB(-`E@I~Z+yvI3lGq;thr^YHS-i13Gfb;8nlMtr7kmFu5z_KRRcVCgr z{bN*zcKFw%mHOEh!4*ic^v&ji29Kv8@_<7iwMY#BD6+}?gcw$lezEgur_2Mk!A#nW@S;ta&=N)4l zK((0P{@xmtGDprFK}>-Fu#rXGWpyxhYn0|vZo4N#k2xA_ z*lUe7*cp8hq6sBcwr=dhO!BfB5mKk}TS0bSp#47kp*@G;Lc)5^g)u>9>+FWCq7iU~ z2gT8 z{=oVokDkK0(@--oBN#rwYiFfBA{OHRSLUXTI35WOz6~;xkg}eMDGD~Dk;W;GE-k>MS zfRm$K{O2sl&Dj}jVvsFGNn*@5!X6kM(eV~EU@c)~03>(Mq}N#^EmC;vI3k^^a+)1m z`WaE`T!2XWCOqel?iab~!|JnQi^UK0XhVbIJEj7sB(F9VlvGpGn|lZ3Ox5pEX6qjO zNt+Ye650aZrkEuw@y2*WW#tT|MC6>8myx`>h$SJa8KnvNR#GBFwFPl(z7=gCcP?Tz z5s5#H;o_H~$b=}Bs|PVAKK!=HSH@KC=)6-eS1+Ei6oqD&FoBq1SC_L5t`b zrgOQZTB(|j+uxgd@3|!)m#fyKdA)mwmefhK(^Bz|Ka^?h__1DFeqM-6UQA6b+Zg0I za7o7=WSJ0B=NkN#imww7$Bjq!M``bUmc!dZEK=c_^{H*;%=YFV!w%H6)^6z4>sR}& zT5Y$F8boS-?hTKG>dVX%L=+G)6kTwG(@u0CY0!78a#y2Njnc{A4?{vJFu`ug)n|-+ zQ0K4LItlRC8J3UpLBg_)@rbf8cN=NNMjajr-{G{p}AWoG5!Pg^9hr=UdJ1YKn0djF9Dd!Ui@8JU?AM zYgDIPh|AL~Hd^>#RLz#^`FCjQOve$BSVb|_IDx1mS@P4B#CSRMZR0n?m!T5WyV~Q9 zrTNb@hPV>6qYM}%X-M89Jda1C3(rwxz}bJ3;ET@k_^s@Z2Gt z`}UaEV&h#~^X3O(>Gr|Yh9!?u7|YH(@9>4NH&t~E<44yTtK(%wNrdeEj?inrTt2o( zq**!8zd-OU$p-CxV>n>#oYJE*MGpXkb4%RWj%y^prpNKN_F__93^u>XX7AfKSzjC{ zR$Dvvt35`~mOT$L+UiZ|y9OD?VDo0pjthH%BP{Tf<+z$RbBIkBnVnU&oD$Q?S~{>k 
z)*9z^uJZ?k?~ocFSn0Ck8EM&K6E1!7-!j?Cb?l@Ji9f83+9so?A)cSULyzw_&pCHk*Hg#!huena%^lHX5#WHlHKgaqu&HCS z4(v>h4*oC#deCCK@^Vh3mXiMIwr}z(2pJg*XCb!e?0FMrAMl@C6MR7w9w_@&U#G6O zvvBm-E!*n%`}eH=>v!~%>N-ogBXS-BmC%kj(|EgQ%}b1NXjZCfv_w(L%y?eAn1EBU zJB531UcBu{xYUqO<`Bub5jlo2E7*0>cpn#xOz(KJ9rXj%OY>&m@!?2c<$m+3V=ORG z%yz)i_Tl0!4RySh>w<-eo6W|Fm!CgAn*GrY>I9DF*_P0za6mMw{*C5L@by-#iuksA6jK_qWFP_ln^q+UxwzY)!i(M^NJl zD6;Iy1z{GcE<(?X4p1Mc5+EMHKu0*Mk~nSU0!ifE$ipB-2z3mmnX|3uCZExC(!T^& z-|#SbO3N9lflqB|eyw|yvv$qjPr}&S?h9Mzx1-hpm793ylljl|Rr!si$G%NJk8o7H z@m$(ivzz$wbvfRIO7A_?$SA%%H|Cv|#|O+9;Rg=-Pn|fWNonQ7pT|Xp?iLvIVGnbB z7BZYOI*3t?_dhxX@3;h5bb14&Q_RR8P=D_Rd9O$48(FFZ9zvs*onfDv4l)ZhuVJ75 zLOo+bG}=bXV(vw}FI4ccYSp;MGJ0Kj?xQtkZ4Hg49UL1~ILrY?EH-~<#)Tt6Nr0xz zk~9Q40};7p*rr&uq;(1hB}ng0qVMbP(A}13Q!T1esyOKvRh^M+qZPSK!21n|?!VN6 z9Uk6CmH5tI++I-rQt+Y1>uvL5h2D0872)?iSG30L0Z4(x>~@Se6``JU5@xm{(tM%R zAIoIp7g@`*7#fcd_E^qpEgvkz(S`4tA}C#^YBNTvW4TLIwH86fO^fiLNm2v+6Vallhp3!ImS_ej4uT>UZPIRxDrSxW0Z9bska0 zBjbq@>8SoL=Rl}#(5Y}d>u$EGPE!{{175 z=)3D6IrGq;3@Gc9DttGADQ&k9dJB2|=$bQl?b>%0{M$H2u5&L0Ep|>wL@vhLs(s%K z{9OBN*|Nh;yUP4}J&I_p+`qI&N$2j8O3bp=KWf#<#*p+#K)%=I9P88(0iDIsXj zwNz|+=wSpXx%KBZrcCIQIo4Yt_G*y9*66X2ixp*PsF9|*iq%Wo&ymK@Jef@Z4o22_ zCK%%_tl!yx*4Bcbp!18YBa-*k1j@OyNVytEnfHUo@?QeF1}^0{I}0kz9QEvzZy91m z-48XNGdUO3n7k1fQd6T-Fqga_wZIJ$7gQ+!BU!I(`nYV8c`%hJ7neNwb#=6!jq3}V z;YU_s7SXqe1Q~zvu z=^^@K*IDw4J7aFZ83W#Mo^r7sO1?;k_A3rO}mrr~4KU{zklxa8EgNR~=_SP~F*x}Ho8LZMw-`G2ivdzO!52UpN z{Y9M=~<7`AZe#E;}|&%wM{Pu$O5! zXVKM9eD1?^T~DG@#lh6V!pCy;o`xezp<_PHmWj9>^H`^zyH{jiWwPV($y;L9gdVV* z#_w?EGIn|iS0W(lKVr`5O*)dBAmw(zdD}fVARlwA zKJMQ5<}epWT3`4OR{Rp_b8ZT5NAN-%#H0Lzj%?+h7>Q7q_?hUJIpNQ1D{v>9!!^w7 zPEjQ#MJ0>l0{q_K#B8Ym`78n#esTdH1^iD%2Bf1$0Pq9A`KadR;;_??jO^i?Y_Teq zI`A)LlXv=M<5hI+vP;HbUo{LO{2$ADGAUbO=IG5>VE$=mckIxhqo!sEIvx9>Fv_QP zkf%R6i6g~0wPm*P3!)6N6d?hR>yEY4-r>($-V2cC!!{Mv z%e@%f5#(PPigqc@YX@CkK9^fKK;QMq;~+#62sFL)wJ9mVLVmfBAg|)oHE%_n$iFz4 z%U;4R$2<8o*TdO)t950GI$mMj;=?z%2yL!l$?w(`XwKy_O$jK)&cOu*ytls1;sIR- zmB@MOr891c;(uG=LR0f#5Ij?y+pE`VAyAgeoJ{f}kWry)C^Bf9Z(`kFV72$l5@;jfzS7PLIWB>JzCUw;nFrJtA#F{vO7Z@^RYz4GOX zZT^+Sh!~G4_O3&m&^l*nHmN@6;P0B)k4e7t9RojhV1;sNU;Wa-@y}ckh&y7F=S4zjS75=bPF^WLog!F}GW#~oVjL_*Uo^fx*l6lQ}YWpU}_71^s_5S6m ztgze+ZsRoY@e0NVvwsOBV`?+MulI#6?Jb}8_Xt+E1mq73hpR32rK+_atVI!tB%F*o zjJ4kyGe2G}H9Z8cIZRG%?s`AUKU~l8B>PRy0iEgxo1I9Bfs;}x=#E<)5xtz9NzYl# z$$hY(#^Vtci98&W_2);HBejr}T0nVWqR;f8^Ygf&^Yw(;=s2z-^@coXgdgWf3-ZSHkTkI5j*?nNu`F)NT z0U^7^PLoBh$mApxTTFi{bqPNZXn!4=jIlR7duu%={Cj6ZOz{3;E@i?2fj;Vi%J`^n zcRx-fh6X6czGI9daq>HrdEaX-YbO?xT?>6Wx;lC}_aKzjd5wcLJ-!oen$~<=Iqn3r ziZRrLqa$@=(a^Q$*Z~M|p{_G6c^#hS$$YyY41%S=Hb#iAKePbmZ(IzHR^MNBuBTi& z+)w-Y##cvwnEs=3-8;bJFef(duo;sj%4tnEraeN>DuUs4jJ`c0k>t1+EF{>z@j=hM zE)+Yo=3jb@<6bxNZpm{6cn!acH+gm)Nm9ohL5L-48}ggYTi979x5@L09dcnNiYF^I zh#QNvPhgs=1g+s11zuXeH$-N(Zmoaz5Ow(xiqGeA?v{+#9vk=$S3Y463vm1}MP$m4 zro( zxPKkH77BSOr5)#u>8=mzQ#Vv#xtdPiz9{#R$0Tcvt@QTd_}qAur&^9>Mmeo^jd@>n?E5Qvo$RFoRa|K9P)bkp<|9VRZ`axU3Yi5PxhCH^R>}!86Pa+ef9iM za96ua2(-27eINqT2_FGR*!u)a-$)0FMhjpd3om_+b8L35gRS z0p$J2W=s)1j{)qnjBRZPo_ba=+G9F$BjqjYXuEro20Y`wXukr*qoO2s9tT{`jij8& zwpd(Hva(VI=zQ#@eAkH)CB@4g@QGZ6P%0@tP27>~o@N*vh-5kKK(P5rQDmdouNkVql^8GeL+E`t1u!+M98G2e!aKaudr}YqIJ}i`UIGm#GwOiDe#8_g z>AyNQ%jHkeV)F|K9CV(wGqe~lxni_`AS&31lY}7k#{8pYiitb!ps;*?h1kB!51`CE z^bX=;KXS}#aCmj-R}6Z+Px?lPh5)zkmksF!z2p5y=z|Y`M#W=ifd!yypR*~10*BXhfgs7%=fU58iE#cPC%Kk%j z&8K}Q(c)k^C?6_IVYkG6;@&oJ{aP2X^DsM=6=T~2!Lyv<{t`DKRL^Nd}y}Rz7#135NF>Rl3ULD+}G~vk}k`MO`)(+&M+qBF5cTJi1X_ zy|!?d&2ZbkQ`<*}mmC$1Xeq;Z$VYeembY=Xj^CEVT%KYc=MEyPJ5zao>sqUK-6R#ZZ0 z-lk=hZaBzXxux9o+Rbxr3;V8m!}p=_M4cZ3gff(isOz`)1jXk9rQvv{2weS*TB+LVA 
zpb%t4>@}E}5{}{VVCN?Q`_(9yl`w%efp$4v8#;NxJ@if51L33zM;OGzC4cJiO$~ZS z4U;1?$#2cGg!ll)rktf2wn)!M9FRG>XS%6gj`{;s^Xr5cAn=rV`*~k%hp}m1|Nf-! zlKfh;8(Cie@63wG@w$<~Z@y^cING*_OD}%6_R+}ed^uEKlKRx2<>gTz%{88!%(4<6 zdFe}vBY~;j0U#MZzn@NBlN8j?vOU9NiXQe@c@t%5vn2})EV_3*){?Pi#yWUH0GPTa89;%h2Q%S;HfyaLN94^~M5l5n^V@X#XV~fnOPz4|lw_R*#fVfty8V)dT zHiE-yWzdC@Bulldbsxb%BFbe@T{_4b57iS@F<{9}-*m5qU6on4`_KF?ABphwNA9O} z*F)_xKpM4YR0U*Usvt{*$^rw+Y-&wvfaT_aN3Mr$_wbM=LjeSjKv^-U{{My=h(QJx~&<{qv z$If_;7CJL@yXSIPM;<^w#Ef!(Ds|oqOqpPaea4Dp4?Gm zNbt=42^7)&40aTlGn!s|$TY^c$@^S#PQ69rO${_3clqL%{=#HRL9;10dF!nV-bm)x z*~;ks$rjVMS1T^$m3G(9=>PoE!q)|QzPA)6t_cUTzpdS29?*_QrG~@3pBCrNEX}7V z3TYzsdKDVOr_1_eTkG`tdDKZ}JNJD$u6+Z^Q&O7MQK2u4SOM>#E27avW!;6iebP-gqs=+P;(NW-4?w$YVglvRBa|WT{LL0 z;({U12vY0@<WeX`X~+o$ZG6eBAa7=3{ZO-Ix>Uj80gJmrG#}h)vvX?7aR&=CMb* z+};W~?zn#a=rjG+*%Gc`JXzA=im2l5hSlRNAllb1ZK*cwv)wi0xXo%$9^v#yUQ=kt z1wYi?nYO-t8>)FHDDto%e^^!`m&G1sIq)>M}Pn(YpoJAdbM-k?BANsoC&cP1x1+TZPMqu|+Y@RlJ zo@HJq4zq5W-q6o$J7qWVjBba2$Mhe6;Ai=!o0?@w%NG51iV=Lz$encTeP%U6y!IxApt=pPr-r3RoY1KUn;oEj|u0e`UJ zV8*yGfrDLd`49KY6Om~>;sKtHtGE5{4wKwU(@p!VJ`)OAQ5KGo#8fbdr_!GdMc}I} z-ZI%8Gi=iG)c9J3^O@=W{!_pZp2P0hAC~UhjnmL}LA9q$UYcPW)i`HE!|fqlIG!1p zUG##u-?8syL=o7W+wdcl#&^`wMZ^Sj~+@V zXw|tRj~XJfCD+=s(<`9VoF0wh^ltr4l#b@RH-G~wQAGxEz_@_y5{2h&V})%EzHTh2 zvYwLH@mkLHzR275{&Lx)!+wDwhn5wVa~Pp+L4W{$$XzgclNSieK;8u`wHzVfC;*6X zoOB>~!6J3bkq_8vWn^UhkfF7#hruV3E=4wFD9lvxLiX)ym=`0p4h$X*24+MvGj5*8 zr%Jgjm8(k2rHa2VpIl9$)mZ(|fS^(+|NcUQ6E^|SO-uTsRDZv-_e4ZojF+3@b-^hp zm>rIa%6cz-u`B&$$f8xcg{AOQ#>89uSWz(*Y#AHe8>19UFJfA`)HJLM&Y~96X~B{> zHB!cscleP$SH64sNlmcJC4e1YjBlLqirlYbR5M$#=~=M0WQAb+ck<()hXl#*my~RH zWLBNRbl=4>3IMKggN-3LcJn(*SW(o}g055s9MC;6#?L@+L--FQn zAJuxh+F5`&m{M-+#z*Uq&a_Xoh$bnp z9|La>Oi`4Jaf(&J#Q+*-aB#4vb|Y+HO>x(e%^sv#=tqD+gR7QnF{1g5DP?s{qZAkP zq#N4x5#OPyQe=$AI;i}y#9NP*x$4NaB1&*fk*P;H#{qe~62lLq#W7(q-1l=zLei!j zTV-7DG^lZ=JjHWnxL)Zb85N83CA}o7DznvKKmihSnL-DvcP(~`pn+dtA;kaPJm7a{ z<*u4uoXfvqJgoZ2!t3bN*pOd~EU@j;!#EyPp#6ba%BFO+SegpVi zgUywFCHBf#$P_}z#wicZsbSM)MmCc^pjtL`Wk*-`G{;Eu9X$%o3wifBo$pS9jS6VE zfanJ(viuoz1ZcDPp=dqtr|T%=G4JLe_LV2#-FTjI^-aB=W^bjlHvJAH$#0RnWA6cd zogWOn0Yc4Y9Xj9;&XTlHZH$;`YN&+=5`B&H`#6eH#BQ*L7{Q&6s!n%{r|MO5*wb~2 zI7yb|6=l9sd#Ga5fW&aPmRIsT(a?whuVh21+`YX5lsLN6|In}*rZwc^D}G=I9s5ju zq<6SK8MwM0anS{7@wQ|0I7CqAKH-g0NCD{p~1Sc|k!D^35U@5ca_Bov};+ zCkf!I#1H+^V0k-Say0|RJVN+S_3HHtHduMq75QoMXdgT#v;jGjyRnh1N=GQKBl^Iw zCUa6yIp4G;KXdk#ffM1cL%P=`P&xv9<9?;^A0U&HVmn{ z5YWoSrOc4WQC*K*J;7?T9a0ob-qkaf$S`b8WDak%konriTQW(&B?OBH+6zV(w>Z;9 z2thN(xBqcPd^J`=l~S^xPg&q^(rS*i2B|ApA&kyLf(Bfcg^ zBUT+j>G{&RsUsJmmJ1UeKX6aX4 zHm3Qp2CLk?3uX3eYUo-29SUu}6C5vX{fme45-b7`xPp9{1`Cr+Q@j@te|-w?d=S#I zGpQvGoN@>&T9((LSgv!__oL{3l&#w3QTK3zAgefU^~y&++qFr2jNA#xnaSD|} zGs+!!J&Vh2w;J_u1)U(aT^?CF4-}@%&b*6{Hu57>AuUn-0inGpDGr|HmrmtmZuDsQJe7` z`?Sl?!?H0y7H1-EC6S72vVc4HnuMoXs!mg_QS)-aj{Po>XQ8S#;eC-m_9yN4TurRj z&h8`D9hV{x^o8X*eVKBJ!39GjBEEER!^h9K=`&r>4>Ffqj(xv`o>e^XPe*&jyEPnj zI~t`|{Szn3@xOi76@22o=k++qSXpUgCY9RWp>h^TAvp| zODC5xkWRSSny&=_F)8c^OH(j6P4g*R1r%!DXih4PH=wx821g1QMKmi_Gpys%eOaS$z}f#e@}Ysd4y z^kJ9rgJv9rzqdI69W(Vy3=Rs&5$8c60E7cfJwc$izE9=`d_iB(|$$m)4%E;-^P#V1MG+kRg_E!CK^Ie5am;U1O8F4 z$!6(-#5Z3VdHif2c{hI@5II{J!oO|uUc3*=S1!(+?RQA`{Fl8R`X2{1{*&N|v`8Q* z(ovMu0Bsyl098w8S{(O0R;%>#0jwJk(0Q;|XmsiXH@w>b%EFeh06;Eki3H;i0My&P z{;z6=ddm@*6~rNRf3hV<4*^LbM>&Wb#nVE#@fNOqKkRii{wpYVGb%=bBH}0b)ysP% z{@^p2YKg{ghewX=B>`iJiUXA9kgTH5swQk}LNX??NI*^|<9!6lz}Ee*@BWwG7T+bt z{}0GhfZHZOE`O4t!wgCjS}NuS&fXL3dSHNKRV1WnMziw}a6hw4j{#QMMha9n(_BH# z0k@m7f37b~iwp@wA*bUCk&acrBs%=`yLJ!|>4zjKR8BRNW4;um(i^P?m zA3EN<0XMtPO0a2ieX6@cR)}~9ecaI@u{_mwtO$bm-=sIniML1IRovY2U~+`m6YwVU 
z4Do2K?=llh5rI0F)FR6IKD_t9keX<7=p4e0BI3y6B|2wx1FzHrm%7hb!p~i+3XmzZi zwi@q?A_;z<(6(F?ef%FT0Lu6oIHh-m(0X{k^v{KKRR#<;|Elci`VD+IrD;UL@s~9Y z6649IN{k&XSAVm^ON40fn!LzSuXHBxQod7Hf+ad&-t$Rxhwl5AAkKCUd?ZTkv70o+ zoZnY>af5u*eh>b;tsP`wX z7a;ndR)^RcUWp4Gu@jvN%}Iav>HeUD_j_5n%a2y)-NOcmfnf7_{sq>*@Xqr5V4}_( zO0GyW;_dy)-Kv4$S%~S+@XK|!^gFskl^4MW0{az_ujX6{!kZBY{%Hb-zz#(q{Q`-( z%Q2G3+vx|C?6nvKe*9~J0V!=saFCU&V|PTn^~}WA3K6qe66;&jC5Adx4K{yaaLyK+ zlmgrtqNt4)mOdj)us6Qcx@U$wwb9pGN;}50e_O^uVgJEr-zi0>>&R^RL7rZ$y1Tpk z6LjNUte(P;$OE7Kn(i#v;}2_qUm}|g9-jfc4zhK~@%gqm@bbu4-YZJP`BE0#7C;yQ;35H<3?XHl^gdXD zVpTwhGESuff-2}YuR03aS*0+kmhyzFM(9t?;)w~a=#xz;<9$bCZ7MRQlV0C9jYRn+QP5`PZh@^n3ovrv&y>xP60a*Od zV1-e`++6_S{LgyaYIX2u+m86htoF0$+7e#k$E!Lo+ZDZe1m<=(3Gu$wcb9mwT-Cir zADQjsxccrRJLoDW2|im_8%gytgiBWEeha0bXiqY|TUiv-|e)5AP|DO4Wz?h ztk8e=v#2C|9MS_hGhA{dCRNhYTU}TH!jOzeV(z}E1EPq&r{Q8G=<#W3OA8C|TNz^@ zUTgPk-H{tW`w0s5PGoIi!=q@duF1EDD&DQx9ksReAT8QNfWNwfi5ky5$zMbU2AcrnqS5itkw>Cnl9s2pp+toq;fy|y5bh7bZaJjt4IjDO! zvnP9V!yjEnTf4vyW;nE~xvGUc17A9^%h$>?e*)Eq1E_X7*FFi7fvD{4>iW#W*_F?d zr?Qa!Wlsp2s79JD1q4rntPXABV%R{q09!R@3uy`hcq|7(${|OHo0=fx*$|B zNf;#=Gzf(rqMGg^wcL5_E37}`C(@m3y|~+}xUn7WP(&E&h#1%e2ruPXX#0oNy+b3e znYnnnq_WtB)07^ z7pb#49iVErPLcxb@mA?zL*VzVrKKgh)ziTLS`FbUx$rYN)dZKkCWWz@+UN9>P??^L ze_{4pp!fj7@3(GTmoy@+z=J8G<~*Qy|e zI2dZfUg&!Y3e^~8sOkgv2Qnc~BzAA{%eV8huHwhfov#{@jfN>&GFP}-j2|6M#Rkq1 zF-`4z3J8MZs+H0#gj<~2TTZi^{latlIuG_`wZ^a7spXPjz{8ocu_S8K;wr-{0TsGXky=tkXv$Tq^YHfg{n*{C`>Rs09NlJ^RLVyTSG@=ceC=Vv@jco&; zdNwl=U`(HpArGvzBrU*i{?_#~w(aQ&j!Q_231N7(PVg7;mrQ^0sDBFp z>Qzp(4^OPP^mM$}n0}QL7aiC_;6W?Ih-5yJaz^`V^2J_V-y*QYKq7&SK4V+Y;^+SXM;MnQ$T1oeT0beHmCkK6|T`hwtqsOmCS!UMF-|?(J4VB{SX( zll_5j*of9rf%+z33IFLPvYPN0;_bz0IkZd}>i^$Mk2_Qu6aYQ<|6L2rV4&#ue|?Vz z44?j=RsC;q3I01V{`YmZyh+d^{NL9N|7+;|@AuuYK-BR+iwuBWN|xtEc`9-18noZZ z=~57G6Zu7{ZVbPNMaSNLpt@QAz9(q(33O(KLrt};mzyw`8UzI1fKU$)_VLEX#tJ$C z5VC_O2mN;fXx=b^S)C{-D6V%py1Jz6_D%4PfV9(sm8}@s)+R_$OijUZ`Bm~lM#=wbGyr2dp7Rl8`S# z8y6Ha0VZZoyI?OTZ=Z+lW+)`=l0PvbW{tdZvaJ8eB$@S#g4v<~c}+(NW(@ZoayBptASw9u4$^y(k&VlA}P7?4JhNA?Bb zVltZV6_QM|zvwv&PIp4GY7Ax3V(5SUg`sc!a8k(sKSaG{KvZ40HY^|@AtBu$N_RKX zAt~L`Al(fj-Q6A1-HmkT&^5x)J#_PKpYxsX@Ryodd-mGvzLIS$1ePaE^0XT=HLjz! 
zzT4~*^FYuocuS5dPeHKU%db*=@$hIcohbcn#fgPF+X)SVcx#=e%H(-Qd+NO_rfr2} z7X$P3GI&Hg#mQea{UPj=F){zdXtl7(s1@kmb;69{Ye9u0jZ}^+8&=4TgbKE_#Qky8o2~Wiy_P$x)p)&Y6=qL;BgQNZ3JGW#c8h)|~8D z6r9e)NYdp7B)gq#sD!xpJgz`miu+=QeoBeN_3mV+`0sifB(5o*;eLyR^l8y#0sbMO zrU{J^IS`p1pVJc+$!gWP&H=(EbmvI$j6T^@FxDgb@gRKs^)Jq;_cr{)d)`=9<6q)L z3v8FdQ#D3k<6qrQ?zbPD<7ZgLlB@G_!4{so2-?0n-;{rr(fsA{ej23|c)BpZBIy)< z{>AroZ*n)9|LFecgqyX5=N(Ul?%ih>2Vq1~77V9yio+4cfxQ{N(klRbNa5jfDc`jR zccRzK(3xD-C|hvU{w+s$|NBB%XZP@)a;#4Jvk`;A6%G<(N-ZV5!<2AfxEEvwhnRmL zp4set7)#~dHNj{qE1YG8c29<)pse2P*!&(b^7|wA?Tz*{9n%`{Fe|?2FVl!UhgUG3 zA%g8rOKnotJ=G^YjDNeDUgR~{yBDG;)c5PXONqX5yIlgqizCzfJm0mO)eL<7WH{G& z4bqOR6kH!M4v&xe>tgwa>}OpUs8e0f1hn$MPqegx!Z$zvWE$tgi0xlcVPjR;iYpCgW~# z%2Ot*ppzTVp;_^d`zf8A*}b2jdr?6(#7Be^n~ z<#dU@W%n4#S{Niv-N%sm_&GZq`t-uwou{GHr?7CbCfE!;)VO??BM{er7{stt;Rvm^ z-ClC=QMND96HGDC^ARNTd%_8cs_b7SU>2D*JP+}^!k{}QUgf7+ed1#GMj+|9=`bK4!rRWJ`)-py)>bn~i%1dThS zyCFuvSrd7W(*4O&UVObFGiaWc>T1k*lA685s)6Q!J(R-73zMZeCm)zzssPC&s?e3L z?J_twU!cXMM}~5)fLl-$0V<7(_|8sy;zBa1=bkWU(mUXR=IU5DP$UV*-Q~T+^>jf8 zvH6B4et_4T=RC%Gr_hGDg>suE+E*;Fp9H&{9n8`wH$ zgB7^>07?ckrp9*Ju}iNb&}A&S)_nlE%qGL%=keQIG5WLTmDzc!AvVI({zYVq3E5sQ zh{-sfpx;_|G=kr|oaHzdi^m1}ExY?xU7w?qN1d3mSs2g216HKI^()>L(Nwh`yMOM` zZb6Fhs;MWmjkDwW1@DT~4^=AiIBIP=7T`iZcOo|2`|+>$57l=KGSK|y;QuN5tJ-X$ z*swChZ=?0+$_f>+L_w~}yY7wy$-%41V&8ek4`nIg7x}n&H)W@JpM-O?=n18ZLDt%x z;e*nViUv>opmM7)>RI;=USg*k#n$aeO6Y+jtLDdpA2i{zuF8saIzGEgvCPV!$J-MSZ9C(reT3q} z@Xca2b-gwmwjW%uFZS_9t&-y9=K%{qyEdf6*b8e($SIyoAj{mD4-HzxXExypyb)L7 z941X^J(W~-jm0J*Gh=tyZJZsa(Zrg>C!*P6a+rx9BX`#Y3`Ag zymLOJ?0C5+RCt;vN!)v&P!kKV!+%sP`I}l0lPl#P(g^ODGDhzj2ERf%+}2-E4u5b3 zd;m0a^I#l!c{Pi0Ntsp4UfKjh${XWhf2rEnnvIehvm(KVw|0WJsBlA(+vhM6Y`f{= zvjYy999FnKqdw^JMSCxMj#vB0HK4-8HwA;y=4%u~EI*Wx)U7$0Uyf+?{Z0vjqX|Y^ zW-)XY7|jm`Jw>(duV_Z8W|bsd`aF}XD@`pdnItprf5}g*1PNUC*WW!TY!}KD{PvAK z{!)yebs!!oLDn_U`Rbd5Q-K78JHCoxyzkfOuu@z)C?T*j=a zM6cIBe){gIpvjJSvK$}m=P_jE9b&BuxZdb_=tCl&79l-^yZafw;4vjO+RpH>`*olp zCEIvyDG!x?(%Xl1rY5_2<0Wv*e~6{UT4S&ZI0VXyA1+*;Hl%o*W_if*?d@ecN#~eV z*Z+XgSaY<>FdzFmHDFz_6{F}k*vq*G5>AZTsvw`|BMgW9<@w7gSW-55-lt%CXIR{n z-1}PtQOoa=>|Zyq+@Ve=9bW0Dc&4v(8~sm;V8e^AAZc`-V3aAIbOseTjzKavG)Roq zvomjsZc!aH+RNi?j1v`o7rGL`Je??*oKe*t3-h~G{D=DGY{Bc+y%edT=8ilngJoBE zg2I4S=ebjM+7t0nr0?3@82x_Q*-_-juWdXew$3Z1lKPzxK3i`Sz5zitpd zAvwPkIlVhI03C?vpCU7b9tI1t@o@tf``%YVp69~Nz(Oh6utFUSyOQ0!b~{L4km2OR zfE~~E-yX&d(;kYfCS}#eWfjJsWhz27+~Jq3o6?z2@yMz>l)Z-KJN9wLt8)?=?}9z^ zOX;F*jN#JTYyz)$cH(k~zTYH!5}UUh%K#_$vG1Sry}5kYZWXgda%74x+rLhdFM{bL z&ld2zetdmZe}{t`pRi;4)h1QEp!eZWaxljQ6;AdN=H0f2W50vcc}@(OQC~6wLULdY zh_>r~pYz4>j2#R&g=HyDmr;wC z7wradbs0la)Huec6;lq845Q(f=U1e%{zkRmUluhFcFjLtXK1!L!VUa>W^Ld&>w02o zUUMLumoMGr@r{;0X-leDQ5{Ro^U;x4{|rngy$S;!w*!bRXFFQNE-%cjo5kj_d)(T) z%dT{TL{l46Hn&0&@^i51dzcyOjHY6GSYnTAx%-4|hc&hX56RO%6jt)ho+Y=!ql;*l z4zLY6A7O6=pnR0kouL_iK4~00pW6%j-1s{4WKUlfg-13hu*J=2$`nKf-c&uUSBhgH z3Cz}4ZX*U#w}-M)B3#}_*j;Qp!Jy%*g?-Q(^UL-_sW;()JnWh=jYrp*so~^>i(Cy~ zM~pRyf3&0~HhaC5*wDw{`}Gxlz|Lm#sJiLR1e*Bh18N~nq7?6Z2gkjG>XC^jo>~3D zDz5;i533l}b0W+DRoO}mlJS_k&k7_FUge5u`u;F$G1cG%lHNC%Pme6<2*>cW#dzxV zv0wS|y-0oUc;cy96Mz<%jn2qP|Pbz4^ zLsA37#`p7|rLa~GvmfSibbErkp97@M9#E2PAm}*uTZ+CT6anEP$^?$Xv`;3_eA(VR zpM}y1heRk6!4W9c9icrE6dpC|9}j+Mj=ZY}Ih zTVCEaZ%EZvzD3#V7|W>co(e9)4j3Zsfew*~TIwsIITFOn{?_dDbQw$Yk>UPKVAjJO zp>*lVUXzbenY38ZfMdFIVQRgk8?yFmwCiAI}LHNls+q21++_N zu``J{95G4IB0^l|`{ZEzcqLa>IK*qp>D|CMv4*Z-J6uMJ^ah$^WtrD5Q~XM7e;exM z7EfCGMGXiKU+2U7ybc~jHP7>-CKFRrpsuAuheTOTO~S{=2XRMsR)6W>ZF!g0pOqCB z{*1M~&oS(G5xO1OCj{gt4?%mY!0)%)URBlZs`#-6Ikj&uQ;BWNkZc!EzZ~K{`x= z(f8HsQ?1R46hO(m$p671vswFhU*isV(4Q{JDy$EC)>RC4f5C-pfihyAI$sAf7|?Vw 
z9hcwh`faAYQQG3nz!W-M2Cc=JWq|Ab|Nq2oE+lHw zgoHDr7aK1+-zBx$`GjqkX*rOYKL@ijVuj~D-LY1fflBGJW+w<`?0oHRRq+4G(#X+|xHR6gUx81nvwdKMz;{2v%}APM|VDG~fOj#o*! zLFmZ?yezdAixm~o;rFYglmx2=HpVP$ls#_|6+U?xEoCWljsL^YK7j+!+&*nX_t+ek zFl@cN5UMBihFHZ=!qgZc{^lElbfjZBlf-j$3%Py4c4m&?J{1Rn0s`(1(Nkx$(0rP9 zm#`O~orv%z$nLrWah_#pB<^;yh{+ORv#&(bzbG4*H;%Qx*_1bq2^tp~$w$J$LLwMFfd6uS41qXPN-DIyL32!fw?tx8f8lx&Ov!@l56 z$>RKbp1oq~%jra-)~h?2r~Sx;C5~J94c(d`h@Y<@<)z%8JkM)3GoNQ#bTHIhNE2%> zzx{RO{AoOYTzlkgm`RzIw|L1jNu_3g&on4+_&NEKt7Y&a(V zTzM+FX%Iwg_-FsMOf8b(%Nne!a80SC0h(yu{Tv(|bWJ(i zy7mMJE78Q()nGuH3UIUQJM91hz6EW;SO#3ZNl6lbJZeFmQ@#%Z1@vrz!?*X|!1oUi z57|8R7ui(~NMk}WGc(T;8ug6u5yJBUk~e60YH|{YG{|#!D*=H00Dtg5V814iOZ7$b zb)F*%c$t7+J8-_f$db`GDik_?biH-^XlZSY&4&BMi2yKzPzeKVu26TN`(nz10oc*n zeGvYf-J)c9rx(f4C4Df91l|rg206gs$^Kr^>&mAAUK7Al)UW}3svM{gsI*_o{{_IG zT)wp7q6!LV2hQ3{y3%TD&U$!70enVGCh1hkrbroQhiwY3uC9YlhjK3c`a2S-+j`TN zxjC1|Dx|*mtNiXzXh(Lf;{|5r{ zw8F*xQ4Dl{PODJ+{kL?2zYjVugVVF*-|J6TnLVJb8h4&Me7|@Fc`gd;c@Jab-ZpuH z8bPdG_wH0v3TcxMD+u9ZA@(aC=Ipi>%vWbh{n5tt9s{uY=}#Z(Qz{4X82+APq4b!0 z{;w8bayymshH`9;Zc+C91dr@uEsL>yh#Y zB`ZQyX8(bCmgwt6$2w7`4rZq)tz)scG7#C1lO0TRUuo*L=(!)tXAi<$j|MP**@A)x z(W0rdFl*&dU*^+84>y~P${n{wwicf;d8xORmtehXU8yU0OiL5GsX8E&X5|^P%_yHn zTk=BZb#()f4j7$BSo5EPKUxU-hGmzB{_&A&vT>7s#1oMkr{pU`O)IKZJaKLtiROnE z%MtkEmvhmIF`Y!?=^bZpt9@xE9@^H`u#7~jgG{%M-6&}$^-EDwYW zQEMU|QhU$L+qIiD>wf)>)a%?=on9g`gT=o{p5EhTrakQFwJ$9pIFGjGu3)icueQ|W zmShqp0HN+BqgCMiPoJ(&Ut{D^RyU}DQfmR+~ZR=kBKAUD=lk5Pza?OHrye7{kJ$gS~$vfS9ioq7Y=p@^PvtQ+6(TuVwE4{ux72axL+~1@<|w5vGpIbQ@pb+!@HAZh{4t9IxSFR!!NbIdTl8*sYp$~l zRbIcb{$q0Zvg$x-RXQUlel7lp*=Y(cJYlf-8?b1?XXa}TNAjq&SEr3BD}xVF;>KJ# z9Z=NQ(v>d^o3Tk6qlF1mkOs=Z&8UGT>=+d--L)Uh-zb{R6}`37SK8jiZAmIKv|6Zm88*jU*18)85n`-J*ipzU|6p6IC;I>^G<_3F%cH>(u(<6y_ z>vp<&PJ!K73W$;f%1E0YjmOK%@`0R|u#zLx0SNS6yQU4kWCp*?NtjnkzXP!x2>T)% zTf)*qF`pQzb?4k&p#8N+KmVwW<^J>$GY#k}>fLU4si2V^yjbU#Yd2QBuwLZ?9$kRC zg$8@2(z|&*H2+$#@N|XCFhfdQnGZI7HGt;cR>vtzGVG7bhmU2bRnhiCaF{J;07b5$ z;U$nAIS8v!d>p-g;JZF80^|CPQKs0i*=J}Xgoaw6@<@HoNBWi|?6}+OTeDX<|6u5> zQ&qOQn%X`-!v$LYYPWb&-Zc-iQP--cz@)Q6R;gHTXC5R6N%9kBHQ8WUhJRq|w8?tj z=LJCQV}Y%vM$5o3-eP?3-4&U=ssrnP>POvwZM3Qf zbr%q&-XLF4NvAh(CIpleNjXVL^Hmq9eScWkE)f99_;Q0K*1v91KA<>>;8mUDdkpZBNa3lP` z*Dy$~x^s^2;NkcylB3Clv)j+Q+9&8ztB!yTEpk23+|(3E*#z_jK-mr;?(g}2&aYu6 z(O$wOk5#?+70@x%WGT1}NX?DL^G}DNasvk$fRbH8wqU{UuVXMgoy%)Q{n*C{aJ|4W zOUdojyxFTg!uDh=G2puXq9P>l17pmVNL^aQ(D=?O`{hth5>H=L<~u`MT>o(Lq(1y;XWyVtL{hTT%t4064+`dYU%alTofj}l%mOTwK_y*bP z2hq*Segg=%sUY`Y)fFoM=vVjT=#|GrjS2ZVB=%`G5Hy-AH$NY5?~HsQ8VC#Te)Wm7 zQ6_V=GKL2&&f;w!kp&l8vzW44-7ldg4o<^WG;2Yjj2|V`!3zxB+~_KerR5RLGW;vp zJu@9|pQy~#xd;ENp=Z@UE-Ta8KAfK-_LmxPZ@?l~S6oXtanGQ){7{k@)pxPF;UrCe^Fk7%- z;!LMG-UE)P8~7myw&XJ)k7V7UK+uQ!bh~FB6WH4TAMcZ|-x&XrkbK+;xN;TP-13Y^ z{{qXk=nrUUA5(^1w@LZy8Om1&4;pORC8SK)a9f7j+=?l!S2uRNn6F9x_(}qi?`3dNA#w2l))mH3wN1j2yBJ5_14%3L(;K&ZlXMBW&1+?58kVar@A@s3m z&tjN5htD~M=Ql*ke0;2tzPIIH(et|#YH=a|(Y30rVIChme8l<>zf}*^T%y9>AGzVH zmJ-2~Ws{KqwBZ@x5R>|Rk=0RK8fl>G$Q0DGi>J|SDM$oCdcfN)fEPs_K|0?_=j8Z3 zyPZn7B(v+o8q)#_*f7;c^|K3?&$8>zmU6uM)R7*Z6+ z+1AKGrGkA(YR`Z>o~77}G}FEo6Q`%3sA4?w3%>vMNMyQTbBR@*!60atL{1?&T1{6} zDL~C?k-fd~lA zPF?AcVG0m~w~B?jg3%Y&2c?K?_r@4wSUB7<`!4m%ieq#|M@&2s8tbfsA z1{zv93wk%6wQdf}1yYV6`_Mia$+G*iL_BHFF???CH)V%=CG}`~eSUI~JnC@Xm)Jax zikEb1@ZGPMkmo?z9K;4ksz|awpflkjc+$|OU2p5(p~FtY$OB*kO% zOa(amc4cB;G|7M-v%uI_2|`3v zTI2c5TX(Nh1hlrl-WLLV3IpVW*UWz_Q?fq}A6zj)&u1F&w6I6E+VXrbeDZab6O$=P zMoc~oIRNVfOqW0@r2}#C08h{^#IG`I#e^-DXUWHY z9cvJ=$STX?b`e|34risnfsuDCo7+Z%$0|l zv@XzX!z0bDSRg9j!UXSnYe++XAaqpVO}NxkC>cptaEe*Sa4{%Hlv{4}@odNg;NPU| 
z)Lla|ovG*-g?p`Hz+)(-*hSp^`;-b0fMeHjv*_#aea-ULG#FuY)0aq_$~8If$#gwt z%l0`@d$|Lf&p4x-`V84G5Daau9y+)#6|(bs&zNm^ycp`se}cNhp$gxLPi8GuFBjO< zq@wt}V7+*}qC7ojugd%e1v+}!FU3rnOMB>tbV|vCv?rY(pM9-YZFz5Vq(1?M{E5kg zk8#QE@2jZAZtJeyJa8h4n$C9*txDt>>@M$vhpoHHd^6)S`QN4h$uZsr+DGp!gzAnq zV5}-Bb*W9O>XF@MWE)Zd#Adej?F8%~wq9>ZFG`xhC8~jSS59*ob8gJpr-$On++luL zPD>25kQtXwQ`Xg^>n7*}eUws|3HW;sJKuXhG;`gHuV_NHg7Yr&G~ps54zz;dj@lig zbm={;<_0~(Vbm%h?4I*G8ro`d>J}SV9Gc^)&)r>N1q(Lh>C#)5drJDs;ozf$MxcUnWnQm|P8GE|5*J{#Fkh)3Uv0?fN>K_lj&r-?jO0;8K6;C297()ZuF#UYdP z^LsU4NCaiZN`Ep^#>v8j$5X--h+S4guPZslS(qO;dEF|qZc={z=Gh}SO_ zfNtiYwL__U=gTaHs+>`v2E1gP?^o}4T4ZFv?(EFBUKhLm;5k(_JUV(9iy;K;p=bN< z1uDwZ@6pG01qPUR1sWl@bOO(d86KY0!p}KgV56&C$F@OjbwUZozrAGdKTU`6$Ld8b zKwou>KUP5xjHuxz=#%*ggo$VSE3~bpB8!FpeP$b?-EdJ6bWWrE@PJ0pC`lla_@5Z% z21XXvn0AanVqiCHeEA9EtfDhcw;p}y`h|L6BhO|lAXtD$+4U>N)tLuW%^Fy+rb*HC(KFLWb^T| zo#i0&Z)si_h}bW|3A;4e1&vQfdom@-f~kX?`kg(tBLB;FrSD{&S(`&3v4(iMMJ|A6mOkpU_dTk>b|2Jg?)s~ zJ1MmdlQsj;Ezg?QCJ=f7JiJJ7Yk`Y>``O~^yV7Z?iI*o^`ytFipz!X2U5w#temTC1 zGp%oDjlB2NS#J~)pm=O!((8>=;!7gWmY&78?X6>oR0_2QMI2lYpIlIS!p?8yND~v}1UT|79VCMH;jnEB}3a#4dz+t^&?&&YZvNx{S7jbF1HA<4Wty9_WJ&cwBKbMQnbz@jUY81x>OJ*IW;z$9~Z z^epv`;ivm?1@D4AN;*%rZNSCiosZG@5Q!tceR!-<{OKf7>s!h;)O=w0EeED&jy0}Nb*Nsm(QNOT$=meT zy_*-`n6J8i75E-A1TuX7x;+{8Xe+)LFx+&u5LOZQe zPdGsjAMQ7F#=t^Trn}PNqHZdf0za44V$NTJg)d!B*ooI4T}J~TGjmx}Hh}#XHv$m5 zY6~2T?e5r2u{Gd%Y5$vuQ?RtA^7khtLiIG&a(s{hWg zD!+=3J3j!@8%E1!>Pkp*(<1*vmGhcr=t-1U2vwE>3Nru)0KvGI_rRp3NmvDo_t{VJ zG2M#h#5Vpju#omLRn$jk5vz(SrPyKOusKtUAABXS^B=2;^d(IQZ7t}P>i~69k}Q(b z-+W8$_Gr#k$l)O%EjMBKX@kN7Tpr1(>`3_2y9Rpkqj%r&F0qsHJoZ>Q&IE+O-prR) z*_#J;3MM8!L6Hm{sRH%rZ7Z%!`_`RBK>jrF{(gJsuM}GyH=toHZdjxP9j$;Y-3_W> z^6E%jET4IMBMIUF_NeAsGq1JQ*kLvRJN=+7_ZE0Jw~qiePw<<(1_m*6f^$)K0*Xa{{^$dE?y}^*wHvXjy6D-3kuU+92Wi%K=c4<= zR-gg0GzQ;k_gdhlo13N3?FCJ72M^Eb`R-YHR_$r`PP5Y@l=ZMOQ8zJ+bVAuJsDK8=P;tvW)8veoI zR`|6}B>?m}t731Spq{mpswO_k%8H+>?g3 ztf;JpgcSf~{$t3T(b^6R7+Qp-Zl8n)@eIV~%62NY&?AqBT((I|CZbEHrLD z2$W8pQSU2w2C>$lr;4dTd)(uEkD>nIyxfXuu)LhXpcbullF@;;;^`F%fvIM%9eS_n z`l~pFcnXo|Mb5=Kjzj1_;COjY317#XSD2J1t8eV}7nsK6FdQYVcGr_j+|12&7x0uZ z?Vm#$n{cRBERr= zRGtQCkMJBNGn@O^2o^Dmzrw(cAg9}yP=7Q`X3~-s0$Ee+_M zUP({nq60v01AuJs;2KmlGh@kC6@27~0}fIw)ekbsbDe{+9qjmtUUcwXBEyf_z~@hd zE(tJZ=N|C^1GkuHH1H{`lA7M{0gfK~1ewF;8E1wv&wiTo=M;Uvgo6el%8B8)A;!R` z!)hAp#)pF=p%yC1y~%UFehHrx4;Q^%2lstb=Qg2NWW6z|4X!EaZ)5T4?!5P;^|x0?BAj@Y{GKAv#@>RTbK0`<1V0XFonx%4K!t^V&!lzCxI` zR&t)-;m+)Obh2P`GEM;>Au`}Z1+^wez@JcLh^BIb8E7k36aa@OYjfL9(TYB#ss${rs#V_NoA)Fl2r+z z zbWqLq+&YB6@k%9m%`|-V-kIK6ueheBX5e?d%_4lPFH}+{D*zC3Pa&`1>AE#I4~5MKR_B$Om?%6gUL^N+5P-#aq`4MV3trssJ|yb@o@Md0 znx2A$7&%~EX?nejeN&WF5u+U`c+3JT>6D&ME&ck^iL_Wjp1w3f;_sZEMAWRrWzQnf zv-=nt{)5G~TNz(Lu$+skmb=$j{A3Mxwc{eJY_JN zDA+TThrc_~-v>pj)eJ?375cmVQzdmO0Lf$H;>H7P`wnaN%MH(lz84(SSx~xJ6_D}{X7TLq>?kWKeK9no0L(^!DgD!jyrw4H zu?HAhdB9)vhsNx6mj3=au=*gaU6>wH&0g?eR5l+oGJ$TGPW4|Dp`4FTXU8qz?ZC*Y zdt31+b@|sS#$ja2cf>}RKSJZhhXW#cRji?o+Xh=&^dp%`dzioaImKRqSGXIhu0W`r z9T#M+<6tKAYS$%yyXEIA5EsR+(ReSE7K-)t6Z8?z<}~5QJ^UR`NmXUBkiNv|?8u1a zj;6HoFfu~zmX%D=x1Y;gdaFgXrAU5fZz}O78rF1weQ~iNx&HBfgu`UdeYKF}U9b|! 
z^5YDC=>wm29rUqTEh_)W1irpxxyA){XZdDb!@y^t!4kA0S5a42*Yb2R2CyLn9kqf( zf-7{gU0%1nF4%Q~5)u*|TwHagW4Imq%>I9>V#dwl`U;N?RN{m)d5LoVuC`fLWV|9t zUA_2SjR>OV#0AVwP6n^7C!K}!uaE+#zaa=F*Y`FT21guCP(#+KdNZOe|FvhQD9B!8 zLgMd++IR!W$??+tB!*sfl{JXN?Q(l#3_R~eOGlUA)|MU|jPT_9s=qTJuo(HjS^zPV zRp%Z4Xi8Nn6Gt*%JBQVD@EIO1-pq+X{$bCFfp51#2`PNwh}#hlKB`$_@T!|DP~Ie} zh|~f}p-EqrXTn3x84V9k^KQE#Evjw}j{k!{c-0c>Yby1jd=UVi-Ujq^9#W?lA$8KX z6}z-^TclGpWUnVs=U-)(hX8iID!}3&m?UzgJ882D<}BXO@)e)4+>3r7c1K|D)*7|x z?u^2F;ZGPc?+6Db<;fuL7whl`($TZmvlw0XWAP_`V)J$042^N)mGPuZ=Lr{VFAgr- z+fGw0%>$0ME(rN94@THvC>PTa;?1wo2L_}Mdn6K;_SlzPfqJ#EIPzXTuG145O0%^St)&R2s zqteGB05~`VNPo*e;jAJ^Q1D}a!XB1X#^T^wwh=hyLrX0}Twj7VmIAJNE1-v=35{vh zx3GkcS4-=y*mB8_^dkGVi5`U^2PIA%ka7Nfz!%OilIU~3Z2%4OKlvMgZuWEswGnd4 zI`4--BR5vFx2gnywi1d9;LzjSony}<$Jcr3o&j|y2Jk3A!|Lr_6~Y=?|XzUZk(khjDatv?ng!scL%%XsQ6NE!0{b1jrF>TNUYH0n+V% z1x!zqwY!g7r$ym>yCkRxKprcgvI3ZBV0vd0ItkZgplW#b5+MEOIW}Gc1XajE??U=? zz@0t0(;io(&H`gZX6GAi8CrU70yzEwP=r_6KQwCvBTZi6gIN)we}FiHbhAO1_Tm;( z0C+!PT20DHNC+_10%%CU*xWwR-uC0#XCEL`Ez>oVKg%H={ExcnWSB!9ur#N3yo+l? zA*Pn6+xhP9jExPPCZEqASAR22#+Tg_4sUZM99f`z2pj$Ui5}*UQLb1 zQ)roj`QVqRK;b)X3IE+JZhBH<6*bPC9W0i}rhg&8civrGIln=cdEwJO#UFQ?6{r(B zj%l^bXt6w>(*3oH*@_P-aOZ@EE`r_gb)0FRoua9yju)8>)x|e(FtyMtHTERHZh*i^ zyof^E_o!<+`+h&v(*=Y{>!opVD;{$k^>j9ty>N0XFID(7S+RFU?`7`y>UT5AFBH9X z3<29TFGeV=$1{v{euo4T4RUdDVZ7@MRDumu5cL!V$WzhYucENqpOFNeMbA3Zjlyjr zBi2x=>JAT_JCD2khP{%lmy@H^>kPYdVrXX9k0&RO#|7A@F1^NE278v!5FvtRr+jZ< zKcex*Z1%7_&;~J7c*V-c#k4e=34k8fz6gUEGG#Y~Yy@uSh%IkCC#}&It}Py;-dSZn zK3uiLD&@P7aIc+f2pK3xH08KV6@KWroqJxpT-z&L5&YtHK%wh95YqxQnXmZzK2a*9 zvm#^^hoUgIW)@JRNho#75pJ8?On9r?MdTjEMsq=*j>zIii28rR5{cTsnZKvI-0+AS zl>RU+!(nIkvX!*0hBj2`%0?c>7EmU;TxUdK)fD`%_BuaF1AHGCSs&^z;oAR&GbEAD z`{k70`u)J_sLjgo9!iQ~b+&Yu&d2IPW|87Q)H#gg*$Z#77_IQw{nXgS%4BPGL?hS3 z7u5%^>cCFk^Llr4jzTM*>@k%B*>2o=UjCB z+T0UpeL($@o=sk!1|cvugxVDEsT3_eB`!>D&Mn`v+%JJzA5w%(r9ChtHgvis8D=a} z7qh(s+fQ3F)Sv>k&x>NiC$5Fq2DDOg^ftF7Q9%b$9XF<-BPvDld$tkKH2dA(9WTP& zkX@oGQzaK9Xi?={#l|pFiryEcB$%`UGx?qRcKyFGeoLZST^SE9^jay%=SRjv1@B-n zcAkf_zk5`Kf+o}Cv*nHY=OFu!oA1}W`&>{-HD%j=NAHw#v7FA8(>ON5k1O&+u{XNz zZC`E&wdum2Iczm2+Cl?z^w{B27(@=fEAZzF-*!E`S**MZ^v)T7Fy$176H8^Ouwx#6 z@5f(qC0Ee!ivPnnBDMXR*?Y!bxDm>ns`SK8CkVoTA6sb#CZU>|J1XVe=-j$2kxLCMLuK!$|QLiIMp4ZJr`eYROfU$xbfB9!a7jcwa@1Rm!r zOcV_9I^?j?`Z`fW!(K!d?bB{mc068Crbv;oP_OJ;CmEO=G<7(`BP^RO8ha8Dk7|l=eQPr4?z_peB!}+6< zN~_$GL#4^?zupK}P*74VoZg~*)Bp?Y>coYx2KHyXp?ZFeH+@0wewXJ2y_hG@tDltz ziq%;J+z}K$=eRJQ)zxeY&kb4GkBSI1vAs1sGwaSPq0aR*Qfdi@|`fwvg@ceg2_G`aW*GiVDecDp1C#~C(< zp0U%FN>@oYy1b%c%5zDbTS}C<*v#Cxc_9sQ9RtAF%4h6NWCp;C{Moe6wM;Kz@fLs>H^Jz(d zFFzt25s$AZqL4u@D{CMsanZI~bQltIYB`qz7yqep{#F$#zpnO9W|8yT2zHt^uFSWZ z(ot8jCgaY8Qtb4i0vb9loyGU%z*YEhTAGdV0VrAI;O0JaB?oZUKv4?dXzeRwT3#m0 zm9c*Ey}V#1e(g_Pk$I|y8)Du5cl*C4gLu#QAs^Ntr^RujA{cK~Bt*xtKYWnD>;~Jl z{f$gBVw61h<{;!eAq<{t)e=QT3xNyi6H!rF)a{JobCdYgKkllQ*i<`PA+vIJ3wNw6 zO+y$pyRst7g1=@p`ko|8;u?KAQ8(jp{m-w0{Q}Y%LGCpqz0$tL0U)q&Ve({vcAU*R zs)5rC8L3s5Nn2Z6PRBii5fIM11L-S3D2n1o3NIqc{FxFa(RqTIPqFp&$`h<*CvEig4B)!ol5^G*MCCZoe3-!>qGCBFayRVi-^y zb^A$)?#;UdW|M(mLF11`6hw;^hC2_}9D|$)s1-%Ix$!-H2u@24P*q&)Eyv@GYsL|- z%&hru7)q)*rsxO}L zwJ?8&!(|f3DcFy`jqykW8UPs2v9apSXfOxv3!g?oMamq%+bXa5+EwjCf`WqL#4_k_ z&y%^@=)yzXsxpJ{$=pZtDEQDBqIZZXE>1jMuRP? 
zj4iI*f8hLWi@0yb$IjS!Q^H%>jm)X}B`IlwnVs#GI2dv23-ssB;#(u=y;*%Tj`oYL2UtZxY23h%o6-6SXcEI`KHPk zZ{CA%GPm$hN1NZJ#B(P{e+qlo-4He}`f@fH6#iIK!xL-ImsR+Uwr<&)$tILlQaV2I z>lKND&EMi~nkWT4B0r%=qk=W5h%Zjqksw8Ii9 zPdPbzZUZJmKvXCk<G7#@;g57_5JXl7Yz5D`|Q2fT6?X{1`zxzLgS*)rHP@gUTuJzw(<#CjDSGIJgZ;rwqZfK6K4;(C4{##f+tb6c5a4gl6{D7#WkZzSj{lmBU|Ts zv-SE%nO5tm4j%0bQl6$(38~V!r;@}jvxkh#aUb~T%uyXA$(*!7PjP;B`4sLMobTUo zXH0D@69bv`k~LaS861rncFc7=%nvw=j~z{(R8-mPq~0SFcH2gLbudMJT*SHPZRA$vwhJ$Ahsd4a7sPcD1;dr##QZL$f zuiJ@o{BE~$S1G_0N~nD$HDhI`NZN?wq%q8&Rm$=`Y5FKc{M*3C@ygUoK+~<)LWm$6KV3T^RzuY z`tG623ThmRSHBh)32tXSjYUBBEkitowHjjy+KAS9(_}e1{k}J8%-muH zTLy{N27c3=%Ds^C$mZQI=N**Y9esRYDNYNwo0B!x5L1#>RT_HzB}t`~yMH2brD= zszw^((c}{>2#GE6tXpK@^DEbLZ0WP!jW1Ucc6-|B}vZ( zy9|2IK2Y1zRd6pu?V`XYb~yaU0kS2PKPR4_#NB~BSmIBjV!YL)-#ns|`;j*fQdeBj zWHrqmR#EEFpMaz1_#T&DNZH=$sLub)gmu;nOABwNl9}a)@z@Wy>*5u%U|KJJ38uoo z*qGen2X&`&ElxZ5RSUR-OTWPfQo@uQns~i6la;?52u*h+d}SEGY@O^iZuVv2jo&vB zJ6rYB_wWI0&0ntJSZ<+(UbWT3RG2H0tzIk2ia6hcG|oL0{_ajXd1Y*ud$pUIuyW2k zQ{LAv*&JN_>kBO_9J;XGamrC1F_JW{IqVB@ImZQ_j-ezADhC`gMf2GnOrr37gs7#Z zrQmwh)Y5W*)m^<8b7UU%!B<`0ZLWj4(AjwVvS+zaJ0fq?7d5>uh*Pg10GAbErRl%7 z&eUhztgg;VI&SMahg@B(EzeL2qUlz(#xfnaV&HY7^TJ#(qWG3mR|~i3?_tXua9dow z0SVSPC7%is=23~Tu&~nf(>JWKVfSM`CrcY6#zLJ(F;HvG`c7O~F=m(KZ*YE2lqSK8AGZhVSNto9E_hIxA1V=}AaQTz+789g@(y zcg|H=_ycSenMZ_QINBunCMG*wAO?2&V;fXyjq!P{)_A8z3)$YVbp+3gF>UkdQZm8X zefbJ#z6Reh>()?4@88nW9d#2itS#aYQig=k|F!kLpH?jpR1)eXSV~7OkxvK)no`O( zbLcrwjURlts#iTh8SE&C9b!mb8*ND}DNHvEMnKM|D%V zMiP=n1_l_1qUMR~k3HZj*U#TxvZ#_esy||Rgbh+X`(;?#wT=trg76g8@57gxfpf57 z#3FzDmZ2`s7D*XcLIYz)a&%zb_b|NM0Ou=kbehV%ifeAD_pg${O!xJ`Z+YIT<-`b! zX#e*Ky2B&4Ta~s5N2&!l)nlRqyPpS9hk#ZpIVGinLIxKHM?BZ%!%*KPU)`1V@|@$_ zXI?E6iK^bL;FS33r}GpA28tbL2g&DMHau7Qse$26c72C#%n~Ex9c~?3E^v>MF}d82 zw9oLn> zVut=!zMcrdIAtnl_SZJ?xAStA7O%+q8KegWPMUa%YRu-?E>k#_i6Zosbz)UxH&g(2 zAxk@)_ZBa>OPPybsJ}m?yw5`bHeQ4L7KHE8V~T&l|M&l#9h;pw(u~e~!UGIn<3pbr z8+R8-_liknvv>6j)sT;yoxPE!p&1l8=%TLM@lKVe`X5qeVXvbY0)J49~~cX%M)FH}1SR(&GrEU6B>v)cdAJ-)XXMk>(9%Qn)CwII@Aq zKT*a$@bjWtd;9x+c}7O`8NS0+)Wyl-audoWC~X}w`wUr&tVtq%D>;i^TnxFH2^RVW z1_CX_#PZw5x{SEpU-clDkO+zXE_5}+2i@jmyYPwqQrtwmO>JRe_% zuZQS820ec+vEW!{G3awgbL4(Qv8PE&%|RtX>6t*J=gxuvtrhd;m>L|ozK@|=){?X! 
z4ID`89W&7x#O)31WjO3PnPjDdt%?D;C#$6y3fV{Jml6E&AIrF61k!8mS^JpE1U;z~ zn%N__Okd;=j6~(%eQ%E=L@W<*oIYJ>Xo3jNwj*s zGJFsuTj4-gy;yhTsh9(z;CN*yJ8NW4pqRqg{$9-lLsL9#TRXfi(G4)fWH@yP?1L=&O2cl2gZa!uMxhALQp9ip1V=;^A&FqE$_M^ad8>2v9=le}< zw|*l`CHd@!SR`lja-T2{DSP%IkJA9rDAJM!N@zp8%7nP9tDUT zY2%|zy6Ypwk@sw`4Wq#G6!P-*jcvD+8UbjBDcOzo;xzVXwt>H2FM#cQXmmiA4M?o^cV7agEEqtlzNv{YxENW=MhzKC4N6;5Pz z{CXk|Y$8#toxG9!QjPlXZH*USX&-3k3q;DP_S{Z}>Sp=&nP`_v2Vwfpy~GIz5b{%l zRv1{VZ{BG>InvuS^ZY36Z2ciJ4-1@DKUr=HNP7StZwaG}D}e%6iU)lV!uuV=X@E<8 zV{C6?Y!hiO&uLh_n4orCz~TJ*uCQ(CPgP{|?+X&_sJZ6U>v(H}aoo1m)D4tg%GWB0 z{EMFFR-VnK>~PY(#9D?@k)#7>4E`-eMGbww_<{IKZoF)xfVT^N21Smwl{?s7_dr;^J$`60xkY`-7f(aCIPsZL$Z5lYO;Ry6Th6Q)pw|5ck zWp9PP9z!qYEaO3qy!URaDX1>z3fdXu?T3D0>ONN%&mA!`-Z7s3u>wF99ggX?p-(e3 zVbu2mf$2iOVl;pwKLegtT9kbL7~9*(UPCK+4Sd^+#xdAw3#Z+glgoHqY;jxbhMaZ1 z=^o*Evq~gyoL~fk5;|~{9kReaxuqH?N9B)aX?qb&DX=gpJ(VS)v}F}qgK5P8!DsgE z-R2uCnrHKBv6y`)?{ISsSz999rr~{oOf;U+oiX)JYL6?GSMEkLek#SBhJedg;IVee zMyQt6=?sU^>j~|PwbVEerx*`?FpNl%O0PPck0$#QWB})OU8rCdr2VWW?g)08IU`F@8xzw-rB?i&zs;+(rg;wn>uX7WBU8mVEL%Rm;H1FwnSz>mn(LTQH zT{GRgJzA_LLId}?V;cJiHrT(Bdr|P?a z4WM<;tU16pseA!e%J?Wb`|gO5^q@EK)#c=AQ5E5~??L7n3MrU5-&}l8Zkd~U-FeLH zAp6uJ&Gafe2>$2%;9Kmc?-Wru&}IW|hf5_cE7Dy>KS@%t;F$%3{Mj!m0m_(W=GIvh z4dE(ZvdS};3X6z=7_^VF?x$34`<$Jc<^-FjCYfu0bG0D6Dv(T%f3a4#Bu;uy{l{g( zJ7#|4oXohYYvc8J7NpYkwBDJH8eWWK^?X7bd*1tNtzq3k&5Z6wUWzqo+&qb>vWiMq zPNYDx!vjm`u>5Uf%qL=fa~QvnxA&v<6V{}E>Uw=lFP2xNWD&!)N6L=z&wW`Vk9cD_ zh0{MFhB)t1yJPz^KJVd%i(DPr8O*0~;iKTAI80qG9`SoHI!eGGdWJ4|NK4)6w_}f^ z7>Dy1j@J%zxK+1ub_U!zmobU_tJlp2Pm4X5uxEliu5X)Uy{iL6c=nt?4 zn!;g4iuq>k`VNrQ!ttB@>F&u(!vYyF5+X=j+e$#Uo35^IUHmqPCMBA2e^GU3;!;-i z=0f;o0zv!jDB|FfE8+lNx{2z!MHn?tTG4MbWcnM1qJyt4W>Y}DQ&<>0Kd%Yed$78= zXu%79GDphLMa8`3f%BbE+n5V+NukHmPO9UUaWnYW^l*0zJGSjcNH`a(WSl|yc+_~5 zv1&AaJpwTz6(5o)J8xQf@U?)pBtJJ#cI5G=`^f91`^>Q;>J=Kj>=z6r)hvHI>Q^9+ zT*}0mJ<3Ur3{7Db@Pb&+8lh+?5-Ida&eWTV2wo?|yP9w7yEtm%Xe zgK+j?p8XP(qZH6S11$wGTi=_Qr-B#y6#dKj^z~9`HZ^rfw1X-;oJRZ@+feKoKvN+W zyb*x7^tiVy^m;*s>KUZ^wrtCL8>@&W4g&HlI&af^{d#;VvNrjT(m63QTEhA^1^-p? 
z8~ogcu&DL_*8-SoEXBrH3B4;{I5eu7L&5v76izpgcS$A{4csAAXpQIDMU_OOe=#)h zAEJh26A#$upLV7h0L&TZ1sBAiKFVq9%Q6_EeMqC2ay^;qY=pH4_-a2xE+3G7{7oCm zwt6!5az(L$wnof+c|w)AY)XGKvWCfvDu(V{Uk45pdN4daJw2c!!D(PfC{|YENg0A@ z^5QXcHz5)2pbVy4fPq*EGrI4WnM3#(10=4jh&G!V$a@;uu>Vx1res`+@}zNCX|o=6 zzMEiyGo%BsN#2Em&^vp>09+^PipcGYtQd7X_Y`gmn;ghOsAnropugELQQes$Zh3b$ zcbHSgNMpG0#AdNRN}nmHb8Xf*jA>8*3oqXOE4hiQMyz=~@b?4YwwJhSMi>H>$U4NG zJ9GvMnspIqcOqqYX&yQD7i`dmTl@%~CkZ}>drXGJ*6RuRXR*(!B9hwMONg*a1qM5sPo0 z3!OS9d@~^@Lmi^hWKehI!Qw$S_1C(&cWDg7qO>&@y(b=gO>(#2x6uc+!&+Kam;RKqOtSHcO%PoxW`A<`1tk9spsAhxSQ zJxm?B#~lg@coP0l6G&&~9t*y$@7(?L_|ZNuH{SRJ&^kxHQu7Ey$W2PY9<4p$7_en} zIsD)0AeZWZR5{I2FRJ0AAV!lrHM|YuIi_D8IdE33wP{Uam%_tXZwE0Ph6JAD&+=k* zI}OB2!DT%-u+Oj&KGoJq*|@u()}Mv2t-PC*dkq?|bpq{LpZSX=&HJ*m82mRTqs#A-14OGrl@iRI&cvt{H zJan*5858`EcyM$hX%pm1lYjYlh*AMZijM%|wDVqI@zx~0sNtt)FR}PmMrY8y0~*h3 zo^w||a4tbGz(TFn0PdRjPNWiICk`_lI8qk$RXrqRU#1U+YGf%5H3<%!d3PP>t1qEo zNj6H-arc9B(-K?G`R_;;EYT3!V!J)5Y3O2gXhkh+l8T=zOLqkGr{Bto!bx&@HSB(t zF7Iq-gS)c{`a`UiMcFKJs%In13NU|UPfRXC{dalfhK8sa1f3V;^M6%44f16~y0V2V zTQ5e=SW|?_N>vc)2{Bu1kWb?8GfA>@vS$zI@)06FHhW^Ocdhg464S?JXB7sokNe5L z)eW`5>N(1p*{2W`B$84oHxAkDtVmV7(u;uJ2sTUOX3RW>-dW_GNFA1XPVxK*vmU1W z28Pj*OliVf7{mbX3Dz~x(Xqbz35%dU_JufR>Oi2#HcQqG?dQ2yHa7S4sPw!cn?oZp)6VQcMjDii zjHqDUm|Utsu4s7tw}$!!e2a~v^@wt9Q-)(voSRLX1a(dp&!GqZ=KbFay!wN^w#r(8 z%9a~Ct?820V%OwvD?e6Gn4p6rY6A)76@^15gXEg7=V=Om>Ti{Fei37SQCh32upM3q zSU3Abg$6Yo8)t_PLEQMv+km8$?W6HpYlhBHMEN7nPdPC+mtnq2>E{ojssVm)ULs<) z-s}*3AhmiaTSzOb`o+A!}u_u;S_ zOZ!tP>+S7r?ZPsjoTP3#W3HGm-P#99k$ z1A{T%SFdI_pT$Uf_i_8Le6?JAFUoq^-28}6s4fsA_7-(4Q5{?$p4W%Yyl!r8!0d)x zD!ZW}3G|&4i`?{d&JtxQOdpp$lX>+XGaX1sGVKiC9yi0#b`KQu6bLHSavWCIs8(I? zi0*-gfyvyKeb<#WR4_w<@>mffi~pUxK$iawfBAOl=`10|+TegBPIQ0zbS-}%(fMd9 zw&2ar-Y~LWyYtQwhyBCLi~SAbfpNyfHnG3`4pash2*mAvGb>-cSbh&0Qmw{Oh|6Vk zTM(^u={SqP__A81KYAS!*Gu>x(IqkiY= z6K6ablaFaLocH*3pGD(bUK%1XlB_TT*FkVs78O~eE(L?67LaA}r?OdQCq6I`S8_(r z=i#!&4emU>Ic8SunO=Vt1BvJ?mYi)txS&TV_gv%K@;Rl@ z+9L}~wEuve&X0F|W&a@NMH9O|m?4-E7;0`hG|zo>L3EN9?Eh2jxlkpu{-OanN7**m z{HXW^L5Ns=HVj+q4l5XzI1skx_(znBv3$iEnwL_;L;w}PC?%0@U}r&Lt_hDULg-z9 zb-2fSc#>;N5SpP?B8`sOE~B@yhK!mn&o}WWZ$~?xgTMQ}(Fcy_dCeO!`=jiDhB^z} z_H#(6R3gopVV86wJ-JbB!51|{>wm>a_agw}=b_^GDHP<@9#^?2*~A73_n182S-EUq z2-Gk-ol=y%Z5ukvGbspV>|ubN7J4aKS@m~;6Z4+}S!AL*wcpBR^P258d5)CtAFH`h znB?>hzFma`5b;+Uk_8{_Rv=eCB9eaw&)$-dRBgEdKEO--`_ILrMFRwQl|GpY3rsSk zVZ(37I3LhIs$+YH0=N)e%Bs8UOOEvN;>&6hN9qt#0Fa523h@z+d=0eO$}_o`hw^O zOCOL@a?T4P8O1O$^R61=v)uC6W1A?o7(z&aOb={o19H0*1w1mCrvQk#6aqiimJ3Ab z?2T`S^Bh|>gQzF4t1~AoEFKc7NNpqy({KGbI#cQoHIXvXa+FOq*YPC?(3rC2Z^ENorp zQD!L0_JMyVFaSXQpw~pK`T4V`=x*mt@kF9}pBA3a!1rnHSCaV2K7@v$4B!iKy7^_G zKRq?Id7D238MrJg3nd02a)8oCUbDTiNSMLpCn5g}7OiqWgDn+h=orR7QoC)y!n|q8 zS~Iyw_j>x`Vt1roU`Y>39CR?GMVv6;uPq)OR>>c>J$MGx(GN$aQ7>_hYLWE zbMxVqUx3K)Mq+CnD|F{gtEKDDO}UZs+D-a51R{lJLq^h@3a{Z`G@OD!mGQ;JRAtuc zIm@NW`f9)oE53U*k7#G#_G1=3t+z5SUt{uysaR7tx6Ov58>)mn#QRpt)Z8wY8K_1W23A>nw&Iv?lw z^1ZnSesGoq{=lo0mh%{Cq)@`&wx=MXC}BVh@KPVb!D`ey_$XngH@$I0eLb^3K*<{! zEPmpg6!JCXn&^k-&vL)fSoF}9_{R&p_w~Hfhr}-Z`Ma(MD5eAjbDPa`t0yC+PJeuj zE2=XB|JUv>K}!zkT_D>86CWD<^Zbe6iQQ@rQ0iKRsIy~Q)5!qI^%b!3%?)eVlWc11ewzp1c--il140ZQzD58`MTUYVA-(k_w+2b4uh~Eo=Ju5kU0XsIT zIKglRiNqCK?17Z@&0y91&+8wC{zmIXnH+<*VE6DJnS8)$)gKRc-#b3S8KastleA)s zSXR=8FQw8~hYHcm*r#z&7^JYYv9h87PcF3kr!j8AKwPK?=qfM6&p~!HxZKR3mav}U7oJZ3e0R8I0 zDU~x*+wzHoMnZHp@;%>sXMj8KWHOm`cRQ{7*;7SqeGmF8mFn=fr(8+0-xa9?h9JO! 
zgLt5_2AtWO@5FD=3AKoNFtxj$tuU9oYWaPZx9w^VYlv;^v6nodyStkN_H0MrJ_hBe@a0L0xUFvj?n)-I4=u*RgTn1sn)E z1JxhMtiULD?uKSXBV1VK`cRzl8C&Z+9S^RW>ubYM#ysjWKS!Pv3|baA5uH(FFl;tp zBs;yW3X~6v+27jQV(?lnIA4}80$zlLhvw9|$rMZ*m+o@ddn{EpFMw5#qYkH>K9Dg; zB7-@$HmZ*EF}@~?Qwl;_5dA|50UHpdakIvZjmwmTAYcik)xGiq(WQ%}J3sCA(1Jx} zvC&yRCbcYC_paFjiw3zAi$N@O`=%w(rgGsQjce;y%{7vvgCiyTejMmTj973dRJ_97 zKRD>R7#1Wa7kVDpa+S|nR@c@h+lPk{8mj)~{%JbW`pP+;Lr~n;;?;%>Lr$b2OSEZ(XKj*|{sj^1UD4@1YOMF<)T90ue_dN7 z+-Ac8D&$ITCyA$8P3KxNFm?3UI{B-`YYL8&wRpkPzS;%#_uf%!Fq6>{&QQ-0;#7GP zhIo+Hqo}AjYBt9Y^G;&NzEBZlxzjPt0K&WLi79*3UoZ=weomlyr~<--yZPF?5IR>} zI0>mj3+=YQ#D;&4#yho~6x&2qmAAFiZ`r{avTNK0~ z0pp8_N24$j$Vj{XGMD)US=GM&mix?3hEB`(q!hUpYsUxyAWZSGM|62SR+eqrCw;-Y zJur#kdDD+nZO&xZA-Nwr(a|A_Io7`VTMGGgdf47L%yq-6x$0=7@`_<0i(iJxNZV>20siiK?2|6z-_h%1x zqXH2Szl|lSytEyXQ#+ejd5Aym<~T_i7|tx13@+IxU)ORwT=TIVcsRD(J3h{5YE7n) z2}7ZeR#TCe^6lMQ$HKY`TCyJLZgj+|q99&Xc_F*&z^)o`Tv>mr1b*9UYjR(G+}H10 zX+0;a*RKkuT5PYu<-YXen_j8W*FJ^arh^~%G zweUy(u^emsK3l04PjW0V1`|F{g9{1|2wJ+&BsgSbWRNsDLi6_dU`^+4d-GQ-7xru6 zsCN5|k?qzEOPcBj7~dDRmUV!Cn)!QWX(+_?1ev48yCY=uoUWiP0nB%}gna&>bKR&? zRnN}=IuGU-F9IVBFfVXB04Susbb}(3LTI>L%d3Iw9_Z`4frHP5mQk9$uypWm;^Pw) z$^u_Y|02d5e$s&7t;ZaemaMWp{peUOS8;wJcoNSa4NqsKJ+{Boqqu^`37_Xs{n6Ht z`|hX|%3`#qNN%w`)$z3r5ENYa9%=zt@KJmipo*->mKj48!qL4nH z@tQ&~Yo50X@n{}M@aW)0@69Ph?oAZwWT~>tHYeioU?o>XbOlXFZJOOK#Yb7AR-lIg zkNqcn;F}S&3p<5O=89;ne=Z=#Ut>8a&#g_Oba3qGCX^dOE0+=-NJAMlZwuWKOar1 zg!2675y zwX{<1(!e<_+~meb0VR0Ovew@{osMT`<3n$Q{|ZEPM-_`*0I00EM@qRP-;7{_)zDAD z7Hq}BWhd?Ncv6ssU*lHUp`9{i5YfjuNk4A$`hUmRU#?=U+)b$lZSW0Wm*ypfCX0nS zkx$%Rf?RIK6E&r(Ziq5tFBF?hrlmC%DB_vWq{>N<_N<@Mr&uOAEPt;ZBUQhC&7CIC zCbwWrU~qc)>)IrE7`eAh=$AMDQ~f4o>*Bj^1GUjgnsgtSSWkvGcLI^%>jcthx!SWX z9iPoO7@jT9fsyj-bRR1ewTG<)e2z?<=cej++;vN53SR;%XRTNYcLtiDHtD?GHo0SO zc?EH$dy`!s1%0-6BWG5ti%oaQ8C$r2IQU$!LI^Z{p9R!H*NHe+WaYeGs47YJWH&#+ z*0dkaWp*~I{Gk;FkF58bdLt7*(G`uO{^Hr3k6ldE?=2p$I;aP`(zgnLG4J|rjEz1a zRBq-{TVC*6jm6BCfKAj+?VW2>MxgNBV)ihLQ78B*rRtL(jSl-n9-3F0Y$2jzyu-4Sl+G5Yj5nc$1p z8pB;L-XRNj!$VtE9i_eAoa=f;d+2%x2gdX=Il01Sd{(ir4t~nmWWdkvI&M8eQUfT_ zUjut|hDTPc&_k}v*v-S3pahZB1m)b+!Q^Q*jdzcD$*mS4&j`W+_NpNckVhfmJ7(b- zr$y+PiaqrYGi-DS!g0w18IsmxU;m$&zaPmGF>l~15_zv5h6DNm$baapg(Qa*>W^Gj znris*cok@}NTbY#A9S%)d$4u7*L!*su>ss4oR1sj!?AIv9TIP)#d+XGPywfeY1gzg zoj1i9w#>=HYC!n7l5v)UM?{*ArexXFmg~#PAY?Lf>lShEo$56$*+ZZiNc-kn|mkp^9+vo2#?$T<%hGP9wv{oSDqn~qk`W<d+7JKtj2&OtThcV`n=kax*4(crS7iC7omazB&wLqWLL^|rl7TRd5%qJxIA zW|jL!ZdpS{>#MjdT?H3I0({nGBQ%@8^{%-3&TmL`_NbQ3ec1R_1z{~{hDS@}+vjRc zZTr>`I>Y2G$?Yst*4;rqPla~n-U0%J;e~!%*KG?J{=EsPT~t4GP~#3D{q8nQ5x*oZ z_t#gxENaZTtpZ?-cB@~>zamC`D~*<3kp!Jzftf3Sd!Gy(F5uD&(uA|Dkbk^n$Qxz$ zb9A4<<&BdDjTz7V@UA16a@`9k!ButVPyGi}{UN1I&MFEX#_K!M?N^{+0Kct&J3Kc3 z$||DP)1QC;cGiiQx;%K=?}dV@-S&akuJ+OVuzlf&bhX1M^5B1*FrgHZ!>&^fYxRi+ z!&4{RYzQiFU`g!^66$VRdcfy_5PK|~b2(1W;m@ye@??FtGK#X}KHRCGE+%8{M=9+} z8+c7SW^aE4!N9U2qnDdxp9aATKKVbz8$q)5%UFY@*BC3+uSYg(A0&Ol{hMbvsf zvn%F?4XLUK;@Z0vLm8_u6$nO3)d$wzu&yW%Gt7S4{?Qemgz0Ivxm(R_9+hkQk*TSp zv#~lGv|}49(zLM(nZs}L6P7^lOcpnfEkSq1Y*>E6Q@E%(TE+vc*9E`N&-?kBlY&OHBwZiS4J`uo|6 zi~90ComO^6MF|XddDfXZa2QRN`Vsg2Q6F_Ss|zE@*9L9!h6ETgqa;@yd7?HW94r(D z;dr0waA&xGZuuQOEAplDJVZlOO-mdEEL5hp+L77(K`I zucITW34+yJpNk#Bh5Zo(YkU(lw#fUd22*o9!V_%Z>$c{%sE2`2XB3N{S5C|+{!U># zg?7-7pdo75nGiD)V-$S2Yq>wW5h<7Qa3#JwE!V82+YF}CCE>ai6@N=jLa-vslGd^7 z=)}eHK2;twlV(y_rt!60)yBCDF9Xdt^QtI$^19)LXOS^W=3@zY&4b zBntdc7rpNe3FPw|F>)q8Kogh3?dG>G-FXSZ8uvc{N%v>(yLLbsC4NEN_yCz|e}}a< z$Zu_fY{DVaff_yB;|yB}*}^s(vV_QHoZ}?8cqdFcTqEWXq{`>-&M*u^2^fCq>^@Gfk0xPs^ws1&7*1cQWblOef>K4*Knu434=!T~zdYiG+ z1YUFHo;i^E)1c7&KFN+r)P#LptR;Qs*RBsOQK#$a3CFYJNh8>LzwrSDnCZ3}ZD|{B 
zH9g~9U-J~uOMs54x`^dZn9YB9G3dh}4kL{0ooqF%TR=?&!$%lT!QN4~?-h}t7_9Bn zgtBZ=y&r-WA!IrJ?%#QOcrn3UdFw;%=%n*pYoc1))AY`+jp{avIwWr>NgOY+hL9cl z+_8Zu$t>}zaV%+Q=TBuMbI!-|zOQw`Z+11~N2hBRuSa+_7**_tBIU_PXkvnP7bcJn z9di#+xr=v=*x%}y%qXj7l^PyH``F}G!jH=$@{EB{h~S+IS^BTto=`I9&k^6^r)If1 zqer$4M>B;8XTH8eK?YM@T^Kyg;6c9oU|EL!}IV&>l=ba1ylJF6X1`;S9j0S zI(tjZT2z)Z$zHqg-bzeK6tg?kJ{=dwt^(3xAzmV*Ho&>p%sGfp6m|zN_3CYF*+nWu^CToq~-A7wm~LtQt~q zr=B{4{Hn^O2rbtmFp!im=gJM>K)!o}$pnT&xKb;w!}9}mm1I@#JD04F4!+E7N8$e# zyPh~59$ET1tE|E+1XG5Rzx=q`Xwx`vPC8dx;OUTWaCspXp%g~nnux7-9{?sv#fclf zoY1zBfXa`r3sfaN6uSd!Ybvs-0rZ$XW;(3MarbcBXpsfgziQGiLAnBZf{(y`EPR-m zm1^T|$*Y;ugr0ST(0^R+%k~~ZnZw;-lt&Z|wKaG%S9f(gnZG5Pia| zs;-8NF6i*aNcmH*mlw}2>ca0yb}cv|%K>gZCIS1?yi~mRU?OuEA%A1%F)(!dPaoB` zrF?)-w)9|KcelL`vGGqqY$T`O$@S`i>3Req@@vcv&O zQ@Hqbaaeonu@e7;Hup@M2DuyYM)7Qj1z>$JQezd2(z0|h|7A2CqX$kMFL?t&-F zQoQnMWU~Fg^|ng79O^d}>XxYAtNiD@Y$$Rl3PNX2#^MFfd-g2cY?k!;s!;Vk$0B!+ zQ{GlzCjGab)sOo*nmmg-#OzQXbg?6U1`%?JU**Le|F;n*Nj{$2@JV%wA()FIMrxvm zeM0~&-kTv=fyEP7K9AjuyGXkiE4rsGP2i3IzM-4{-uf8Ls(Fof@W9m@pq7#rn25%y z^H0pl=obG7L(f{6E|#@dYP%a1E9sM^bZX9(dis(z6;|jNfaD!U;>cf1Ma!mGgw-el z2Nv|;MEkc*OX)HSCNF`^^7^i+Ij?r%Fs+fvWuPVT^IrO>(|Tzsc{p}~DXGs*AI<%3 z&qpN@!HxS3lXoXB)&{)z-d9@)nI;o-uKP4cURydvR2dy+R3pDgS5oep{yKi%bX1xd zmDHSxdiN|`Za@uv_!S<+R@YeRHoHXK8?jB9vcI zh}*Il{Os3yspY)3M0d~&OI{?Lu{(izu=_wl`ZeLjGirIFU-KNoDyrS3u}n21HDUDp z84Y+(kqQqDcPl&u5K0wG%VTq z2)K;$CXV|n!vMw8?W1L};vY#kLmI)sxE9bUuBfCmcBRapNc3=Mx2NXWLF8kP^#i$V z@w6_@+uHrF$W`r(a+^2FllO(#OF}!^gmecwF+#jPqXj7m%3|LG<6E@%mg@H;4z21E zY=rrC8`4J}M>?^-Ik*Q^B=s*@s@oD^WrAHHZ!fzrm}@6Jdr}#{C<~a=S!+CVWGTe# zQ*m#qgAN8o!4>FT(j1TIiLen;sWhca=%)KCs&xO%gd5WsG9Y{KeB3gm)2)gVMhGkQDHI|`?i#_08{NF!U1t^jaGHd^17lMeO zk@|^jN8j7#t0{#B3)rhK;u*i#TM*c1{Lg00j7~@TSpop1JIW4!+qPDrr)-5J1XI?+68-E>b!a+m7usQ!hoZT}d> zWo5OFQq}5#uJ?GTYu&h3ccV>PDtB4%{)b&p7+SR%ftc%V#*@Cc1+L)jlm=!I%R?_w z@2?T8^NfYZ-!ejsbD{z9zxoRk2!Ghz?e27K3>y!xx7qK9w{;#@V7z80ejmPCM-?s+ zCpPJ4&Tz9V`*`&m4`@88>Iu+?-VVh^aK|&l$y~#dG}Q5Tg8XCyJy=$VE@;6jLsjMx z13+*-?rSX@$c|9}CU0S(&J<`khN!bc!D7Pif_o&;A|R)bW}ImZgwuuGhZ?j`;Of|t zJeOEmZj}Sus2FJQQQtI&uAeem&Ld)qc<`RC9gi|{rG>rUXb4{+ zPG5e@VcMhl{ZR!bKHn3e*x@g9GtctmOOMVL z))iCK_?FZLH}%@mA04(UoIMCyKQUwEr4J^qu^;ZLMQ}y!x2gg6JS;_`U?sLwOnE+- z=xqYmb*gHIm{?b>=)5truvh)fz>LzvZHi~T6BVJ&^Ii~#G^+%+FhPIB8S?b7jOzL@ zlH7%4@>Tg=2cEWLdj$8p9;#aifrXQob-L@^!oB zHVX9DvJ(fS!|{GvZ$|~$(%w9|#!o)P#l1y6?dY+o92AUF>P}!PUscgA!inqy-gnp3 z1(cS&-yl=@}aqr*99t3}bBhAcC5GP=Q=-855yq zt18xcI=aa<1Zo z&|z|Xa#EBs<)t3kh;?JKIt1J7v5u4#f4-rRwaLzHKfN|Z@GFsnq&{P$#d{luKcMv-qP}g9t zH6ci&y>gZiUeZw}(ufGS%JyD($zfT1JY7|El$+g{+&_L|p#Le=_V;+Q8XRfyhh`Q>8w76K z%dT18-iVJmoiGQB+LLD%5`GU56d_&s1Cku~*I%a?afs8#b>#M?yle4owvY)Wdc@Fc zzSMIxK`octJ?M>{OE~c_(-v|n)N_ShMGiAlrTS%GX{xFUtZLkSt(_csJFV&%ma3JY-Ur++!4cim z{S+*ErU#S-0vVB@4Prxw6Gon9$S*k&{lIE5J9>*Rv;Q=V7>=r>u&+I(ZE3-A8$`>fE$oGcU`X!Yr zU`zb6+PRggjZJG19~_Ihr~Dq<3Yx$1LW1_OXT~pcvHaTv47JmagYYP&e1|k9vq!`U&(}!kTowqqo)c z2Dr(j|FV%qU66tZqYPxAJTW#lerCjyxbM}U4a6FiSoKCp^ghl?$Igoxt_4(v4k(j zv$}OYQ6ORg=;oxj;6NTcxIbRr4L%v-Jv9oGG$!0yqSQllE;10 z;>{sNs&_X%pU-Z$=fj0z{o0J)c55N$$ zxAMHK$Re}s9Qi(nTBu?dG_5!btv>@`1n~!rKXQ2=hf>04HdI4*jx+jBMswj~Td707 z8*3!yHH?@^^fri3YOh{MMLC3TOy==A2)Xor+z!d6PL)H5+V!8kD%@`zt%+Ag1b>Jt zVYcn@dAW?eph7)A7f-wnsur1^7(xWnU|-Cgd$6 z+6BL2#_b|E)lJ9L2qo~CP=UytB-uAszZ;=Q=d!z(?f_5)16bCwMH&G!8dtxcQk#I)0$=P!+LEhMkf%g)UQMJ>_Fe<;gA7I2O|}IN zVPinvr2E!!eK=}1YB|Rvra#i~>SVHBR8SHCK*72zFU@PPSO%0nz+a668~Csk6$!U& z6ctIcBppm^|GPr&MtpZnJs4lqEr1vWv|4%a_Z)huPc~fxTf-U6(P?imc9pvkzT0b= z-Z;IO^;9-ao#ggH0xHhrJc2VEa8rOh$_0QXD4K-rxXveq)+jPe=R|N`9l6&7h0@o4 
zQt~Mg{n0<>m2*0Lwkvq}Pl6^GEIh}ab4lb#dOk^8B~nAH<|+$c2SUqEDvDTgnMP}} zY0Zu$v&WBa>i%4Kl|MDHBTDfa`BAQbARL)(nl^cX?g~;F>nn!soeA$ni@nyl=DYayTwQs zx+eC^Ao}gd^0;Sk!u{`lZUl(ht7Ou=6zRw?(BDz1?ZvLFa>51mra%5*A+}zemClJ- zL`I(MfN{V@38dp1_~n^_bq*lH0zCwPMBco4bEZ>576Cr0xhsxO))#TT?xI7 zmAYFH`ephhUXo9q4=c;|MpoDJ-r6r5sbO{8!nLd+kk#9kzv??S*$_8HbWUW^Z&8L1 zw_FO)wrELnjAsl}BeCN|=n%Gjtqh3q#L8UtUpZ;M8kHL=rkeGpdKIFIL5Ah!(~qAa zTA3yyNK83dvb#8xM z@aQPZ%=`!fRN+Zl7;D6^%LUF~S>Wk@ojdYSU%00O-J-si5sKE=bJ$<|2w}FCK-Rkl zB5$>f=6~s{?jI1u#!ecfah3m%sIP#kD(bdYv5=B30R^O`JEaBblOEn34n{>^7K$@(0xPv`>N9KfE&VivvvVyr^wB+0*^}@w>ddHeGQ3 zR~EYFdm)9H*1>kQ){S5Y&jBHN;m$d<$335aSV$@2lO?D%)*hR&zIeAluXH#3Q-F_%O-pV?VkJIb zA%-SyyJm=9gv;qM2bFp) ztVWub5krow`l6#&lDGh9Lb1&Ln)#j@pe4q&_UprAGM69|WZ`e3IZYPba(|h$Y#gH- zS;A!karh@R(oR3f0wWW1?HivOQo80<>X>uk=_>B35pyw)q2tYWj%39Ggndj2`}&5d z@ljYrbI58EuKH{Q-N$;v(*$J|Yi>P!!0$Vn(`*JRxy$zWto;bSu#0B#kdh75>92SPKgc4b3J1v&H^B)-+?3n-u%EKzW=BQjBhbl~Ctoij)lfIL?m-=z&!RX98aF5cYu&do0N|89<0 zZn;HHaeW|?V153X#Gv5~Ss?olx@fYEo|sZw>;*|1FeD(@$if$OFGTOa-Hk}y!Wnj0nv zIXbcp4Gnn>esuj{o?&xCtG;Z-71Mm8;LvS)WOEO^T|!Kg=m`HaiuH&2>x93(Q3470 z0l;{clg}FM;hKl|9i< z?05xQ1DhFdg;Lz`_v%Luuf8WKrmFk3>T_Zyxe;b_W6Ea>x;V}ln>nxBa&tc0bo`U-oOKng z+dmAgI~vO_DrofExMyTI2@c(5(PCyG4bJ?VIk9s&Qm!|_p5nA^xt~km*{p|9$SlkX zO_&qI;eM|s&AL&ES>d)++;sJ%+>J=lL$pvSFb?6ak0_;W*rc!NT-I`Skz8BXz?pLG@HaxTw{d(A0c{oG(EYT8vzCTuzB!<>v^fBC zIAzcq6{+cC)l11OAbkV%zdX=6kE~5Sd2}TlF37=hB}K|M40X-PsK;hg1cDaJf?lw)-YAK(hfB@ZZ&<#2j^ zER0!J%-#3PnG!b=V8YIK+QnEHwCz9*a8g;ySYc`@fwya>`6lfY;DFViyLInC&t8l|F zntCmVm2~_~e@Z-~#J61BJe^zAtKR-|VFPOF)jfAXUD|-cyJ%+L=xV6wDmhVgP1!(b zasM3@0fdsq7bQ0MXo7mHy>z$5uPGH>2H&7kB00?G*K;P(nlJ}sL z>Hy=!n^!*Om7MGzTIm`2yT;q{rxmofU? z2Dix~Z;*rnCm@jFgo|by8X56Qqk-%NP^%>2xWT{Y;J)cEC8a3(yB$~i_E#uWYP#^n%xr%>by z1Vg8t6Z99jR1@utmA?~R1*Lj*86H?U;@NE}+@S{=97k0~2>=BGVfEyO>yd@EaU^4$ z1s(?61z<4d7=7sLII`18$O*f|JIN2o_!!s)(7bn`Mgmn`4Ga$aTOfd!48QG+(?NSZ zt8&c=cnI%b12JmfNriWqNxa!7tjy@c1|myKR7DS5iG&we*pKwCFkmODf@w9N(PvhR z`#(h)p=ZxV8!bFQ<7!jvF`C~I0UM)Y#huN(4gFt|>%Ft%5c;5s^@k)dlbFGYoc$~*#QZM3e6{?&ngpm}J9 zG&jM@aw0IqR8WBO{xzcGx6R4qg(FaX1FcdcQ`6UJ>FFzK)^R!Hru0_J&3X2(Au#@A*%NbhV6B9rvkSeb+2C`)uAx}$=$)OW`M)ocACOU3ZBP8*A5(T$YBIPm%4(Izz%r!@74 z?CXh|G3ghH4NN9R)qIn3-tTYkQS6Kk7hB*=1YV%OF^$CIuvh^fviI0=*?JCWkvWEN zqcsy@BAs(JnIQ@36BdnG9Qyvmin{^6ZEq^y!@GrrCZuSKUsyMdpMDkcZtZBM4 zZBhI;cRTC-(ZHt9odBa6Am{_w96HH{1&=?brFdRq6Ower=BCK#QF3j8mK`=e{*lS^ z9qoBs8)vhY9MAEOJ)a~X05BbP=_KE}n=-Xp=J3yGEgD1Djtb0dSWbXI1l)Jx= zsI@yS;PaJ9%=z%cK|cG_hgZ(dC2yn{g+j184f(SC+NY21dCpfOcX;8upB3YBg{58$Uw zbJoCv^D&JAhXM!cNh*yOC+4b+escT>lbAtsX1Q(@pjHikruKbh5dKx444BN#iAXE4 z)yuW%cikA3Kg-1f`EM($VkIz?YmEo}l6grUsM|AgV@PkK+>}o4S$6pZ;;GlTp4gyd zX!q=#hTdxX!CkAyQ(Vu3mq-uMp|Pk%mn{P4=|evxe4v+qOwR?kW$9thaeNO0N64ckcvvd--Zj?PguZayeGQc5<74r2 zQ2{4(`G*U60gFm8KSD5o*)f={G{OOsvA}kUB08h4?$$5pCV$ChLY!r*`n9vEJDT8^ z`MP9zxY#n|C5q|NmBE;A$W3c6UY_JGKr4sff#OSs%P5xdLv)RMNj(%uS70$EXE7yb z*A)>MXjptDmBjIJ4jt0v)pHNY&rf5*AT=Ss#N$jE(Xjhc4Sh(X_z^`bJ?dirwA;j+dC>a914m`8$b{uRP|^;=rircW>k zU(P3oEG;jJfZo?|rYON9Wb*pIKP1KQGy8uJ*c=SrR4=eJEBWrSi<9 zYd@5js=Di z5KY?wsA)iJf6T2)oE=VMhx<@Ig50|h(pBn}9_+P0wD2(E6AfG%?DPbtLA!gFm4#M* zI0Q+Tc)~mx+m1asvS`$?Z~%*EqwMz<&z(8w0)F!*Ukp;vr()Pkyam`|CT#HrD&{+8 z;&Mbd@p~$Y%&w+9*Mg>e*@gT*1F?=5b|ii6J5OET+;1rZm!V<$g-6+BqH)Nwp&M{B z%YPMzll&e&md@4~cR19iS4nK#^uNM8fN`<6z``3rJX;ox{Vu#Um}&Fzj1AC9Vyd4 zqt6*hl9*khLw_Sb!?qeBBRzTr^R7Z(4t)mMRE#S86D%27l#{Qu)z!Bk^1D}sIQY7b z9j;AtRWVN~3}it_deWP*Sx2)eJMumlDP8$YwV)5Z(Fj6k$4l_bdaQZxRD3!AK)}p3 z*2Taa^X+(3k}&)dkx@J|FAqR#VCWGJxOJ<|02$2Im6bvr(;c-vF^I}vmePl>@lU9C z3+C<}1|g+gv^IZ9oT1M;TTKS_#}6wzz8Vc4s4I@qI+RG5C+6%M`pJO>jn}HRps^)P 
zS+JflsWMPifQyBe(zUSQK2&haBehPU)Oc~#Cyx*+wU_!gE`nw=zY4@?9x^h~nZ&+= z^==90d9&|$jybB}`1@#4LITw3^^|}JzDzzE&{6vb0Fqk7MciaoO_?|UjZkA&Tsn0g z=DGXz>g5SO2MNpa_c+*GjsD=3CM@TLSKHszT2mcT?!NQdH%iOrTY%m!GhyCX*o#D?kerb zx)(6aFL#eOAulf<2lHZ#-eyo3O<12p=5$x90H)_+xrE2|j$?osNjWEE&#v~-_gvK+ zk0`{eW-rTpp{9(kop%#!GmECd-9Lu+5^JrA=#BS$#)yYDz5RhCdwwifo>ak4;osHb z!m}~cO^(gj2sz}i5d|g2j#k*ZkfU2JYI_~ktKjQ@6@GHGwiwA-nQC>WhaUH@vgVmW zAV4|ZcA>GDA;b@h2r(L<_UY{}a;DnPw-(~3O3UC9#RH2?W+KX{UVUkmV{HT&KXOf& zWz$T6ah!w;A@FrV*lsAvG)f5j^ZF3(?vE_tO+;OACRo~kr4$_f5jp* zypd>ZwLi={>3rnCT0+<%wpz3q$E;r)FEhYsHuuDZ*rE2wnvR>sIw%_-7nnWCiNm znBie27`^L0*uS+Bly1P=U!Pb#a{%cbADXp61faX}>3zgr?ui#;0L2g|qc|uK?e9W7 zjNMaZW!Nw%W)(~Y1{2xNA1Ck~gJ z8=*Duqb(t`Dlcbjo12Tl<6Wo;T?`{{;mtjFZ@nXob-5vMv76Q(epn=jKa>qHN?>F( zUEO*|V9+|Q|-eSN$~w1f{()QxWG zUAY+i6FCfrO9Z9m(a`SSO0I71C!XzhImUb^?BgLU#V0VsQhsT0`2bT2fg=A*FzeVk zqw931&e!!ZWDEdkmgMSJ-|n$V?ytV@KuKM@5InMTVAG2h+5;w=zOSB6MT1N{qaogL zYG|>Ap^t#T*}HYA9Bn|M4DwwTR`GgkpJ2I@ro{~>g+DkJ6uZQZ8RCDE`m^~{yQ&zD zQk|{{?MM|E+;@(OLOFwtPUg~=Dqr4dP3S~9glNg!RXdUc*X}+7vv0is4ht_=c5HmY zu&M3;{A{}ZZ3*rWIO~QWCZP@i3wr=B9qR}@MGp&jMAiBgnl(i-wAKYz<0@3@I5Q|M z2-f9z=)8qbFUUaRwtor)zMsUt9v%V0H1j(b+%3pr6s<8s-!5o}hG#rX7uG$yeY3|w z!!Y|+EjwxYhFN{?+;tn$>TQ}zHMNhOHL-VHc<}GpFZ4E&)annbCEY{{*X@-Ev%0T; zq5kRi~t-`>`T@A!=NVXM(F8Q`;QF6UV7*RAmvlN^J$_t>`x`Kr7gEUAtW zGb?HmjPV&KtK~^-51-yD`bdY`K~xFZ_)hn|Z!cwJ)RK&+v2sTj7lV2>!UJ*ztk5>5 z0$H=S6%F@;sUQt7u_`*hPF?7Y0)a9EEmqX9H!;e;=6MJ0JeMSCqzGa7`>$WW-cc)a z0J=?Z!p3cPEXl<|s#@20t)%|-Bm+75G5hlHu80A>{x0(I*qCVLYC6a5&PCyS`Ri=f zLxZVCnsR+%F9HpFCUmX(7tudLUh^Fr41ts6}{7IUo`5*lzv4uhh6B{;16C;lSD!ZZ?x zx;+!Npd$s;DG+@x)EHN3rPwjKE|J{aZVoyl$(J22wZLEGYx_5M@9dkq^U^1fQD0r3 zes0kD_Us6Rc|%*nPE>92z14%KE>gFyTe-WP3k@u<&-k~Z0Hft4Fnhiywb(1$5z$Lr z;JMlrbhzR}G4^oaq^5cc6m#~t94={dCl^KSF0Ptdj)Y7X2z9D!T8{POZzdk-$$>u0P$ z{ibedNkzb}b_ATVJ4=E69pUA*HR8KadYezVf?Yn=-F2$Hp94~^=?}&c8X6Gwbs97lVxAN7p>g9`7#duclQ1 zx_Oiu9UX1?484J~{=UwrJhlgLA1%-NNZ-7^UfwACwh%|Zs5Wr$P;#a+6O0}8?p(l* zbO45k!+eb1*!Z}|%E?KMnt?rOvFaqHpHU*-1m<3|EHJwqe|JXjS74C)D4}T;{!a`o zeM6NN>yXJhm}kSl{RxWRmGc>`{b1fbasL==?aIkJ$iz98|I!(Aikl+@nxXfx8UuyR z-BxS`Bp@1a{v|Ns);WgQASqBUtz9_!p&J+7!MA4$Sst!LTIp%JQIuMVn>(@IQFkT6 z94g| z;fR`lmUsWA7RopQvmSh}Xtr0DhMb(27FJbNmCb>p_ykys?_8;qCE$t`L4uDa*o(Dx zNdr3SUD{kJEuLCf?fdB5O7KAK z^V0S`evhd0_em0@*kZgXNNVLGB@)=DiJ6xcGe}h4Mz1T7! zF!0)c2Dmwj0Qz0-h}c9{eLWi~+)HG`gy|C2zK$j+Ug&wwst z_Ytbz3_kb>mq`0;>!Av!IOUfsk8?w+?ABxne;YfS|BNSS1>s5cQc7}v|B)>IEQ5K{ za*IYF)Z}8~;Jzq)bMWu#{+oFuyCs`NPww4Sm9JskY+?fie&W2@t5yK9y#)|yCYNv; zFtY3%QZv=pUR(H5+^*NQ4$`TFPQ1RI#E0xwj;Ht50 zKYsJfql~R@uP9QVp}y&698#z8B)j;-FIaxT8PPM-+FxZ9OS|oUvUW3*!rN<`rI7|2 z9#<;Ul`d1t+WBbmP#=a2yN~MjH%kqH$Daw`&4&aAEw4*km?v?Z5gzbf@`=gIr`x`$ zz_TTF1quC9NYW0H5cU34pYUs8(PxiknvU?%)({eG#9TT)-Fx8LzOlpe%+`88Rf@%+ z9~8zZFkmslbwg`^tZLQ@wG$QX?6krl=QlB-1YW~To^KJ>w-e1VB7AqBreU}>Yg?|qeR1@Q(V8ldSG8ZVeT1I4=N1$csB9ru$H!enN>C>1cI;ORFgRD4QFdZd zv&R!WA2&K=;@0hM_IdHCQKoD$b>xxlsqg+?4(2l&`Ye%;eVH%YD~Oe#dSnqe%uk$g zz&p)*Gk$VJe`&+f-^F2hjx1t+qA~iQ$BJ?v8^!D5^f18xuO0^(vYry(L);H+nS8lM z-{WO6&kQmUdHzwb3w!DTY6wVx)%3q&fF~FrOE&rD4c3pxn=19xQnQD?;lNqosjR@( zE@yRbzH&N#ZVrVuM?OP`CLT`$030N)mig6| zuj4d{|NT0EGYG9pY;d03T%K!IbX)@(2NDLpMN9ne1^V28!Z>gRyLPB(Mh3_o8Cw5C zbgv{3TKwlmzHId`P>8cHH$i~V@vM7dZKijf)E1&XIROpAH%Fh zNpgf=|M#}Igb2gafb8`8x;A`Aue~KV`j}x9KvP6S`n1w%KT*W!Nh>P)4r~F~ly#-+ zPL7ox#MkyUi1)fc^BI z!alm@i;TMe{|J6Ky>V<@G)cgY=-d3FRyw6deW~tH$~7}v zBTg9JYflGKnGgW+g!WjE?f`BuaQ}Mu^G*x=DmIl!xlyYgWTZf<0UM0s7gSZi2kVd`^u2=cn2QNxc=E53lgoNJ1TD*%hf> zyv$uq#=gMb#t*b~97YQL;28A`8`7U^apj1qK3N4kjSyA9+-728dWS%d7uc0oSU4b^ z55nad9~oLkQIXiM&-bcj&J$M0jXT^JMToy1bUI&*%V^X5;1m~V? 
zJsIqbp^0mfQ};gvuLjPzc*F8jShbX>wjSCZKkX0|Ez~rM?@^tM?3CP77<^*M8rV@` zrrAKMUFr(ic4dKsyANK^<$3x7{n?ZexD*m?q_|*zUWpXw&5FbRk$e@gOdSFN z5y{B1yeAz)THp5(JlJLn|TP)`IWW;yrW;rB+foMfn|1(cfD=J=JtFC>=;}i~u zuj8Q$t@cd3BTvz{rWyW<1h=?IgGzkFDc@(B;qh65OVuv~-=g8WOx(AAF&Et#E{1bB zu17-5&%EI|qaS{mZv$}FfBW#0-miG#Qmz|f%KdNDP1_uoNclxcHy^-6`)Ah&1*`jV zn47y`h@MjOsuI4F))#r5k<~z1VQrzB(@mn#i%!{6`px;Dusf$wv`bC#2B{7aG2h)= zo|Xz>?st*aD7phx=};9x^zaG+GkC8R<`mrDjacXxGp{!-x-!b&u#ch zh$&YbgJ;y8JlrySfJzKvbB8hs_%H9Cm8ai*JF5MY^Tu}Wb#Y*7uS19CddDO+*+*sJ z=lccSt;CUw(%cojS$~7GY;q#H3h}1%Thzycl^cdzA9EC|ty$wSW~0TH6axY%(#R0> z#Y$ewJ7`}+w)yD^=8D#2Q6Sq@U3w>T(fS|K)SZ@|2Wu#L?gly;|8msuhU5da{fg7t z7xFN{t*P<%^8XVpH_3?6S9&YZIk4@|vjAYr zVlu*onawxa3}R6Zd3>xTQ~#9TAPH#aDz&U~B~;_I z+}#CYspI^u>0T9KIg9OytsrqE?{T=H4izt`_V>x_^N$_T-_QE8`|FZ&%E7#kXRg8$ zbGpo8c=}eKqDLVcV~nrkT5oY%b0jCCYh7Xof{!nGDEv`LSkX?^eG^^Zoubs zi;_BvaPzHc|Duu^bOYK{{jDn=YVD|QB6~B&_jK!iBeg@JzYhyl!eQ&BDN=}Ot@X>I z`%_5GkxI|+Ev+LZYrA+g&S{s-lKdpNt{t%r_9$7N*6r!q`J~K$k0Ggu1(pfzTh?ql zviTIs_7qJ5AEaX4vL4>UuT42Z_CQF|G>4#u>_cTPMH$jJ0lT~nGy~Er5bUFboR^9yzOn|jX zrip=!X})=y&8X%*#_dZfx#%ZPUu4GZi>C(0o}l?gu@8URag2Rq7;gH6$)qJS|0e70 z;f_QiDAH+(wI<>HTzGl3D4yT$JJ}5(fAJs$HAgEri!&mEi0| z`*?VXD>$_)t2)?Z&!f9H|4j^E+1t2Fsl^7iF+nEFetIH$aa}48i!+4bZT_xkOlZSd zpcn<fmvj^I+|9KjhuYl#)Y|S|DoDsh?+CL2|>Q7|K z3K#=H=hu{_^pBk*r&l9t5-t~__^yYr_)d!*tA7+_*N29zW@{kp} zs-Tn|R%~?y&N8f}Tz6R3sZPK{z=sZE$UCh2!(d!@oW#eOCRk0HRy@*%q;kX@ZD`1U zBIoqf#QrQ-<0;61B4F&0I5=EEHRZcHXM177^0dt0F_*c1WtNN)yiVzLHrZ^Ah-eu+ znS2VGNL&2oa$(zMu|d!Jfd zw|2-*_kR)jNF)DV;i>W4!X--LY+@zdQk0JdB2pE>A{9z$(6_nXMk;7tuMGE}B-t3w zgq9O7nA^T=U@OVTVWKjL&zw1NRb_3sWDqRO=6Gk2$wk?%HBZye)$KZR8^WJwACwyO zmvI~Zf-lM6wy%Z8u$4CTm&5P`(g@e&<97tEjT!mdPES(}2u!IxAaE(T3^JNQ%iG{a zMzmyIgFGs9;-K+uoDZ{}SkdV9WcTjgpkCj(`53Ir;jX2WW_;VEUHWcEb??&a?Rla* zkJtLSi1UBbPm$X9(ekSm6^z2`vtH6KC==65%vf5azIAgW{y}S6e&xc=;=L8^b(3Uh z+>ppEk6tJLgT&gNFFOyX3qAT5r5%ci_@7qs1F;?*#+od0Y;}*C)L+Rj&hEvj)CsiO z-&CV*u0^4Kx2`Au@b;E-P-9rI@&|(~E2qLHl&eH2 zEia;A{NoABbokKx;RdhB$K&r|uUcb|RP1#mDHbwAXBlUvw6MRtdMY{8g=7qgOO1w7 zwpilLG8b&;I6_}#6q7LcplxLxJ|i^r{FLSe9dSGDv05l6myCw63=+x*|K{x^YlE_Y z3}gx_VyxYzHvj+Tv-KThczosfI&F<7;dBe_zKoE&-){&^b0uwg9@TDNpZmb&Q&VXl zJ)D#=&0ohuvVS$(ydj?uPaR)v;X&~(kY&?kJ-*dnc6uhF&k2U;&z3zxn1#|-{{A`S z(e~~dE>85*gMr7NtQ{yLX5+n~t<%76WG>Ra%jvF(&|{!-wC5IArbDH?M*l7?-+zxl ztXAuZqzLJ$dbyBOZAN*(T|$ILCA7jJ;Qs^JrR(w=rRG-BySed2yXa}L>eFeCll4-p zAJU2lu#wp`+`R}a%fr)XuRVA<-!@&`Q!>_5#Eq3=?lK95J-JfO((d3-Pv*WT`xbd; z-V)MSMWE*qEnzhA-|)zGXz3Wprq%BNlp93MjOGR8f9XN{w>tUw*(}G&8oDL6^ zju}qUT6h2AT-KK0Fd;d3x+AWIJ{rIisESk=Vg8IQ(SMwarOQxFsvUNfCc|S*>C;(W zoqJvs>DaftJz^b*rOtHO%uix%zOAOA9?##$4@tB`v5tLAUU0LdB=U`!sCt4? 
zIFh?PoML$Iq&jQo6N)*q(adC|uja%qxEsXkhQefxVZBr8-m%*zubWU?VbO(=lX4-* z#};%ouVzZw*3^?itjriUA<2fQW=UtZ4}afs$8mgnHxxuL6?z-PM0L;FZxpK`m@Zkg zUi2S|3hWSXgf(EvqWEhrT(=R#e6q`052kXwVN!Ror5;c)`)l)#jOzXB{(65fBwOib zxo}XeC3t_|IHX3T0fD^Go*>p<*_ldERKl-^mi^W9K)9QlfEokFPzPIXwHm2&MV8dESrJxuO!d z)%o}s#`G~5SRe0^emt#v5tw`sb2d9+K4w2z4o^n;{RWNJ!UDlKCj^~xx(wosm;NV{ z?Ia>Fo->QER*?qXTgsBnlOkZQ>6nOH^?@#@Fp7c4o-S7c`%gOgtX)DV#^74Dy}i|8 zQJ>cfjk=(8sOap|+6RRaHA&BPz3n+nt6}n8Bb7tt8}7u4Y_U|<4>u_${WVkl zCO162K4N|%JyRJlPZs1I51=o%!LCrO{?pF+=97 zdX8$c?2)&|cpgKXHJkz+#$O9|;Zm&^QML}v=u6J+?A&on7xZsw#uMTMh3s3(^(z~bKz6h9tBBh^1T*IMpPK4o}CzVF^RlaiIS z9(|){ZOs6D|JW#^1<=aX4(iT)o_pvUV+@Y(MPdU}x}#|8`SG$!&BN(nO#_@hts*~7KH#3d{=oK7@VrJ@ znOj9!8nG<+pE9;+h9Vg;W^UViq5{IW6l^_{H|B#9=`sTnFZ z>j*9}x;vxMUeK+?9^Zp!{y+^F#Xq03ieg0Q!v(By>(rbc%<1=tl~lTDgC5mNZTk%u zbjQi*l~4txEgBJQnmcmjnfd9X+m6m`X7eiODzRnu;A@uj3HI&w9C0N<-zHfsyqZc* zz5{}V$9!U1v4^(zJhJR6G$)Mlk)~PN9`)-C)(zTNj%6yGq zt8dJ|eZyGFwV&cfPc3LpB}K9urGDL&vJ>&sptqhnpg>D$n=X>_p&&}6?MsB>CFX85%!;T@n-67=iGLawi^nU?u<W@`pKZecNIu@0p=3nIrG z={+gwy7x?v13M+@y{8V(bG~<16QLK!Oo|qBoha;fEkpdFG&-AiGv_QUYJ%bOT^l#g z2`nkjF_bh3;^O6>kiR5z7v!M6$3MbQ=&A@^7c$cZc$mfQ3O$=-=HdL2 z+VyLtD|!JPV&(+8%WYMrgPBca(>9)AZnyIkh7BT8JVZ5ol7G?PkGf-`*-UJo%FCmn zGkj!axXx6!VHlN^%GG@HfX!V(_N&G<$}Q0Pp5Iy|!pQ-@M802-Wx4xkM!hc%xdWTc z=@SEq+@DCIM4rp-d0YCX=3C3@ssvST`$c5ZgqS0O(+L8o2eckQ8#e6s(kYtNhss?i z$LRcQ@Sgm}zh7sYT#9@lCoqet|*>k3-Ttz}^K z)t*G2wnJ3n8I938C<^1bo!NPZzQ*^xq{ij`#j951$8l2UZB(6!rNrPpvV5l$5W(RH zu_g*};r=QTL7e>JE=M+E7_DGl4xGJcpH|6wC_G{+^Wa67I@ zqg>hs>8l9)n|b?|A#?nv-=S^?3tkG3-my0XY|-n`Sp2oEZ|u(xC5cLx69|WUR~*F- zuWA?B5#?>?uBU%C)%+W8E7rM3ueBVjL$Of1W}ZYPqcpIQ-qieC@R6o8r(jb1A_Qa6 zamvs$4g#dHMA#eqSdt@b-b%}%i_4zTCpWb|sx50&#~+K~OAdwg-=NVQvzPsOcjD}| zZ%woD*q6CyP9adG4R5_8MvRYe$#K%r_tnwhY@_Jd&uV9gAd}T42LUadIG-oK#`W=f z=Z5Z)XvX@eD8ViAoh{z4E{zn8zOB@j;Dn0aaO-KbH{uggi&O_QO+6|n#hc%)S#rWN zIZ`{WI8m08f7y`>l{PpAn0=%b$4O8Lr(p-rKMVECijC+v_K)g;+sE1`|B)=zBK7l+ zH{7QtRo`J*265XiB%i#O`q7^D3gBQ9?;;JCGTkoTao*kQm(*<>1eKP}s<`(zein&% z0a_PiCJjQ?GYFWdyF21KEsg#UZr85sk>;BTa6Osf>`pqyVFP9lIGBrvqh(v-$4_G? zbu*QBI~=9$mY?LZpG|}2TvAd**3ZRriA5q9Y3X;PvKYcL_B*u&KL%R&37cAWN*iebn{>qWxiCV(0?_f>v0x^8!?bot{$hhOWl%?MbuY3K8 zn2WVq3z~7%8WN?ir-b#+)J3XVW0mSvUEl26zL1NKPPv+s{P)z`zIsZCu*=pV+BULG zgpJ10YCX2`^?eA)(aEFuFgi8OM-r)fS14KWDE}f5_{?)yj^$gj zTGb)<-;Tx^JXM<%8}@;BG&~a1Q*m672zd6dd+?E_v;4x;k{9EGMQZaGl`hQU+(<}I zYe+N^aKB#^a`l@G0b^d-=y(9=HA~la}5Y zGo>HBr?X8i#TGwn{k4JGRx`@D| zh#d!Q8x@AA#{EVPen0j-%_UMkcme;3{fx6Z}MMjs>?U5w`A_ zGU&qHxoO2_cd{IOfUndj&W`J_dBJAI_D+gc47IuS_xQ6Z_0MJbaJ29fP29|GZcgs*?&ejd6d@*!{d(X5i5G_D;If*d zLd2<=yJ4$50K3!50pGCUkJhem)f&5F#oVW5%MroxIpDM-BQ5;}l^4XL-OWVs@HWVH z8~nM`GG%#LigfUM6U;L3a=!^ql^s0|5dDa-*&c_F{7c7^t{JEBR9K@obJ{C_VZ{sqSW zekH;y+;P(F}D5L0I7{)HQCA71KM)Nw$A06i{}$S5svw$YO9#C$e*;^48+kOYppF(ANjL|iGO+259OrmVIeof1pu{BkY` zU)n=~EbegJNfU`up)|9YL$D|>0K3vaO8pbJ=u5K~p>m422Z|;7iaKyP==$rE@j=+~ ze@gHlDMzxKEqN#vi!WSSTNj(mw(XhzrEhf034PKvUdDm zo!WFSw02lOHk!WzLLlEw3HZ{($~I??oyAzMs~x~(QEIN3Rr?XZu; zv$My-5@FAUhL(Z8ad*$-5~-9q@WODVF+t<;a;9i*Dd3&Q%4jJZ_rBn41uvfFJD7n~Oc;c|Nv| z@)rM_>2_54xEU*1QJ4?MiPZ6d;hMsbMqT{E&9y;CV9lN@~O41~DVN$0Gxzm(im#DX2n47OP;kNv(#*`pZtBrne zvnP<`dZM!KaZ4%7uh`$>snm84No@UpWW9A%m0h$wtOA0hgp`zkba$h4hjdGKNw;)^ zfOLa&*P%hWq`SMj8@|nZfA{|LIT#*>GC0rkoW0jxYp%KGeB$S{cK_U{(lqoK+0yw) zKfI|ym>xV|lYS3sMj?wgUeA}w1(b{~`vv^xH_>^jyw-2Aq>AyH=>Ko~?h4}m2==S& zbA(KN#^%=L`8xlXLAt$yxPM@P!Qn#+bQ#Y*LFV49rdQV+5oKj2m!mHCqonhh=eCa) zrh@`T>;o#txsFFY_0Cs{RDnp_3+7e^Z4>ayv|18QE$ORrX&rml${DdTTir!;Mv8`? 
z(T&oE4^@WqEPHU-*m6#o?p^G5G0nC#J94&%4ot5@WGiikD3obE19BD9wjMQ9TWft` z;yqGaV$5)q(UB7b1rE$riLX@M*LsY5kRcZV4j@(Dk+euyZ`O!k^A`{|KmXG zNzC$t_h5+{&tGYWs)~|ORQ+x_gUhBJCB>M6syzKh>4Os4D9-ZGZuF72x(>7OdqIeZGfXc%Lq-eR7~{2bHU zms=x%c@rm17xO83*zQJUyPhuCAViEKq>#W&70JK+CnSQd;h*wZgoJIyqVcRF(a$}u zKVvwRTB**zQQ-;9bmUpzK&00gK#GLDPE zOf8Vb5y{?Z;LCvK@N`r_eb~GYW)q?nGPh-(3x#GfyvT7&V`^Ss& zv9T`PVX)}nXIt{jT|CiH`pWRW{4)FEv$k*keZWI`L8@*8;OzT~Gj6B01Z6HmG zRKjF2nc3OC(a~scy#F#<(+I~)0dD3$R}W&By+yCORV4Yr~vNd7;LM71aZ{+yA?4&nKd+g zh^7n{()-d@Z>R=n;kOWl=s#cCHwc_q57h_u!nu1dB(7p)^GxjU1=?Pz6Nx!6hyj`w z=WwfNw^TiSdaTXcxz1F>CBcE0&bqLpB@|2xhA-=UBdR&kZ4*A zxJ#3nikLrG-#bqjyh9}V?Zct&_j>QKLc&iT~+`5S?}T{SQbr zbt)&%Ft`2{q4YSMzW|__0)Bbv{@Cc~%fMG^YAT@dlUNlBm)*~LA|fjD2MF8?8fv5; z4*Vh&zheNqtloMu^r6S=k^R_eL7XlAbF7K+Y;?TIpovsIzQ2)h-OYmhfyOi4z|;X* zY~L8fi`&cHk#cJXCybU!o>)JTgQ`0uo(0oFiozZq9s+Z_7p^jK6vcu8vj4|7;?ZxnPVAt+uqEe!Z4H620WUxkFn<8V z1o{j%IR7g|NWa4&mzD5d>xa3mx7IyxKlcrE^xIUvyE-&Am=opea#1LfA4zbW#FtV# z=k>gx^7@oeeguVJjnbx^YpB!9D z@r7NI!1-rQp9J51jyWZ(L-%lNwAyjm`gLnl^0a7Nlw@^fE9bzZlcsRR;`!C-L*!zw z#`Ep+!&wi1Miz&ZTL)is+o=2b-8q~I)$i#a`)v7$@yf*7njSjo$wmiAE4}DB!m=DIzMQvzG6fs=OB;{47O8Ei5?6_Hr3eD!jt#s7 zKPC}u`B3w<9kOeEl^+p$&BEP3Q&BUrD5I|d81GYa-%fVBEoI){#yrzd z4csnIl$@w&52f|8F%aMp5=fNVDwZYPK!L^jYC!xc<9!ndO~^avp6QWJf{B2eZl&^^ zaE<=&W*hGLb*xtlVDwXj{OIJd7nTPB$^$JC@H;cfD^M?*lYP8<%A0s-_iMb;LH8eERTO);t+s*JLqU~KT z`9u?P)e^RMVY32Z57V|s!JqB6L_;;GQeldhbZsoraOZ7myJ=Xe)4Rgl)Dex|%+hN) z^?}M)5pZO@!~5rN{}dHT?l7>4M-SmO52HSn&pi=v+~QR64s&Os$^Dn0rn(r6+u9{uEc;k0m%Qj(E(W#i<;UaxuNCn@k>rf7eMR3AWACyra#?Z4Wr|&jtg|0K5Diaj?WLII14D@{SRFrN6BASXFe>#${*e@hg;6KFY$o^ zhFsHjh;OSGn677Wqa`Z#c;>-^gMORRtQ)(jsRIef!a{i3`>evfp93ukxh_H?E-xS` zk?+u_eg`exujDooW}2mDP3cgJ1KxuByROLd>ha}`kF@K3JG@B#1|SN6x#gNRc%Qq>7vu)D zMX%4Wv=YqIA_O_Mh8o=tYTN1f63?huv>^@0)Y!qJ#PbR``aof=tf0m(v<##ykY%u@WCDFZ=Dl zJ_q+&A7DSSYtZ1b71_EPTqAyz$c03`fX>ixi$TxtNEq4r@Op8VCdo*S^9t|jYQTvh zlL`5Y%DVEC^YLR-y)H_f-ixxOw_&pGq9F?JANc zvll#mS9RPRfV;8??i$N`Y9Ek&O5BYxPgKz06I)Jm zR*qp9YFxMapuXUtp;})PEctXrO_RcnpyfR#yZrCy`1zNi zc`b8EE}mV`zs?f))D8#zqwUnY#=b_%S2VQUpIM(n<#X)Rs*F*KaJ^*h%t(s3yt=PC z6~a&!g%k+|gE@n3h-667XeUBrNKJRnePw^za3F$ofhc(}dSKH=EB@%n3XFWQNN0`a zyPyBJz4K;&A#zn|UGhA|=LON%g-docs@587tvYEJCf5wU6G>xEtNb#oU?W_`xxJvK zI{Xx&1i&Zu4wye)A<%U>!t6Z2sXbbVrX3JR}Eu=T3voEWT`X;O#aev zOckb<+^}m$_UJyBgsIGr`C!)8@n~T-kD^)^dtzcD!&EO|4FJJF_G)W1um^cU%nq4k z9p&;i(KLyIVgl34FX^Vs-Vmicd;N19u*1IQY7N{<3`2DHJDp8VeBXdQ0H1y$om0B+ z?{iLkbC?!#yklao)pO4CsjKDjQ^-?zBSoGW|6XJ(k`UdxA_2nV>GA?n^`kd3KUV#a z=RGkLZ>wc1O7s012lB0ZT13Q)y=aB%!Jj*-J8rHf6yhKr&&Txv{`e5}fYN<|Ncx>! z-+c40$94s9PmI>s-9JlVvLdCy@sN%4<7-~BWaUN_?5zI8+2KCj%HOdTL3X=Y#9*no zs3?jHuH(H^VhiESk}BUJcXZC4KTRw)J0$RaKA%#g;w^Hp)0BvvNo@ZfJ|g4oT7u=1z_aVEndEElqo6;JhS z9KFsEiq}mCpYZs=z(D=;(>;RV$dEFS&;ALAj;*XYFi{veo;252c*kFnd1|^hw`3U4 zp03uxFYR#PzIGKf-qam`q zXY=?QddzOKxV$Lf@^|I0E}< zt5JiSaOSxEx?USD1by~NQe@%VKAG3I&>>kXu5`Cef^0Ewr6SxZ3V%tE+oHHdbJ}f5 z9X0Hi@>z39kCVm^j80BE{43ixC@2icHT8J2r~u$_zzB;HyV#8vVKj7QL@iEvbg9mF?@5c#FWTWaQVL8T0r*KiVH!okiZO+#XKe? 
[GIT binary patch data omitted]
literal 0
HcmV?d00001

diff --git a/docs/source/images/recall_buckets.png b/docs/source/images/recall_buckets.png
new file mode 100644
index 0000000000000000000000000000000000000000..3589e4e6ea94f7ddf9afb2061c0374a7673b5338
GIT binary patch
literal 70402
[GIT binary patch data omitted: recall_buckets.png, 70402 bytes]
zh#JY3z5tP7&-PzUeFA?{KKegg!vXoCh~8d_{JlG?OiKW^beZze``79s44B4%1{X#| zks+Bnz|pzqRDu`S6_oB`WohXOa4;t&CnKcftl9nenRFNBjTfX9W181*233`xf!0gc zn-Bp^Y@|SOjqmM1kt#z!Kz1b0fG4qVzKl^e*bnOj&Mu%d)*>K_T#MQ_O_Julj0tlA zKi`1E>j9xl6uGuioAaiQ)C9`&a69_Wkuaa4HC0lnZF7 zVE#eC-Y6OIc82G#1$rU_-O$*LLQ@Xbtt?=icgtsIHFAA-%UkS|@DKMqg#ek&Qn_Th zM)K%I1$oA*)-4hw3^E47f^$_M*-s=E=nvtOjJE;=hW}Qzv$3^1A-mK7o1v6QYElv) zh>Q#U;j>iGyY9>Za14>~JL-cAA_JU97|Wj0*K&8e1yu&^T)t=45JxWnMRbw0b_kE< z9)H8hS=%IuUAX$+cD#tmg$w{HWiP0xFfhuQunH-V5u<;zqy0I)K5EJS#BL6t&G+A! zYG2(4+P6U#%27wHLm}io+n)3fQftc95ii0v2sJ4T4XJ5)iIv#GZPP81n$2FRsjsU5 z$}69o6884&OBFNT@_s%KEw!;ztok)Av)o4FV%k1i8Npssu~ad*-wr zL1S#zeu{@#u+034m=N2DyR=fr<0E$NS7oCJB2doCMz%|~kny>{-tm7t#c_ASf$O_7 zO5S)B_}|oQN2vJhEoBhV%H?9eHuK3kovz1RcN_WDs%w^Uce45F?r=qCBLZN*p?A4p zEOJf(mjT35Lx3W(VT?G6%eK6nGSrAZPrkFjo}NcicaFdu_919DeR- z;MartaD9wU;1ti{ag%KBbd|$6Q#SKZAZwlUk?&G-qAvj+_GRb=7CJsBtiW&E>tr+O zY_6_jZ%p`pi{;?zQiSJ-Sg|z-2jqKR>~yC5RZaf02vFRLW;V;qXp~x^mGi1bsU75=a~E3# zP#p(ak%W;4h>MYlNn*xb5w~MC=hjTuGR54Bi+}BrLoWW!rBX)-rEb&4mG}y#<(0&& zkgEj%JttroHG7y&6?OyRK`?}og_&n=?X7HiF}{dW9)(S%}P3ERqplwzk> zuVPvc)0h)GeqC%hfcsYxI{P=;54&LviCNc}ga!eGPMycTt;OTUyE)BnkQTe0l>)|R z6Uwa_nz9L2{rOdmgdJ~A^VYntVTfpyvtNet9)cH)R$5mOP7vLR;FLaH{vp5+Ve33L(eRVZwg`D;Lqwbm2 zbdDCxWwuMG`?!~*(A*JavPRsdOEk_sFX3m$9aB2{2zZ3PzH4&?O!0NOP}i!~W`&8Q zm|WZjq#UFx>B(|W9D~cuGbeJpH`5vb6%qDV!C*6bajD`*(OeWbM>LStTgW0jE@jV0 z4&>9?agSu8IT*A4KB_Dmz&i_U0ZSdYlVHRT2Qy)yUfPHIv^!j3oHHjF4Ivh z@I69_JsH0#Q8J}p>|78UmDN2-69I|42Cz=}lonzp+}nx1^uB;>W=8C0hTCM!b%`TC z|Lu+;F>Axuj@wMp&l5B8*|In?@^#dKE43e!bdn@IlKNFrX7*<>+8l8azZ&PAyhgmn z&(O@UVsp|~#FF7)Y30DL8!w&k6|(Mlys(ET0$O4>bm$q9W=InAvf|WSk9EGKhtyl# zqBI61cGJyQfnX_|MK-Gqj6hXHdYss!qRd6SZj>%wJs8-0Q8{9vQg29@^557FY1*vq zzB|&UXgSoJ#{HH3?%<zD$fuD5Zw?7t=nudEB9e2XQtD(BFa*;Pr;Jsar zbowt%tRJ=8d=W}bYD||*726|tg5*w&pIf7mK#~Vy(m!G$Q4(?&+XAFHu~`#M*uVpRhn(Ov_UF z&X6bQ7HBcL{RKa5YX9}NZ=}axKC?_ZLA`mL3p%Ll2L~gElUtzP%<|~+W{hYGS%L6D zUOwVs0O2U5jwSHs#7xslR`7Js%{uLc!otgKNR`e$BAaK$m1u0C#clj(-aWBT*z>mq zZ@0#7Q$5Kyv+4T}p4_Efj&=~JV^*FY8ya1gx0k;hZ-|v<|T;Swq zl%6C;Ds2gV2iLgOD>T%vN)6}hoBG818D+AA$hGtX(32l^we95PF=F$j(( z`x4xyQWs_z1{4CfbUnW+>VEvx_@$7qnD=l0W3w0B0(`G^WLsYnY>V0_h+69;jwQv^G^{C;Q>p&e=jDsa*a0SxGO8>jO6@Nx!j+ZZ_C1DAa%3E<> zqQ;ZI{TTC9)0NT9ypIZwQ=&uj+&YAPbxy@<8{ZNlRl?|9p9{FKq;8~yq;!2J*;E*K z@zw2ZD*5m$8`ZAoO^C`2zgIU_Hp290zZ)Xt(Rp51|Ct!0v+=@N#VL{du8Ey)u^#%u z9?}pN^Vd+KctT0pl7vyrO0``RooD*AE8;bx96)dOGhZpLe^;u-;Ij#QeysnKV!pn= zT=!ScGph^AyNo*gijTGW!Z#?7s6>5JlCK1^GsQPUcT=K=jYihD3-Gv3^k)^n$0C}S zmeulGh;1@`D)C?PzJI(lqWRm;dGU}kVzf*9;?KmmvDLJw?SkFsh19OV{Z0d+VU)IV zwTgF%netj2*?4yVQzP)|&0amm1>_R3v0uBP1OjY?z{^CrN9Es#G^31HfOb3Z3KdlP zLi6@*R~GJBz1@Q2$%`ISutR#*w?zRa4Tgk;`=JSrtegzx>D7?gAL8$kv)F*w})YI^|+6*ZBKOb#N?0m8p6um;iuZ;oK$_fh)SBnaeoEn*R(XvGdYpbhM|M?&4`GT zm8}%9PV_VbfzW}?riQfK$jIL|PY=1#As7!N=GpGN{ipN+8j`3_u%&jVpa=268aJ5k z;3{7V;3>+8oT0T}P%vnYtH|kn<9T>)OHX%l3BEPo;zq6+nWnw{g)|9V#MmkjIn zawYS#oOcZ+(y%tg>>E0Cqrl)R9PChu=gQEn@pbBQgDUe{H)@4J6Fm$fNezuN8(PwC zX#bSRSyvMwC{VG$wPYkEnq6{ckk@##?J)}#4;#C+`)P9j_7U>1$j3RxzuvyjXs`h% z%jz{jmxOL{h{eBS20tfyCvsFYJE*JPtGLt_^8iCcK&g-^6wV`C?xM$&l!e;z$NbUn@XzUh7A z+V$;R>�!-fVf;GJyn{5w3pw^;u+CZZz*4#i4q8#1A#iq@DWP=}59-fInV!*CV5x zUsA^DygSm`JsQ3(4Gz8+wjATI{Yq0@v54Q5^_GJBo*OCBq-_H*T<}r~V7RVV(Efx< zY=lXa?bj0fFKAl^G^GSVgu+$e9iTN?vBB3MP|ZWo0QovKKYxN6jP7_chyR7MPZxHF zJIn8YzFX(!HM#}dK6P~?%l8pCFSEvs*VD$`4;SN+6Cy@lpZf%Cp*imHf88ir@{W_F zFy;uo3%?TsT0e10J0_;TsE=rBpp{OZHBi|j6Qof*s*Bmff9V=GGd(UhX?ykd7#LrObfZy5e(w?uxy=z%+W`xa;G|8Mew z^ZpsN58u>}{y*8ir>PIT5;@?2HwtMHYsUsI=xTzpX;EjqPH4YY8}g+1Y-NOktybHg 
z65IPixtQ*_Zwo==hXw|??wO>FFB9qfPs^2XrD7{fY*NygiNB1R2g68B)f>;AVc*J0 ziEA{3u;|uhj?DdldJqbi0ZdN>fik)H(itCXwGoFX#C$&wFwZAyQ-2wy_S0$e#U=by z84PYhRYv7JgsCtXwK64kLeH~p7t+o*u;dior5=0+Yy3b+nw5sKlP9We8f0mRuPzTZ z{yJ%@{&3|j3^*v{qZI2UG-9EnFGRzK76#Vlu8?ZEZ4?(MmM^Y3m!S*#hzmO|u0`WV zo4(H5jsqiMF}$OC+og~4npN_lf15AW?R9R^HSG}#QFomso?qCnjYg;1)lw^9QL4WH zjXWZjP(X(2oHU&UM6%>lM+onfrF*|V_4{=W*M$?0Vfs+R;o>l$3qR0cH!~}x@xxKm zam6Z@)AH~~`k058uyx(NxEa~d-O;%)?D$4;KU@jWelSo>9Z8J^8-VS_zH0u2cu`PV zLL8fY+@@{{<9hUA=iZ9?edBbS+sD0Qf4efQMp7z_gQjZWg9kGi>~;U8SWEi=A< z$kQ>HEaeuYv$(WXXRi(RX|E9-VtG80#wEsgc-PR?+b=EijggXte~{?(I4e081mnFF z6#|7SGs0#T28qEISya_Qx0HuV4SrQzxVc#YBdu`gx651-d%Dir-vV9kQXHTLhp9;x z-Flc-y6=ySnZc~#>Y%IkaS?;bJi9@l0&b=jejWyX;X|+3%5_|7lAKnmIrh@cW~f5R zaXY=Oz5zgLR#2JhNG;F?2F zDLHM=7%V$3K2BY{c)j%oH`h1*{clF=9!3$KH0iiZ-s>R;Nx66a9oj;b^%omN^DOs~ zX#wHMx$AgMyq8iW38Jo;F$?=p(16kLX7EynsLczVp1XSoCz7$=FgL-()K~90*dr{~ zoc@ep?4yDNGBU(9l@Ew{m)ZCUoZncmg8_~+n3U&wr}3MFUw%o8br#Him*g}Xz^~HJ z*Vt)?@}!lk#JcuiHuh_6sX6UQzq`;0rnHIZyAKS6mTLV}*6$euTQOuY_pSb>vF@$b z69#cSezS9G&FZ;t!#;v1W@hT{xk1eXZzT$z$+dgddHnT^&lwf1w|_&=ovP>CYo0K%f}=*JbJ(TiX*&XR^!gfregnuAh3Rko;zlxtlm}Pr?zkm z(pXi$;HE6@h#Y)sNX&R=g!1YxhJC5yPVyk=ijzxu>tsV+<@9LQOQ*@4vpr}TG{g=8 ztxE1Zwo)Vl><&-%Et?SR@*4!ArKgXGHM6mY)aGoV+vU1I;PtEdwhN@&Q>C1a>8PMO z=XMdaj#KR10DJFs7uhEEvJ;B{V!=H+fK<@LZKT$86C(6LtjV4|0I?vQSkB!txN+9H zjA`|Q;GIa64Gqo^^=++W{Cx0#t~Fn-M#PmsDcF5to`p0wVhrZ<*qtjXcFVq(<*r53 z=KLgBv&HOrAcpdOFj3?u#CWPuEhiXve+%m#kiDW@v#4<4p}1rrqz8VA$wF*`6L%_ zxxqHv0W6goTapB#yZ*2o^e`5wpC9Ps+z5>F@9IDHx-#twdeAyniJoV<-@OS`tkr9p z8GP)Q8lvs0S`bvk)3tXW0>~7FFw(+6Fe)rAZ`O$8X3WxwuQnR;SExC9k6~V7cPaF; z=qxN{?y0(pK6@15tHR&-#Mp$2L)G20M9uF)bG z6Jx`@7JA+&FGBgbNQMwI=v|-vM%ESPhoXqyLhVQvD1P59ac}c|7gKwcmQ0E0@WA;O zfhGOYJ{8BH>3JTTcUsE_Z^ykHv@C&;YuN32#at*QoTb=w2Ztr^6348R6?!0@+zQ*3 zDijjwn|!DwatXjhf}E>RbXKwp63Y?`@3F<_jrJh&{PjRhmO-bnxnIy>E0e71lyX3m=(Hj=^{3g zR0_=wlVX67yl+b>D>7OTd>c>DI)vdo=Mg~6>%z3JH%t31Al7E4)Ko{>D9uG;)-EM- zKPum#tb#M15}soCXVgsR%3t(t$MvCXo4#R(CW?;Wk~v%o*tVnYmRYhCSYtwV;M+QQ zt?$c+o*c9Bm`~Nbr(#B#6qI&7M1bwDL6v;J-B9+f1O<4}l?TkJPcQLajt_Iq2K z?QlASm2$K{o?#&15;CgCu4vjH2`1+DL}`&|=@5PUi>^d-iDmGslF4 zs}=zaM}y(t^lQdjeuqYW8D1sgR{ z(;du|Tlt^E{=^)a>JYdb9}oC$M+@lB74j~9BAx2=;D$6wPv(INBqIQ=4b>_5(Rwdc zW15A_&7v{bBFB>*Z7_;y+k;Oa3?_k2|M|ljyJ9}I>lgKe|7SVY1mt z?BQD8NSV@NsZpKKF_RbVRf@)`wUWCEw#CgiB)t zg69V(hU?A*I7Q6Rg-;ncK#Z$v)z5h1s}f)L$b>q(rm!s2~pYr&Wm0=HmM|+tOw{ z|E^c`!sFt+6~DFqZt#KmWaF=$X{N*>Lx|d)B6x7W zZg^}gsCkkof)(7PT4{?fLEw#QH6naFn&Oe^VR}H%Dwviz0)q{@ctix=wOD{pugf5wUQnDcJstq@bdYu{Xg)>b>**Pm|VD+ zV{qy3r3Sr^tt_CIpUR}n(ExQFG6Cn5!8v(YXfTVX1VnEVfdR+ypR=&h(GvW8dS;sA z!Jdr%H*^AytyB8qx`Gj}r0Zt?cG~*{hDPp5|48=ujFYzT$ja*YAnUQ$vz00ZbQgB| z?Pyjbljnz?w_&xYh{sT4-9EwP)G)enxO+_@R&BuQED zA)>y<5IX1md^aO?{T+FI@HM-lpC4aeReKWVYMz|^+@QV&Q!qLb8WNav;HW@^`1Xjp z;gCDY+Ur=rm5UnzA}TK>jmHjgJw5D%3t3p=3Nl<$%u}uS5|!YmD6+VXd*NO zO`Ks8LIo^P*Jc#NZn%A+iipf4lkC7=;GQ#%xcG4 zgRY*@spRxurhD`nP2#3^X$!%4s9b!hge+E2 zKXYt;l?O%?P;4n+#uPgpAwF*rc4V{DPv}|jDswE_n(a98J@{3N+b9HiKYC1AZq!WR z!Db^{jx^S@s^ZuUSo~(r&7K}%P6s5Ip?6MFM7o+#Ey38b=bs&7qv;bU)5`$t7L=-~ zI{U&xWg~@2)8G4-wFk*s-#-;00T&wi{(8dAC2V@pke!XR^Uj-3Nc|UDXDK~sWtbxz4$|Yx_Z^eks#LO?UZfgWJ!JNbV{P?+a6Vg_T9U>BSYg` z@3XOqZ~U90wzJ=dA(i1a39E9&LtCA)d0VEQPfxR7P26k6tf|JwcC#@1*0rZLRxqHG z{>f1{8X}755PoFTV*!dm4uZo5<8W!Pc)V3~@>hr?-~1VJL2Gh=U3k~9q7J=Y$9x%e zbgf*`_h$msw||a(27g@ekcT28BD|Eu5Y5A+$D`T)yZ{qzeCB$3a^xE@AOYLCdl1%V z!>m#dXGaQo9^8ZF*j#p$Dbe$WD%mK46C*I_jhMYILC^bVX01mp4i^qO!~9tZ+%-nc zrsppa0&Db%V1EQKD5{$|Qo<#~UwnDfwBNh$f_?M~xVkpc$;nC03I#xL9FP0<59VhJ z*r`v<^5Mlsmx>kHozHWBY^zQ)A{3<8FGIAO;uc^7b-$hX37RaBUrG+dH`{9kPVyoF 
zAOtfs$Ao4Mn!Gq%6dbgihhOkD<{Fd(!p|o6aCU6l*#t{`WgX$pe75F=2#3T+FwnNN z)LrkJdFY9&+wqnEKyTo>mya&^F@XJ)lEiylGy8+K>c3#gH3CbfP!}dy^S-w{#-`. + +There are several benefits to using cuVS and GPUs for vector search, including + +#. Fast index build +#. Latency critical and high throughput search +#. Parameter tuning +#. Cost savings +#. Interoperability (build on GPU, deploy on CPU) +#. Multiple language support +#. Building blocks for composing new or accelerating existing algorithms + Useful Resources ################ @@ -13,13 +26,9 @@ Useful Resources - `Issue tracker `_: Report issues or request features. -What is cuVS? -############# - -cuVS is a library for vector search and clustering on the GPU. .. toctree:: - :maxdepth: 3 + :maxdepth: 4 :caption: Contents: build.rst diff --git a/docs/source/vector_databases_vs_vector_search.rst b/docs/source/vector_databases_vs_vector_search.rst index a1a579946..ec3a28d09 100644 --- a/docs/source/vector_databases_vs_vector_search.rst +++ b/docs/source/vector_databases_vs_vector_search.rst @@ -1,6 +1,6 @@ -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -A brief primer on vector databases and how they relate to vector search -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~ +Primer on vector search +~~~~~~~~~~~~~~~~~~~~~~~ One of the primary differences between vector database indexes and traditional database indexes is that vector search often uses approximations to trade-off accuracy of the results for speed. Because of this, while many mature databases offer mechanisms to tune their indexes and achieve better performance, vector database indexes can return completely garbage results if they aren’t tuned for a reasonable level of search quality in addition to performance tuning. This is because vector database indexes are more closely related to machine learning models than they are to traditional database indexes. @@ -65,8 +65,6 @@ Some special-purpose vector databases follow this design, such as Yahoo’s Vesp The challenge when setting hyper-parameters for globally partitioned indexes is that they need to account for the entire set of vectors, and thus the hyperparameters of the global index generally account for all of the vectors in the database, rather than any local partition. - - Of course, the two approaches outlined above can also be used together (e.g. training a global “coarse” index and then creating localized vector search indexes within each of the global indexes) but to my knowledge, no such architecture has implemented this pattern. A challenge with GPUs in vector databases today is that the resulting vector indexes are expected to fit into the memory of available GPUs for fast search. That is to say, there doesn’t exist today an efficient mechanism for offloading or swapping GPU indexes so they can be cached from disk or host memory, for example. We are working on mechanisms to do this, and to also utilize technologies like GPUDirect Storage and GPUDirect RDMA to improve the IO performance further. @@ -77,21 +75,28 @@ Configuring localized vector search indexes Since most vector databases use localized partitioning, we’ll focus on that in this document. If global partitioning becomes more widely used, we can add more details at a later date. Tiny datasets (< 100 thousand vectors) +-------------------------------------- + These datasets are very small and it’s questionable whether or not the GPU would provide any value at all. 
If the dimensionality is also relatively small (< 1024), you could just use brute-force or HNSW on the CPU and get great performance. If the dimensionality is relatively large (1536, 2048, 4096), you should consider using HNSW. If build time performance is critical, you should consider using CAGRA to build the graph and convert it to an HNSW graph for search (this capability exists today in the standalone cuVS/RAFT libraries and will soon be added to Milvus). An IVF flat index can also be a great candidate here, as it can improve the search performance over brute-force by partitioning the vector space and thus reducing the search space. You could even use FAISS or cuVS standalone if you don’t need the additional features in a fully-fledged database. Small datasets where GPU might not be needed (< 1 million vectors) +------------------------------------------------------------------ + For smaller dimensionality, such as 1024 or below, you could consider using a brute-force (aka flat) index on GPU and get very good search performance with exact results. You could also use a graph-based index like HNSW on the CPU or CAGRA on the GPU. If build time is critical, you could even build a CAGRA graph on the GPU and convert it to HNSW graph on the CPU. For larger dimensionality (1536, 2048, 4096), you will start to see lower build-time performance with HNSW for higher quality search settings, and so it becomes more clear that building a CAGRA graph can be useful instead. + Large datasets (> 1 million vectors), goal is fast index creation at the expense of search quality +-------------------------------------------------------------------------------------------------- For fast ingest where slightly lower search quality is acceptable (85% recall and above), the IVF (inverted file index) methods can be very useful, as they can be very fast to build and still have acceptable search performance. IVF-flat index will partition the vectors into some number of clusters (specified by the user as n_lists) and at search time, some number of closest clusters (defined by n_probes) will be searched with brute-force for each query vector. IVF-PQ is similar to IVF-flat with the major difference that the vectors are compressed using a lossy product quantized compression so the index can have a much smaller footprint on the GPU. In general, it’s advised to set n_lists = sqrt(n_vectors) and set n_probes to some percentage of n_lists (e.g. 1%, 2%, 4%, 8%, 16%). Because IVF-PQ is a lossy compression, a refinement step can be performed by initially increasing the number of neighbors (by some multiple factor) and using the raw vectors to compute the exact distances, ultimately reducing the neighborhoods down to size k. Even a refinement of 2x (which would query initially for k*2) can be quite effective in making up for recall lost by the PQ compression, but it does come at the expense of having to keep the raw vectors around (keeping in mind many databases store the raw vectors anyways). Large datasets (> 1 million vectors), goal is high quality search at the expense of fast index creation +------------------------------------------------------------------------------------------------------- By trading off index creation performance, an extremely high quality search model can be built. Generally, all of the vector search index types have hyperparameters that have a direct correlation with the search accuracy and so they can be cranked up to yield better recall. 
Unfortunately, this can also significantly increase the index build time and reduce the search throughput. The trick here is to find the fastest build time that can achieve the best recall with the lowest latency or highest throughput possible. From e8494c47b955cc9387207ac719ba0a2a659df4e0 Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Tue, 24 Sep 2024 10:03:16 -0400 Subject: [PATCH 09/22] Moving a few things around --- docs/source/api_docs.rst | 8 +++++++- docs/source/index.rst | 10 ++-------- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/docs/source/api_docs.rst b/docs/source/api_docs.rst index fe71ff313..8d7ae40eb 100644 --- a/docs/source/api_docs.rst +++ b/docs/source/api_docs.rst @@ -2,10 +2,16 @@ API Reference ============= .. toctree:: - :maxdepth: 1 + :maxdepth: 3 :caption: Contents: c_api.rst cpp_api.rst python_api.rst rust_api/index.rst + +Index +===== + +* :ref:`genindex` +* :ref:`search` \ No newline at end of file diff --git a/docs/source/index.rst b/docs/source/index.rst index 46092c6b6..f2d57866f 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -26,10 +26,11 @@ Useful Resources - `Issue tracker `_: Report issues or request features. +Contents +######## .. toctree:: :maxdepth: 4 - :caption: Contents: build.rst getting_started.rst @@ -37,10 +38,3 @@ Useful Resources api_docs.rst contributing.md - -Indices and tables -================== - -* :ref:`genindex` -* :ref:`modindex` -* :ref:`search` From 9d0e01251af97460077fd162bf75f87d81274cf4 Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Fri, 27 Sep 2024 14:23:13 -0400 Subject: [PATCH 10/22] More updates to index guides in docs --- docs/source/api_docs.rst | 4 - .../choosing_and_configuring_indexes.rst | 98 +++++++++++++++ docs/source/getting_started.rst | 19 ++- docs/source/indexes/bruteforce.rst | 18 +-- docs/source/indexes/cagra.rst | 17 +-- docs/source/indexes/ivfflat.rst | 26 ++-- docs/source/indexes/ivfpq.rst | 40 +++--- docs/source/integrations.rst | 53 ++------ docs/source/integrations/faiss.rst | 14 +++ docs/source/integrations/kinetica.rst | 6 + docs/source/integrations/lucene.rst | 6 + docs/source/integrations/milvus.rst | 8 ++ .../vector_databases_vs_vector_search.rst | 115 +++--------------- 13 files changed, 230 insertions(+), 194 deletions(-) create mode 100644 docs/source/choosing_and_configuring_indexes.rst create mode 100644 docs/source/integrations/faiss.rst create mode 100644 docs/source/integrations/kinetica.rst create mode 100644 docs/source/integrations/lucene.rst create mode 100644 docs/source/integrations/milvus.rst diff --git a/docs/source/api_docs.rst b/docs/source/api_docs.rst index 8d7ae40eb..f4deef506 100644 --- a/docs/source/api_docs.rst +++ b/docs/source/api_docs.rst @@ -3,15 +3,11 @@ API Reference .. 
toctree:: :maxdepth: 3 - :caption: Contents: c_api.rst cpp_api.rst python_api.rst rust_api/index.rst -Index -===== - * :ref:`genindex` * :ref:`search` \ No newline at end of file diff --git a/docs/source/choosing_and_configuring_indexes.rst b/docs/source/choosing_and_configuring_indexes.rst new file mode 100644 index 000000000..e3a0c8467 --- /dev/null +++ b/docs/source/choosing_and_configuring_indexes.rst @@ -0,0 +1,98 @@ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Primer on vector search indexes +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Vector search indexes often use approximations to trade-off accuracy of the results for speed, either through lowering latency (end-to-end single query speed) or by increating throughput (the number of query vectors that can be satisfied in a short period of time). Vector search indexes, especially ones that use approximations, are very closely related to machine learning models but they are optimized for fast search and accuracy of results. + +When the number of vectors is very small, such as less than 100 thousand vectors, it could be fast enough to use a brute-force (also known as a flat index), which returns exact results but at the expense of exhaustively searching all possible neighbors + +Objectives +========== + +This primer addresses the challenge of configuring vector search indexes, but its primary goal is to get a user up and running quickly with acceptable enough results for a good choice of index type and a small and manageable tuning knob, rather than providing a comprehensive guide to tuning each and every hyper-parameter. + +For this reason, we focus on 4 primary data sizes: + +#. Tiny datasets where GPU is likely not needed (< 100 thousand vectors) +#. Small datasets where GPU might not be needed (< 1 million vectors) +#. Large datasets (> 1 million vectors), goal is fast index creation at the expense of search quality +#. Large datasets where high quality is preferred at the expense of fast index creation + +Like other machine learning algorithms, vector search indexes generally have a training step – which means building the index – and an inference – or search step. The hyper-parameters also tend to be broken down into build and search parameters. + +While not always the case, a general trend is often observed where the search speed decreases as the quality increases. This also tends to be the case with the index build performance, though different algorithms have different relationships between build time, quality, and search time. It’s important to understand that there’s no free lunch so there will always be trade-offs for each index type. + +Definition of quality +===================== + +What do we mean when we say quality of an index? In machine learning terminology, we measure this using recall, which is sometimes used interchangeably to mean accuracy, even though the two are slightly different measures. Recall, when used in vector search, essentially means “out of all of my results, which results would have been included in the exact results?” In vector search, the objective is to find some number of vectors that are closest to a given query vector so recall tends to be more relaxed than accuracy, discriminating only on set inclusion, rather than on exact ordered list matching, which would be closer to an accuracy measure. 
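+
+To make this concrete, recall can be estimated by comparing the neighbor ids returned by an approximate index against the ids returned by an exact (brute-force) search over the same queries. The helper below is an illustrative sketch only, not part of the cuVS API:
+
+.. code-block:: python
+
+    def recall_at_k(approx_ids, exact_ids):
+        # Both arguments hold one row of k neighbor ids per query vector.
+        # Ordering inside a row is ignored: recall only measures set inclusion.
+        total = 0
+        found = 0
+        for approx_row, exact_row in zip(approx_ids, exact_ids):
+            total += len(exact_row)
+            found += len(set(approx_row) & set(exact_row))
+        return found / total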
+ +Choosing vector search indexes +============================== + +Many vector search algorithms improve scalability while reducing the number of distances by partitioning the vector space into smaller pieces, often through the use of clustering, hashing, trees, and other techniques. Another popular technique is to reduce the width or dimensionality of the space in order to decrease the cost of computing each distance. + +Tiny datasets (< 100 thousand vectors) +-------------------------------------- + +These datasets are very small and it’s questionable whether or not the GPU would provide any value at all. If the dimensionality is also relatively small (< 1024), you could just use brute-force or HNSW on the CPU and get great performance. If the dimensionality is relatively large (1536, 2048, 4096), you should consider using HNSW. If build time performance is critical, you should consider using CAGRA to build the graph and convert it to an HNSW graph for search (this capability exists today in the standalone cuVS/RAFT libraries and will soon be added to Milvus). An IVF flat index can also be a great candidate here, as it can improve the search performance over brute-force by partitioning the vector space and thus reducing the search space. + +Small datasets where GPU might not be needed (< 1 million vectors) +------------------------------------------------------------------ + +For smaller dimensionality, such as 1024 or below, you could consider using a brute-force (aka flat) index on GPU and get very good search performance with exact results. You could also use a graph-based index like HNSW on the CPU or CAGRA on the GPU. If build time is critical, you could even build a CAGRA graph on the GPU and convert it to HNSW graph on the CPU. + +For larger dimensionality (1536, 2048, 4096), you will start to see lower build-time performance with HNSW for higher quality search settings, and so it becomes more clear that building a CAGRA graph can be useful instead. + +Large datasets (> 1 million vectors), goal is fast index creation at the expense of search quality +-------------------------------------------------------------------------------------------------- + +For fast ingest where slightly lower search quality is acceptable (85% recall and above), the IVF (inverted file index) methods can be very useful, as they can be very fast to build and still have acceptable search performance. IVF-flat index will partition the vectors into some number of clusters (specified by the user as n_lists) and at search time, some number of closest clusters (defined by n_probes) will be searched with brute-force for each query vector. + +IVF-PQ is similar to IVF-flat with the major difference that the vectors are compressed using a lossy product quantized compression so the index can have a much smaller footprint on the GPU. In general, it’s advised to set n_lists = sqrt(n_vectors) and set n_probes to some percentage of n_lists (e.g. 1%, 2%, 4%, 8%, 16%). Because IVF-PQ is a lossy compression, a refinement step can be performed by initially increasing the number of neighbors (by some multiple factor) and using the raw vectors to compute the exact distances, ultimately reducing the neighborhoods down to size k. Even a refinement of 2x (which would query initially for k*2) can be quite effective in making up for recall lost by the PQ compression, but it does come at the expense of having to keep the raw vectors around (keeping in mind many databases store the raw vectors anyways). 
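+
+As a rough starting point, these rules of thumb can be turned into concrete numbers. The snippet below only illustrates the heuristics described above; the variable names mirror the parameter names used in this guide rather than any specific cuVS function signature:
+
+.. code-block:: python
+
+    from math import sqrt
+
+    n_vectors = 10_000_000                   # vectors in the (local) index
+    n_lists = int(sqrt(n_vectors))           # sqrt(n_vectors) heuristic, ~3162 lists
+    n_probes = max(1, int(0.02 * n_lists))   # probe ~2% of the lists per query
+
+    # For IVF-PQ, a 2x refinement queries for 2*k candidates and then re-ranks
+    # them with exact distances computed from the raw (unquantized) vectors.
+    k = 10
+    refine_ratio = 2
+    n_candidates = k * refine_ratio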
+ +Large datasets (> 1 million vectors), goal is high quality search at the expense of fast index creation +------------------------------------------------------------------------------------------------------- + +By trading off index creation performance, an extremely high quality search model can be built. Generally, all of the vector search index types have hyperparameters that have a direct correlation with the search accuracy and so they can be cranked up to yield better recall. Unfortunately, this can also significantly increase the index build time and reduce the search throughput. The trick here is to find the fastest build time that can achieve the best recall with the lowest latency or highest throughput possible. + +As for suggested index types, graph-based algorithms like HNSW and CAGRA tend to scale very well to larger datasets while having superior search performance with respect to quality. The challenge is that graph-based indexes require learning a graph and so, as the subtitle of this section suggests, have a tendency to be slower to build than other options. Using the CAGRA algorithm on the GPU can reduce the build time significantly over HNSW, while also having a superior throughput (and lower latency) than searching on the CPU. Currently, the downside to using CAGRA on the GPU is that it requires both the graph and the raw vectors to fit into GPU memory. A middle-ground can be reached by building a CAGRA graph on the GPU and converting it to an HNSW for high quality (and moderately fast) search on the CPU. + + +Tuning and hyperparameter optimization +====================================== + +Unfortunately, for large datasets, doing a hyper-parameter optimization on the whole dataset is not always feasible. It is possible, however, to perform a hyper-parameter optimization on the smaller subsets and find reasonably acceptable parameters that should generalize fairly well to the entire dataset. Generally this hyper-parameter optimization will require computing a ground truth on the subset with an exact method like brute-force and then using it to evaluate several searches on randomly sampled vectors. + +Full hyper-parameter optimization may also not always be necessary- for example, once you have built a ground truth dataset on a subset, many times you can start by building an index with the default build parameters and then playing around with different search parameters until you get the desired quality and search performance. For massive indexes that might be multiple terabytes, you could also take this subsampling of, say, 10M vectors, train an index and then tune the search parameters from there. While there might be a small margin of error, the chosen build/search parameters should generalize fairly well for the databases that build locally partitioned indexes. + + +Summary of vector search index types +==================================== + +.. list-table:: + :widths: 25 25 50 + :header-rows: 1 + + * - Name + - Trade-offs + - Best to use with... + * - Brute-force (aka flat) + - Exact search but requires exhaustive distance computations + - Tiny datasets (< 100k vectors) + * - IVF-Flat + - Partitions the vector space to reduce distance computations for brute-force search at the expense of recall + - Small datasets (<1M vectors) or larger datasets (>1M vectors) where fast index build time is prioritized over quality. 
+ * - IVF-PQ + - Adds product quantization to IVF-Flat to achieve scale at the expense of recall + - Large datasets (>>1M vectors) where fast index build is prioritized over quality + * - HNSW + - Significantly reduces distance computations at the expense of longer build times + - Small datasets (<1M vectors) or large datasets (>1M vectors) where quality and speed of search are prioritized over index build times + * - CAGRA + - Significantly reduces distance computations at the expense of longer build times (though build times improve over HNSW) + - Large datasets (>>1M vectors) where quality and speed of search are prioritized over index build times but index build times are still important. + * - CAGRA build +HNSW search + - (coming soon to Milvus) + - Significantly reduces distance computations and improves build times at the expense of higher search latency / lower throughput. + Large datasets (>>1M vectors) where index build times and quality of search is important but GPU resources are limited and latency of search is not. diff --git a/docs/source/getting_started.rst b/docs/source/getting_started.rst index 902f2ca81..632ccd391 100644 --- a/docs/source/getting_started.rst +++ b/docs/source/getting_started.rst @@ -4,7 +4,9 @@ Getting Started - `New to vector search?`_ - * :doc:`Primer on vector search ` + * :doc:`Primer on vector search ` + + * :doc:`Vector search indexes vs vector databases ` * :doc:`Index tuning guide ` @@ -41,7 +43,7 @@ Getting Started New to vector search? ===================== -If you are unfamiliar with the basics of vector search or how vector search differs from vector databases, then :doc:`this primer on vector search guide ` should provide some good insight. As outlined in the primer, vector search as used in vector databases is often closer to machine learning than to traditional databases. This means that while traditional databases can often be slow without any performance tuning, they will usually still yield the correct results. Unfortunately, vector search indexes, like other machine learning models, can yield garbage results of not tuned correctly. +If you are unfamiliar with the basics of vector search or how vector search differs from vector databases, then :doc:`this primer on vector search guide ` should provide some good insight. Another good resource for the uninitiated is our :doc:`vector databases vs vector search ` guide. As outlined in the primer, vector search as used in vector databases is often closer to machine learning than to traditional databases. This means that while traditional databases can often be slow without any performance tuning, they will usually still yield the correct results. Unfortunately, vector search indexes, like other machine learning models, can yield garbage results of not tuned correctly. Fortunately, this opens up the whole world of hyperparamer optimization to improve vector search performance and quality. Please see our :doc:`index tuning guide ` for more information. @@ -103,3 +105,16 @@ Get involved ------------ We always welcome patches for new features and bug fixes. Please read our `contributing guide `_ for more information on contributing patches to cuVS. + + + +.. 
toctree:: + :hidden: + + choosing_and_configuring_indexes.rst + vector_databases_vs_vector_search.rst + tuning_guide.rst + comparing_indexes.rst + indexes/indexes.rst + api_basics.rst + api_interoperability.rst \ No newline at end of file diff --git a/docs/source/indexes/bruteforce.rst b/docs/source/indexes/bruteforce.rst index b43e532cb..97b4b85d5 100644 --- a/docs/source/indexes/bruteforce.rst +++ b/docs/source/indexes/bruteforce.rst @@ -3,7 +3,7 @@ Brute-force Brute-force, or flat index, is the most simple index type, as it ultimately boils down to an exhaustive matrix multiplication. -While it scales with O(N^2*D), brute-force can be a great choice when +While it scales with :math:`O(N^2*D)`, brute-force can be a great choice when 1. exact nearest neighbors are required, and 2. when the number of vectors is relatively small (a few thousand to a few million) @@ -12,8 +12,7 @@ Brute-force can also be a good choice for heavily filtered queries where other a when filtering out 90%-95% of the vectors from a search, the IVF methods could struggle to return anything at all with smaller number of probes and graph-based algorithms with limited hash table memory could end up skipping over important unfiltered entries. - -[ `C API <../c_api.rst>` | `C++ API <../cpp_api.rst` | `Python API <../python_api.rst` | `Rust API <../rust_api/index.rst` ] +[ :doc:`C API <../c_api/neighbors_bruteforce_c>` | :doc:`C++ API <../cpp_api/neighbors_bruteforce>` | :doc:`Python API <../python_api/neighbors_bruteforce>` | :doc:`Rust API <../rust_api/index>` ] Filtering considerations ------------------------ @@ -22,9 +21,9 @@ Because it is exhaustive, brute-force can quickly become the slowest, albeit mos when the number of vectors in an index are very large, brute-force can still be used to search vectors efficiently with a filter. This is especially true for cases where the filter is excluding 90%-99% of the vectors in the index where the partitioning - inherent in other approximate algorithms would simply not include expected vectors in the results. In the case of pre-filtered - brute-force, the computation is inverted so distances are only computed between vectors that pass the filter, significantly reducing - the amount of computation required. +inherent in other approximate algorithms would simply not include expected vectors in the results. In the case of pre-filtered +brute-force, the computation is inverted so distances are only computed between vectors that pass the filter, significantly reducing +the amount of computation required. Configuration parameters ------------------------ @@ -52,14 +51,15 @@ to differ from ground truth. This is not often a problem in practice and can usu Memory footprint ---------------- -`precision` is the number of bytes in each element of each vector (e.g. 32-bit = 4-bytes) +:math:`precision` is the number of bytes in each element of each vector (e.g. 32-bit = 4-bytes) Index footprint ~~~~~~~~~~~~~~~ -Raw vectors: :math:`n\_vectors * n\_dimensions * precision` -Vector norms (for distances which require them): :math:`n\_vectors * precision` +Raw vectors: :math:`n_vectors * n_dimensions * precision` + +Vector norms (for distances which require them): :math:`n_vectors * precision` Search footprint ~~~~~~~~~~~~~~~~ diff --git a/docs/source/indexes/cagra.rst b/docs/source/indexes/cagra.rst index 15a418703..de8821e74 100644 --- a/docs/source/indexes/cagra.rst +++ b/docs/source/indexes/cagra.rst @@ -13,16 +13,14 @@ I-force could be used to construct the initial kNN graph. 
This would yield the m we find that in practice the kNN graph does not need to be very accurate since the pruning step helps to boost the overall recall of the index. cuVS provides IVF-PQ and NN-Descent strategies for building the initial kNN graph and these can be selected in index params object during index construction. +[ :doc:`C API <../c_api/neighbors_cagra_c>` | :doc:`C++ API <../cpp_api/neighbors_cagra>` | :doc:`Python API <../python_api/neighbors_cagra>` | :doc:`Rust API <../rust_api/index>` ] + Interoperability with HNSW -------------------------- cuVS provides the capability to convert a CAGRA graph to an HNSW graph, which enables the GPU to be used only for building the index while the CPU can be leveraged for search. -# TODO: Add code example for this conversion - -[ :ref:`C API <../c_api.rst>` | :ref:`C++ API <../cpp_api.rst>` | :ref:`Python API <../python_api.rst>` | :ref:`Rust API <../rust_api/index.rst>` ] - Filtering considerations ------------------------ @@ -110,13 +108,14 @@ IVFPQ or NN-DESCENT can be used to build the graph (additions to the peak memory Dataset on device (graph on host): ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Index memory footprint (device): :math: `n\_index\_vectors * n\_dims * sizeof(T)` -Index memory footprint (host): :math: `graph\_degree * n\_index\_vectors * sizeof(T)`` +Index memory footprint (device): :math:`n_index_vectors * n_dims * sizeof(T)` + +Index memory footprint (host): :math:`graph_degree * n_index_vectors * sizeof(T)`` Dataset on host (graph on host): ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Index memory footprint (host): :math: `n\_index\_vectors * n\_dims * sizeof(T) + graph\_degree * n\_index\_vectors * sizeof(T)` +Index memory footprint (host): :math:`n_index_vectors * n_dims * sizeof(T) + graph_degree * n_index_vectors * sizeof(T)` Build peak memory usage: ~~~~~~~~~~~~~~~~~~~~~~~~ @@ -124,7 +123,7 @@ Build peak memory usage: When built using NN-descent / IVF-PQ, the build process consists of two phases: (1) building an initial/(intermediate) graph and then (2) optimizing the graph. Key input parameters are n_vectors, intermediate_graph_degree, graph_degree. The memory usage in the first phase (building) depends on the chosen method. The biggest allocation is the graph (n_vectors*intermediate_graph_degree), but it’s stored in the host memory. Usually, the second phase (optimize) uses the most device memory. The peak memory usage is achieved during the pruning step (graph_core.cuh/optimize) -Optimize: formula for peak memory usage (device): :math: `n\_vectors * (4 + (sizeof(IdxT) + 1) * intermediate\_degree)`` +Optimize: formula for peak memory usage (device): :math:`n_vectors * (4 + (sizeof(IdxT) + 1) * intermediate_degree)`` Build with out-of-core IVF-PQ peak memory usage: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -134,6 +133,7 @@ Out-of-core CAGA build consists of IVF-PQ build, IVF-PQ search, CAGRA optimizati IVF-PQ Build: .. math:: + n_vectors / train_set_ratio * dim * sizeof(float) // trainset, may be in managed mem + n_vectors / train_set_ratio * sizeof(uint32_t) // labels, may be in managed mem + n_clusters * n_dim * sizeof(float) // cluster centers @@ -141,6 +141,7 @@ IVF-PQ Build: IVF-PQ Search (max batch size 1024 vectors on device at a time): .. 
math:: + [n_vectors * (pq_dim * pq_bits / 8 + sizeof(int64_t)) + O(n_clusters)] + [batch_size * n_dim * sizeof(float)] + [batch_size * intermediate_degree * sizeof(uint32_t)] + [batch_size * intermediate_degree * sizeof(float)] diff --git a/docs/source/indexes/ivfflat.rst b/docs/source/indexes/ivfflat.rst index c206758b2..76b2f815f 100644 --- a/docs/source/indexes/ivfflat.rst +++ b/docs/source/indexes/ivfflat.rst @@ -14,11 +14,14 @@ IVF-Flat tends to be a great choice when in the index, and 2. exact recall is not needed. as with the other index types, the tuning parameters are used to trade-off recall for search latency / throughput. -[ C API | C++ API | Python API | Rust API ] +[ :doc:`C API <../c_api/neighbors_ivf_flat_c>` | :doc:`C++ API <../cpp_api/neighbors_ivf_flat>` | :doc:`Python API <../python_api/neighbors_ivf_flat>` | :doc:`Rust API <../rust_api/index>` ] Filtering considerations ------------------------ +IVF methods only apply filters to the lists which are probed for each query point. As a result, the results of a filtered query will likely differ signficiantly from the results of a filtering applid to an exact method like brute-force. For example. imagine you have 3 IVF lists each containing 2 vectors and you perform a query against only the closest 2 lists but you filter out all but 1 element. If that remaining element happens to be in one of the lists which was not proved, it will not be considered at all in the search results. It's important to consider this when using any of the IVF methods in your applications. + + Configuration parameters ------------------------ @@ -74,8 +77,8 @@ assumption that the number of lists, and thus the max size of the data in the in might not matter. For example, most vector databases build many smaller physical approximate nearest neighbors indexes, each from fixed-size or maximum-sized immutable segments and so the number of lists can be tuned based on the number of vectors in the indexes. -Empirically, we've found `sqrt(n_index_vectors)` to be a good starting point for the `n_lists` hyper-parameter. Remember, having more -lists means less points to search within each list, but it could also mean more `n_probes` are needed at search time to reach an acceptable +Empirically, we've found :math:`\sqrt{n_index_vectors}` to be a good starting point for the :math:`n_lists` hyper-parameter. Remember, having more +lists means less points to search within each list, but it could also mean more :math:`n_probes` are needed at search time to reach an acceptable recall. @@ -83,7 +86,7 @@ Memory footprint ---------------- Each cluster is padded to at least 32 vectors (but potentially up to 1024). Assuming uniform random distribution of vectors/list, we would have -:math:`cluster\_overhead = (conservative\_memory\_allocation ? 16 : 512 ) * dim * sizeof(T)` +:math:`cluster\_overhead = (conservative\_memory\_allocation ? 16 : 512 ) * dim * sizeof_{float})` Note that each cluster is allocated as a separate allocation. If we use a `cuda_memory_resource`, that would grab memory in 1 MiB chunks, so on average we might have 0.5 MiB overhead per cluster. If we us 10s of thousands of clusters, it becomes essential to use pool allocator to avoid this overhead. @@ -95,14 +98,19 @@ Index (device memory): .. 
math:: - n_vectors * n_dimensions * sizeof(T) + // interleaved form - n_vectors * sizeof(int_type) + // list indices - n_clusters * n_dimensions * sizeof(T) + // cluster centers - n_clusters * cluster_overhead` + n\_vectors * n\_dimensions * sizeof(T) + + + n\_vectors * sizeof(int_type) + + + n\_clusters * n\_dimensions * sizeof(T) + + + n\_clusters * cluster_overhead` Peak device memory usage for index build: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -:math:`workspace = min(1GB, n\_queries * [(n\_lists + 1 + n\_probes*(k+1))*sizeof(T) + n\_probes*k*sizeof(Idx)])` + +:math:`workspace = min(1GB, n\_queries * [(n\_lists + 1 + n\_probes * (k + 1)) * sizeof_{float}) + n\_probes * k * sizeof_{idx}])` + :math:`index\_size + workspace` diff --git a/docs/source/indexes/ivfpq.rst b/docs/source/indexes/ivfpq.rst index 112607aec..e4bd81395 100644 --- a/docs/source/indexes/ivfpq.rst +++ b/docs/source/indexes/ivfpq.rst @@ -9,7 +9,7 @@ Often a strategy called refinement reranking is employed to make up for the lost `k` than desired and performing a reordering and reduction to `k` based on the distances from the unquantized vectors. Unfortunately, this does mean that the unquantized raw vectors need to be available and often this can be done efficiently using multiple CPU threads. -[ C API | C++ API | Python API | Rust API ] +[ :doc:`C API <../c_api/neighbors_ivf_pq_c>` | :doc:`C++ API <../cpp_api/neighbors_ivf_pq>` | :doc:`Python API <../python_api/neighbors_ivf_pq>` | :doc:`Rust API <../rust_api/index>` ] Configuration parameters @@ -34,7 +34,7 @@ Build parameters * - kmeans_trainset_fraction - 0.5 - The fraction of training data to use for iterative k-means building - - pq_bits + * - pq_bits - 8 - The bit length of each vector element after compressing with PQ. Possible values are any integer between 4 and 8. * - pq_dim @@ -93,38 +93,44 @@ Memory footprint Index (device memory): ~~~~~~~~~~~~~~~~~~~~~~ -Simple approximate formula: :math: `n\_vectors * (pq\_dim * pq\_bits / 8 + sizeof(IdxT)) + O(n\_clusters)` +Simple approximate formula: :math:`n\_vectors * (pq\_dim * \frac{pq\_bits}{8} + sizeof_{idx}) + n\_clusters` The IVF lists end up being represented by a sparse data structure that stores the pointers to each list, an indices array that contains the indexes of each vector in each list, and an array with the encoded (and interleaved) data for each list. -IVF list pointers: :math: `n_clusters * sizeof(uint32_t)*` -Indices: :math: `n\_vectors * sizeof(IdxT)`` -Encoded data (interleaved): :math: `n\_vectors * pq\_dim * pq\_bits / 8` -Codebooks: -.. 
math:: - 4 * pq_dim * pq_len * 2^pq_bits // per-subspace (default) - 4 * n_clusters * pq_len * 2^pq_bits // per-cluster +IVF list pointers: :math:`n\_clusters * sizeof_{uint32_t}` + +Indices: :math:`n\_vectors * sizeof_{idx}`` + +Encoded data (interleaved): :math:`n\_vectors * pq\_dim * \frac{pq\_bits}{8}` + +Per subspace method: :math:`4 * pq\_dim * pq\_len * 2^pq\_bits` + +Per cluster method: :math:`4 * n\_clusters * pq\_len * 2^pq\_bits` -Extras: :math: `n\_clusters * (20 + 8 * dim)` +Extras: :math:`n\_clusters * (20 + 8 * dim)` Index (host memory): ~~~~~~~~~~~~~~~~~~~~ -When refinement is used with the dataset on host, the original raw vectors are needed: :math: `n\_vectors * n\_dims * sizeof(T)` +When refinement is used with the dataset on host, the original raw vectors are needed: :math:`n\_vectors * dims * sizeof_{Tloat}` Search peak memory usage (device); ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Total usage: :math: `Index + Queries + Output indices + Output distances + workspace` -Workspace size is not trivial, a heuristic controls the batch size to make sure the workspace fits the resource::get_workspace_free_bytes(res). +Total usage: :math:`index + queries + output\_indices + output\_distances + workspace` + +Workspace size is not trivial, a heuristic controls the batch size to make sure the workspace fits the `raft::resource::get_workspace_free_bytes(res)``. Build peak memory usage (device): ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. math:: - n_vectors / trainset_ratio * dim * sizeof(float) // trainset, may be in managed mem - + n_vectors / trainset_ratio * sizeof(uint32_t) // labels, may be in managed mem - + n_clusters * dim * sizeof(float) // cluster centers + + \frac{n\_vectors}{trainset\_ratio * dims * sizeof_{float}} + + + \frac{n\_vectors}{trainset\_ratio * sizeof_{uint32_t}} + + + n\_clusters * dim * sizeof_{float} Note, if there’s not enough space left in the workspace memory resource, IVF-PQ build automatically switches to the managed memory for the training set and labels. diff --git a/docs/source/integrations.rst b/docs/source/integrations.rst index 19d72fd90..760892a98 100644 --- a/docs/source/integrations.rst +++ b/docs/source/integrations.rst @@ -1,50 +1,13 @@ +============ Integrations ============ -Aside from using cuVS directly, it can be consumed through a number of sdk and vector database integrations. - -- `FAISS`_ -- `Milvus`_ -- `Lucene`_ -- `Kinetica`_ - - -FAISS ------ - -FAISS v1.8 provides a special conda package that enables a RAFT backend for the Flat, IVF-Flat and IVF-PQ indexes on the GPU. Like the classical FAISS GPU indexes, the RAFT backend also enables interoperability between FAISS CPU indexes, allowing an index to be trained on GPU, searched on CPU, and vice versa. - -The RAFT backend can be enabled by building FAISS from source with the `FAISS_USE_RAFT` cmake flag enabled and setting the `use_raft` configuration option for the RAFT-enabled GPU indexes. - -A pre-compiled conda package can also be installed using the following command: - -.. code-block:: bash - - conda install -c conda-forge -c pytorch -c rapidsai -c nvidia -c "nvidia/label/cuda-11.8.0" faiss-gpu-raft - -The next release of FAISS will feature cuVS support as we continue to migrate the vector search algorithms from RAFT to cuVS. - -Milvus ------- - -In version 2.3, Milvus released support for IVF-Flat and IVF-PQ indexes on the GPU through RAFT. Version 2.4 adds support for brute-force and the graph-based CAGRA index on the GPU. 
Please refer to the `Milvus documentation `_ to install Milvus with GPU support. - -The GPU indexes can be enabled by using the index types prefixed with `GPU_`, as outlined in the `Milvus index build guide `_. - -Milvus will be migrating their GPU support from RAFT to cuVS as we continue to move the vector search algorithms out of RAFT and into cuVS. - - -Lucene ------- - -An experimental Lucene connector for cuVS enables GPU-accelerated vector search indexes through Lucene. Initial benchmarks are showing that this connector can drastically improve the performance of both indexing and search in Lucene. This connector will continue to be improved over time and any interested developers are encouraged to contribute. - -Install and evaluate the `lucene-cuvs` connector on `Github `_. - - -Kinetica --------- +Aside from using cuVS standalone, it can be consumed through a number of sdk and vector database integrations. -Starting with release 7.2, Kinetica supports the graph-based the CAGRA algorithm from RAFT. Kinetica will continue to improve its support over coming versions, while also migrating to cuVS as we work to move the vector search algorithms out of RAFT and into cuVS. +.. toctree:: + :maxdepth: 4 -Kinetica currently offers the ability to create a CAGRA index in a SQL `CREATE_TABLE` statement, as outlined in their `vector search indexing docs `_. Kinetica is not open source, but the RAFT indexes can be enabled in the developer edition, which can be installed `here `_. + integrations/faiss.rst + integrations/milvus.rst + integrations/lucene.rst + integrations/kinetica.rst diff --git a/docs/source/integrations/faiss.rst b/docs/source/integrations/faiss.rst new file mode 100644 index 000000000..bf8be8225 --- /dev/null +++ b/docs/source/integrations/faiss.rst @@ -0,0 +1,14 @@ +FAISS +----- + +FAISS v1.8 provides a special conda package that enables a RAFT backend for the Flat, IVF-Flat and IVF-PQ indexes on the GPU. Like the classical FAISS GPU indexes, the RAFT backend also enables interoperability between FAISS CPU indexes, allowing an index to be trained on GPU, searched on CPU, and vice versa. + +The RAFT backend can be enabled by building FAISS from source with the `FAISS_USE_RAFT` cmake flag enabled and setting the `use_raft` configuration option for the RAFT-enabled GPU indexes. + +A pre-compiled conda package can also be installed using the following command: + +.. code-block:: bash + + conda install -c conda-forge -c pytorch -c rapidsai -c nvidia -c "nvidia/label/cuda-11.8.0" faiss-gpu-raft + +The next release of FAISS will feature cuVS support as we continue to migrate the vector search algorithms from RAFT to cuVS. diff --git a/docs/source/integrations/kinetica.rst b/docs/source/integrations/kinetica.rst new file mode 100644 index 000000000..e74cfe82f --- /dev/null +++ b/docs/source/integrations/kinetica.rst @@ -0,0 +1,6 @@ +Kinetica +-------- + +Starting with release 7.2, Kinetica supports the graph-based the CAGRA algorithm from RAFT. Kinetica will continue to improve its support over coming versions, while also migrating to cuVS as we work to move the vector search algorithms out of RAFT and into cuVS. + +Kinetica currently offers the ability to create a CAGRA index in a SQL `CREATE_TABLE` statement, as outlined in their `vector search indexing docs `_. Kinetica is not open source, but the RAFT indexes can be enabled in the developer edition, which can be installed `here `_. 
diff --git a/docs/source/integrations/lucene.rst b/docs/source/integrations/lucene.rst new file mode 100644 index 000000000..d20052545 --- /dev/null +++ b/docs/source/integrations/lucene.rst @@ -0,0 +1,6 @@ +Lucene +------ + +An experimental Lucene connector for cuVS enables GPU-accelerated vector search indexes through Lucene. Initial benchmarks are showing that this connector can drastically improve the performance of both indexing and search in Lucene. This connector will continue to be improved over time and any interested developers are encouraged to contribute. + +Install and evaluate the `lucene-cuvs` connector on `Github `_. diff --git a/docs/source/integrations/milvus.rst b/docs/source/integrations/milvus.rst new file mode 100644 index 000000000..4139cca52 --- /dev/null +++ b/docs/source/integrations/milvus.rst @@ -0,0 +1,8 @@ +Milvus +------ + +In version 2.3, Milvus released support for IVF-Flat and IVF-PQ indexes on the GPU through RAFT. Version 2.4 adds support for brute-force and the graph-based CAGRA index on the GPU. Please refer to the `Milvus documentation `_ to install Milvus with GPU support. + +The GPU indexes can be enabled by using the index types prefixed with `GPU_`, as outlined in the `Milvus index build guide `_. + +Milvus will be migrating their GPU support from RAFT to cuVS as we continue to move the vector search algorithms out of RAFT and into cuVS. diff --git a/docs/source/vector_databases_vs_vector_search.rst b/docs/source/vector_databases_vs_vector_search.rst index ec3a28d09..446737c11 100644 --- a/docs/source/vector_databases_vs_vector_search.rst +++ b/docs/source/vector_databases_vs_vector_search.rst @@ -1,39 +1,17 @@ -~~~~~~~~~~~~~~~~~~~~~~~ -Primer on vector search -~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Vector search indexes vs vector databases +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -One of the primary differences between vector database indexes and traditional database indexes is that vector search often uses approximations to trade-off accuracy of the results for speed. Because of this, while many mature databases offer mechanisms to tune their indexes and achieve better performance, vector database indexes can return completely garbage results if they aren’t tuned for a reasonable level of search quality in addition to performance tuning. This is because vector database indexes are more closely related to machine learning models than they are to traditional database indexes. - -Of course, if the number of vectors is very small, such as less than 100 thousand vectors, it could be fast enough to use a brute-force (also known as a flat index), which exhaustively searches all possible neighbors. - -Objectives -========== - -This primer addresses the challenge of configuring vector database indexes, but its primary goal is to get a user up and running quickly with acceptable enough results for a good choice of index type and a small and manageable tuning knob, rather than providing a comprehensive guide to tuning each and every hyper-parameter. - -For this reason, we focus on 4 primary data sizes: - -#. Tiny datasets (< 100 thousand vectors) -#. Small datasets where GPU might not be needed (< 1 million vectors) -#. Large datasets (> 1 million vectors), goal is fast index creation at the expense of search quality -#. 
High quality at the expense of fast index creation - -Like other machine learning algorithms, vector search indexes generally have a training step – which means building the index – and an inference – or search step. The hyperparameters also tend to be broken down into build and search parameters. - -While not always the case, a general trend is often observed where the search speed decreases as the quality increases. This also tends to be the case with the index build performance, though different algorithms have different relationships between build time, quality, and search time. It’s important to understand that there’s no free lunch so there will always be trade-offs for each index type. - -Definition of quality -===================== - -What do we mean when we say quality of an index? In machine learning terminology, we measure this using recall, which is sometimes used interchangeably to mean accuracy, even though the two are slightly different measures. Recall, when used in vector search, essentially means “out of all of my results, which results would have been included in the exact results?” In vector search, the objective is to find some number of vectors that are closest to a given query vector so recall tends to be more relaxed than accuracy, discriminating only on set inclusion, rather than on exact ordered list matching, which would be closer to an accuracy measure. +This guide provides information on the differences between vector search indexes and fully-fledged vector databases. For more information on selecting and configuring vector search indexes, please refer to our :doc:`guide on choosing and configuring indexes ` +One of the primary differences between vector database indexes and traditional database indexes is that vector search often uses approximations to trade-off accuracy of the results for speed. Because of this, while many mature databases offer mechanisms to tune their indexes and achieve better performance, vector database indexes can return completely garbage results if they aren’t tuned for a reasonable level of search quality in addition to performance tuning. This is because vector database indexes are more closely related to machine learning models than they are to traditional database indexes. -Differences between vector databases and vector search -====================================================== +What are the differences between vector databases and vector search indexes? +============================================================================ -As mentioned above, vector search in and of itself refers to the objective of finding the closest vectors in an index around a given set of query vectors. At the lowest level, vector search indexes are just machine learning models, which have a build, search, and recall performance that can be traded off, depending on the algorithm and various hyperparameters. +Vector search in and of itself refers to the objective of finding the closest vectors in an index around a given set of query vectors. At the lowest level, vector search indexes are just machine learning models, which have a build, search, and recall performance that can be traded off, depending on the algorithm and various hyper-parameters. -Vector search indexes alone are considered primitives that enable, but are not considered by themselves, a fully-fledged vector database. 
Vector databases provide more production-level features that often use vector search algorithms in concert with other popular database design techniques to add important capabilities like durability, fault tolerance, vertical scalability, partition tolerance, and horizontal scalability. +Vector search indexes alone are considered building blocks that enable, but are not considered by themselves to be, a fully-fledged vector database. Vector databases provide more production-level features that often use vector search algorithms in concert with other popular database design techniques to add important capabilities like durability, fault tolerance, vertical scalability, partition tolerance, and horizontal scalability. In the world of vector databases, there are special purpose-built databases that focus primarily on vector search but might also provide some small capability of more general-purpose databases, like being able to perform a hybrid search across both vectors and metadata. Many general-purpose databases, both relational and nosql / document databases for example, are beginning to add first-class vector types also. @@ -53,7 +31,7 @@ This leads us to two core architectural designs that we encounter in vector data Locally partitioned vector search indexes ----------------------------------------- ->ost databases follow this design, and vectors are often first written to a write-ahead log for durability. After some number of vectors are written, the write-ahead logs become immutable and may be merged with other write-ahead logs before eventually being converted to a new vector search index. +Most databases follow this design, and vectors are often first written to a write-ahead log for durability. After some number of vectors are written, the write-ahead logs become immutable and may be merged with other write-ahead logs before eventually being converted to a new vector search index. The search is generally done over each locally partitioned index and the results combined. When setting hyperparameters, only the local vector search indexes need to be considered, though the same hyperparameters are going to be used across all of the local partitions. So, for example, if you’ve ingested 100M vectors but each partition only contains about 10M vectors, the size of the index only needs to consider its local 10M vectors. Details like number of vectors in the index are important, for example, when setting the number of clusters in an IVF-based (inverted file index) method, as I’ll cover below. @@ -69,74 +47,11 @@ Of course, the two approaches outlined above can also be used together (e.g. tra A challenge with GPUs in vector databases today is that the resulting vector indexes are expected to fit into the memory of available GPUs for fast search. That is to say, there doesn’t exist today an efficient mechanism for offloading or swapping GPU indexes so they can be cached from disk or host memory, for example. We are working on mechanisms to do this, and to also utilize technologies like GPUDirect Storage and GPUDirect RDMA to improve the IO performance further. -Configuring localized vector search indexes -=========================================== - -Since most vector databases use localized partitioning, we’ll focus on that in this document. If global partitioning becomes more widely used, we can add more details at a later date. 
- -Tiny datasets (< 100 thousand vectors) --------------------------------------- - -These datasets are very small and it’s questionable whether or not the GPU would provide any value at all. If the dimensionality is also relatively small (< 1024), you could just use brute-force or HNSW on the CPU and get great performance. If the dimensionality is relatively large (1536, 2048, 4096), you should consider using HNSW. If build time performance is critical, you should consider using CAGRA to build the graph and convert it to an HNSW graph for search (this capability exists today in the standalone cuVS/RAFT libraries and will soon be added to Milvus). An IVF flat index can also be a great candidate here, as it can improve the search performance over brute-force by partitioning the vector space and thus reducing the search space. - -You could even use FAISS or cuVS standalone if you don’t need the additional features in a fully-fledged database. - -Small datasets where GPU might not be needed (< 1 million vectors) ------------------------------------------------------------------- - -For smaller dimensionality, such as 1024 or below, you could consider using a brute-force (aka flat) index on GPU and get very good search performance with exact results. You could also use a graph-based index like HNSW on the CPU or CAGRA on the GPU. If build time is critical, you could even build a CAGRA graph on the GPU and convert it to HNSW graph on the CPU. - -For larger dimensionality (1536, 2048, 4096), you will start to see lower build-time performance with HNSW for higher quality search settings, and so it becomes more clear that building a CAGRA graph can be useful instead. - -Large datasets (> 1 million vectors), goal is fast index creation at the expense of search quality --------------------------------------------------------------------------------------------------- - -For fast ingest where slightly lower search quality is acceptable (85% recall and above), the IVF (inverted file index) methods can be very useful, as they can be very fast to build and still have acceptable search performance. IVF-flat index will partition the vectors into some number of clusters (specified by the user as n_lists) and at search time, some number of closest clusters (defined by n_probes) will be searched with brute-force for each query vector. - -IVF-PQ is similar to IVF-flat with the major difference that the vectors are compressed using a lossy product quantized compression so the index can have a much smaller footprint on the GPU. In general, it’s advised to set n_lists = sqrt(n_vectors) and set n_probes to some percentage of n_lists (e.g. 1%, 2%, 4%, 8%, 16%). Because IVF-PQ is a lossy compression, a refinement step can be performed by initially increasing the number of neighbors (by some multiple factor) and using the raw vectors to compute the exact distances, ultimately reducing the neighborhoods down to size k. Even a refinement of 2x (which would query initially for k*2) can be quite effective in making up for recall lost by the PQ compression, but it does come at the expense of having to keep the raw vectors around (keeping in mind many databases store the raw vectors anyways). - -Large datasets (> 1 million vectors), goal is high quality search at the expense of fast index creation -------------------------------------------------------------------------------------------------------- - -By trading off index creation performance, an extremely high quality search model can be built. 
Generally, all of the vector search index types have hyperparameters that have a direct correlation with the search accuracy and so they can be cranked up to yield better recall. Unfortunately, this can also significantly increase the index build time and reduce the search throughput. The trick here is to find the fastest build time that can achieve the best recall with the lowest latency or highest throughput possible. - -As for suggested index types, graph-based algorithms like HNSW and CAGRA tend to scale very well to larger datasets while having superior search performance with respect to quality. The challenge is that graph-based indexes require learning a graph and so, as the subtitle of this section suggests, have a tendency to be slower to build than other options. Using the CAGRA algorithm on the GPU can reduce the build time significantly over HNSW, while also having a superior throughput (and lower latency) than searching on the CPU. Currently, the downside to using CAGRA on the GPU is that it requires both the graph and the raw vectors to fit into GPU memory. A middle-ground can be reached by building a CAGRA graph on the GPU and converting it to an HNSW for high quality (and moderately fast) search on the CPU. - - Tuning and hyperparameter optimization ====================================== -Unfortunately, for large datasets, doing a hyperparameter optimization on the whole dataset is not always feasible and this is actually where the locally partitioned vector search indexes have an advantage because you can think of each smaller segment of the larger index as a uniform random sample of the total vectors in the dataset. This means that it is possible to perform a hyperparameter optimization on the smaller subsets and find reasonably acceptable parameters that should generalize fairly well to the entire dataset. Generally this hyperparameter optimization will require computing a ground truth on the subset with an exact method like brute-force and then using it to evaluate several searches on randomly sampled vectors. - -Full hyperparameter optimization may also not always be necessary- for example, once you have built a ground truth dataset on a subset, many times you can start by building an index with the default build parameters and then playing around with different search parameters until you get the desired quality and search performance. For massive indexes that might be multiple terabytes, you could also take this subsampling of, say, 10M vectors, train an index and then tune the search parameters from there. While there might be a small margin of error, the chosen build/search parameters should generalize fairly well for the databases that build locally partitioned indexes. - - -Summary of vector search index types -==================================== - -.. list-table:: - :widths: 25 25 50 - :header-rows: 1 - - * - Name - - Trade-offs - - Best to use with... - * - Brute-force (aka flat) - - Exact search but requires exhaustive distance computations - - Tiny datasets (< 100k vectors) - * - IVF-Flat - - Partitions the vector space to reduce distance computations for brute-force search at the expense of recall - - Small datasets (<1M vectors) or larger datasets (>1M vectors) where fast index build time is prioritized over quality. 
- * - IVF-PQ - - Adds product quantization to IVF-Flat to achieve scale at the expense of recall - - Large datasets (>>1M vectors) where fast index build is prioritized over quality - * - HNSW - - Significantly reduces distance computations at the expense of longer build times - - Small datasets (<1M vectors) or large datasets (>1M vectors) where quality and speed of search are prioritized over index build times - * - CAGRA - - Significantly reduces distance computations at the expense of longer build times (though build times improve over HNSW) - - Large datasets (>>1M vectors) where quality and speed of search are prioritized over index build times but index build times are still important. - * - CAGRA build +HNSW search - - (coming soon to Milvus) - - Significantly reduces distance computations and improves build times at the expense of higher search latency / lower throughput. - Large datasets (>>1M vectors) where index build times and quality of search is important but GPU resources are limited and latency of search is not. +Unfortunately, for large datasets, doing a hyper-parameter optimization on the whole dataset is not always feasible and this is actually where the locally partitioned vector search indexes have an advantage because you can think of each smaller segment of the larger index as a uniform random sample of the total vectors in the dataset. This means that it is possible to perform a hyperparameter optimization on the smaller subsets and find reasonably acceptable parameters that should generalize fairly well to the entire dataset. Generally this hyperparameter optimization will require computing a ground truth on the subset with an exact method like brute-force and then using it to evaluate several searches on randomly sampled vectors. + +Full hyper-parameter optimization may also not always be necessary- for example, once you have built a ground truth dataset on a subset, many times you can start by building an index with the default build parameters and then playing around with different search parameters until you get the desired quality and search performance. For massive indexes that might be multiple terabytes, you could also take this subsampling of, say, 10M vectors, train an index and then tune the search parameters from there. While there might be a small margin of error, the chosen build/search parameters should generalize fairly well for the databases that build locally partitioned indexes. + +Refer to our :doc:`tuning guide ` for more information and examples on how to efficiently and automatically tune your vector search indexes based on your needs. \ No newline at end of file From 397e5ff735750462ad6e8ea416038ec4199257af Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Wed, 2 Oct 2024 17:28:13 -0400 Subject: [PATCH 11/22] MOre docs updates --- docs/source/build.rst | 4 +- docs/source/cuvs_bench/build.rst | 59 ++ docs/source/cuvs_bench/index.rst | 702 ++++++++++++++++++++ docs/source/cuvs_bench/param_tuning.rst | 653 ++++++++++++++++++ docs/source/cuvs_bench/wiki_all_dataset.rst | 55 ++ 5 files changed, 1471 insertions(+), 2 deletions(-) create mode 100644 docs/source/cuvs_bench/build.rst create mode 100644 docs/source/cuvs_bench/index.rst create mode 100644 docs/source/cuvs_bench/param_tuning.rst create mode 100644 docs/source/cuvs_bench/wiki_all_dataset.rst diff --git a/docs/source/build.rst b/docs/source/build.rst index 9c7c98989..ba0caafca 100644 --- a/docs/source/build.rst +++ b/docs/source/build.rst @@ -45,14 +45,14 @@ C/C++ Package .. 
code-block:: bash

-   mamba install -c rapidsai -c conda-forge -c nvidia libcuvs cuda-version=12.5
+   mamba install -c rapidsai -c conda-forge -c nvidia libcuvs cuda-version=12.5

Python Package
~~~~~~~~~~~~~~

.. code-block:: bash

-   mamba install -c rapidsai -c conda-forge -c nvidia cuvs cuda-version=12.5
+   mamba install -c rapidsai -c conda-forge -c nvidia cuvs cuda-version=12.5

Python through Pip
^^^^^^^^^^^^^^^^^^
diff --git a/docs/source/cuvs_bench/build.rst b/docs/source/cuvs_bench/build.rst
new file mode 100644
index 000000000..4a7fc4897
--- /dev/null
+++ b/docs/source/cuvs_bench/build.rst
@@ -0,0 +1,59 @@
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Build cuVS Bench From Source
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Dependencies
+============
+
+CUDA 11 and a GPU with Pascal architecture or later are required to run the benchmarks.
+
+Please refer to the :doc:`installation docs <../build>` for the base requirements to build cuVS.
+
+In addition to the base requirements for building cuVS, the following additional dependencies are needed to build the ANN benchmarks:
+
+1. FAISS GPU >= 1.7.1
+2. Google Logging (GLog)
+3. H5Py
+4. HNSWLib
+5. nlohmann_json
+6. GGNN
+
+`rapids-cmake `_ is used to build the ANN benchmarks, so the code for dependencies not already supplied in the CUDA toolkit will be downloaded and built automatically.
+
+The easiest (and most reproducible) way to install the dependencies needed to build the ANN benchmarks is to use the conda environment file located in the `conda/environments` directory of the cuVS repository. The following command will use `mamba` (which is preferred over `conda`) to build and activate a new environment for compiling the benchmarks:
+
+.. code-block:: bash
+
+    mamba env create --name cuvs_benchmarks -f conda/environments/cuvs_bench_cuda-118_arch-x86_64.yaml
+    conda activate cuvs_benchmarks
+
+The above conda environment will also reduce the compile times, as dependencies like FAISS will already be installed and will not need to be compiled with `rapids-cmake`.
+
+Compiling the Benchmarks
+========================
+
+After the needed dependencies are satisfied, the easiest way to compile the ANN benchmarks is through the `build.sh` script in the root of the cuVS source code repository. The following will build the executables for all the supported algorithms:
+
+.. code-block:: bash
+
+    ./build.sh cuvs-bench
+
+You can limit the algorithms that are built by providing a semicolon-delimited list of executable names (each algorithm is suffixed with `_ANN_BENCH`). Quote the list so the shell does not interpret the semicolons as command separators:
+
+.. code-block:: bash
+
+    ./build.sh cuvs-bench -n --limit-bench-ann="HNSWLIB_ANN_BENCH;CUVS_IVF_PQ_ANN_BENCH"
+
+Available targets to use with `--limit-bench-ann` are:
+
+- FAISS_GPU_IVF_FLAT_ANN_BENCH
+- FAISS_GPU_IVF_PQ_ANN_BENCH
+- FAISS_CPU_IVF_FLAT_ANN_BENCH
+- FAISS_CPU_IVF_PQ_ANN_BENCH
+- FAISS_GPU_FLAT_ANN_BENCH
+- FAISS_CPU_FLAT_ANN_BENCH
+- GGNN_ANN_BENCH
+- HNSWLIB_ANN_BENCH
+- CUVS_CAGRA_ANN_BENCH
+- CUVS_IVF_PQ_ANN_BENCH
+- CUVS_IVF_FLAT_ANN_BENCH
+
+By default, the `*_ANN_BENCH` executables infer the dataset's datatype from the filename's extension. For example, an extension of `fbin` uses a `float` datatype, `f16bin` uses a `float16` datatype, `i8bin` uses an `int8_t` datatype, and `u8bin` uses a `uint8_t` type. Currently, only `float`, `float16`, `int8_t`, and `uint8_t` are supported.
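+The sketch below illustrates the convention; the `my-dataset` directory and file names are hypothetical placeholders, and only the extensions matter to the executables:
+
+.. code-block:: bash
+
+    # List a (hypothetical) dataset folder; the extension alone determines how each file is read
+    ls datasets/my-dataset/
+    # base.fbin    -> float
+    # base.f16bin  -> float16
+    # base.i8bin   -> int8_t
+    # base.u8bin   -> uint8_t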
\ No newline at end of file
diff --git a/docs/source/cuvs_bench/index.rst b/docs/source/cuvs_bench/index.rst
new file mode 100644
index 000000000..faddbb9dc
--- /dev/null
+++ b/docs/source/cuvs_bench/index.rst
@@ -0,0 +1,702 @@
+~~~~~~~~~~
+cuVS Bench
+~~~~~~~~~~
+
+cuVS bench provides a benchmarking tool for various ANN search implementations. It's especially suitable for comparing GPU implementations as well as comparing GPU against CPU.
+
+- `Installing the benchmarks`_
+
+  * `Conda`_
+
+  * `Docker`_
+
+
+Installing the benchmarks
+=========================
+
+There are two main ways pre-compiled benchmarks are distributed:
+
+- `Conda`_: For users who are not using containers but want an easy-to-install and easy-to-use Python package. Pip wheels are planned to be added as an alternative for users that cannot use conda and prefer not to use containers.
+- `Docker`_: Only needs docker and `NVIDIA docker <https://github.com/NVIDIA/nvidia-docker>`_ to use. Provides a single docker run command for basic dataset benchmarking, as well as all the functionality of the conda solution inside the containers.
+
+Conda
+-----
+
+.. code-block:: bash
+
+    mamba create --name cuvs_benchmarks
+    conda activate cuvs_benchmarks
+
+    # to install GPU package:
+    mamba install -c rapidsai -c conda-forge -c nvidia cuvs-bench= cuda-version=11.8*
+
+    # to install CPU package for usage in CPU-only systems:
+    mamba install -c rapidsai -c conda-forge cuvs-bench-cpu
+
+The channel `rapidsai` can easily be substituted with `rapidsai-nightly` if nightly benchmarks are desired. The CPU package currently only supports running the HNSW benchmarks.
+
+Please see the :doc:`build instructions <../build>` to build the benchmarks from source.
+
+Docker
+------
+
+We provide images for GPU enabled systems, as well as systems without a GPU. The following images are available:
+
+- `cuvs_bench`: Contains GPU and CPU benchmarks and can run all supported algorithms. Will download million-scale datasets as required. Best suited for users that prefer a smaller container size for GPU based systems. Requires the NVIDIA Container Toolkit to run GPU algorithms; can run CPU algorithms without it.
+- `cuvs_bench-datasets`: Contains the GPU and CPU benchmarks with million-scale datasets already included in the container. Best suited for users that want to run multiple million-scale datasets already included in the image.
+- `cuvs_bench-cpu`: Contains only CPU benchmarks with minimal size. Best suited for users that want the smallest containers to reproduce benchmarks on systems without a GPU.
+
+Nightly images are located on `dockerhub `_, while release (stable) versions are located in `NGC `_, starting with release 24.10.
+
+The following command pulls the nightly container for Python version 3.10, CUDA version 12.0, and cuVS version 24.10:
+
+.. code-block:: bash
+
+    docker pull rapidsai/cuvs_bench:24.10a-cuda12.0-py3.10 # substitute cuvs_bench for the exact desired container.
+
+The CUDA and Python versions can be changed to any of the supported values:
+
+- Supported CUDA versions: 11.4 and 12.x
+- Supported Python versions: 3.9 and 3.10.
+
+You can also see the exact versions on the dockerhub site:
+
+- `cuVS bench images `_
+- `cuVS bench with datasets preloaded images `_
+- `cuVS bench CPU only images `_
+
+**Note:** GPU containers use the CUDA toolkit from inside the container; the only requirement is a driver installed on the host machine that supports that version. So, for example, CUDA 11.8 containers can run in systems with a CUDA 12.x capable driver.
Please also note that the Nvidia-Docker runtime from the `Nvidia Container Toolkit `_ is required to use GPUs inside docker containers. + +How to run the benchmarks +========================= + +We provide a collection of lightweight Python scripts to run the benchmarks. There are 4 general steps to running the benchmarks and visualizing the results. +#. Prepare Dataset +#. Build Index and Search Index +#. Data Export +#. Plot Results + +Step 1: Prepare the dataset +--------------------------- + +The script `cuvs_bench.get_dataset` will download and unpack the dataset in directory that the user provides. As of now, only million-scale datasets are supported by this script. For more information on :doc:`datasets and formats `. + +The usage of this script is: + +.. code-block:: bash + + usage: get_dataset.py [-h] [--name NAME] [--dataset-path DATASET_PATH] [--normalize] + + options: + -h, --help show this help message and exit + --dataset DATASET dataset to download (default: glove-100-angular) + --dataset-path DATASET_PATH + path to download dataset (default: ${RAPIDS_DATASET_ROOT_DIR}) + --normalize normalize cosine distance to inner product (default: False) + +When option `normalize` is provided to the script, any dataset that has cosine distances +will be normalized to inner product. So, for example, the dataset `glove-100-angular` +will be written at location `datasets/glove-100-inner/`. + +Step 2: Build and search index +------------------------------ + +The script `cuvs_bench.run` will build and search indices for a given dataset and its +specified configuration. + +The usage of the script `cuvs_bench.run` is: + +.. code-block:: bash + + usage: __main__.py [-h] [--subset-size SUBSET_SIZE] [-k COUNT] [-bs BATCH_SIZE] [--dataset-configuration DATASET_CONFIGURATION] [--configuration CONFIGURATION] [--dataset DATASET] + [--dataset-path DATASET_PATH] [--build] [--search] [--algorithms ALGORITHMS] [--groups GROUPS] [--algo-groups ALGO_GROUPS] [-f] [-m SEARCH_MODE] + + options: + -h, --help show this help message and exit + --subset-size SUBSET_SIZE + the number of subset rows of the dataset to build the index (default: None) + -k COUNT, --count COUNT + the number of nearest neighbors to search for (default: 10) + -bs BATCH_SIZE, --batch-size BATCH_SIZE + number of query vectors to use in each query trial (default: 10000) + --dataset-configuration DATASET_CONFIGURATION + path to YAML configuration file for datasets (default: None) + --configuration CONFIGURATION + path to YAML configuration file or directory for algorithms Any run groups found in the specified file/directory will automatically override groups of the same name + present in the default configurations, including `base` (default: None) + --dataset DATASET name of dataset (default: glove-100-inner) + --dataset-path DATASET_PATH + path to dataset folder, by default will look in RAPIDS_DATASET_ROOT_DIR if defined, otherwise a datasets subdirectory from the calling directory (default: + os.getcwd()/datasets/) + --build + --search + --algorithms ALGORITHMS + run only comma separated list of named algorithms. If parameters `groups` and `algo-groups are both undefined, then group `base` is run by default (default: None) + --groups GROUPS run only comma separated groups of parameters (default: base) + --algo-groups ALGO_GROUPS + add comma separated . to run. 
Example usage: "--algo-groups=raft_cagra.large,hnswlib.large" (default: None) + -f, --force re-run algorithms even if their results already exist (default: False) + -m SEARCH_MODE, --search-mode SEARCH_MODE + run search in 'latency' (measure individual batches) or 'throughput' (pipeline batches and measure end-to-end) mode (default: throughput) + -t SEARCH_THREADS, --search-threads SEARCH_THREADS + specify the number threads to use for throughput benchmark. Single value or a pair of min and max separated by ':'. Example --search-threads=1:4. Power of 2 values between 'min' and 'max' will be used. If only 'min' is + specified, then a single test is run with 'min' threads. By default min=1, max=. (default: None) + -r, --dry-run dry-run mode will convert the yaml config for the specified algorithms and datasets to the json format that's consumed by the lower-level c++ binaries and then print the command to run execute the benchmarks but + will not actually execute the command. (default: False) + +`dataset`: name of the dataset to be searched in `datasets.yaml`_ + +`dataset-configuration`: optional filepath to custom dataset YAML config which has an entry for arg `dataset` + +`configuration`: optional filepath to YAML configuration for an algorithm or to directory that contains YAML configurations for several algorithms. Refer to `Dataset.yaml config`_ for more info. + +`algorithms`: runs all algorithms that it can find in YAML configs found by `configuration`. By default, only `base` group will be run. + +`groups`: run only specific groups of parameters configurations for an algorithm. Groups are defined in YAML configs (see `configuration`), and by default run `base` group + +`algo-groups`: this parameter is helpful to append any specific algorithm+group combination to run the benchmark for in addition to all the arguments from `algorithms` and `groups`. It is of the format `.`, or for example, `cuvs_cagra.large` + +For every algorithm run by this script, it outputs an index build statistics JSON file in `/result/build/<{algo},{group}.json>` +and an index search statistics JSON file in `/result/search/<{algo},{group},k{k},bs{batch_size}.json>`. NOTE: The filenames will not have ",{group}" if `group = "base"`. + +For every algorithm run by this script, it outputs an index build statistics JSON file in `/result/build/<{algo},{group}.json>` +and an index search statistics JSON file in `/result/search/<{algo},{group},k{k},bs{batch_size}.json>`. NOTE: The filenames will not have ",{group}" if `group = "base"`. + +`dataset-path` : +#. data is read from `/` +#. indices are built in `//index` +#. build/search results are stored in `//result` + +`build` and `search` : if both parameters are not supplied to the script then it is assumed both are `True`. + +`indices` and `algorithms` : these parameters ensure that the algorithm specified for an index is available in `algos.yaml` and not disabled, as well as having an associated executable. + +Step 3: Data export +------------------- + +The script `cuvs_bench.data_export` will convert the intermediate JSON outputs produced by `cuvs_bench.run` to more easily readable CSV files, which are needed to build charts made by `cuvs_bench.plot`. + +.. 
code-block:: bash + + usage: data_export.py [-h] [--dataset DATASET] [--dataset-path DATASET_PATH] + + options: + -h, --help show this help message and exit + --dataset DATASET dataset to download (default: glove-100-inner) + --dataset-path DATASET_PATH + path to dataset folder (default: ${RAPIDS_DATASET_ROOT_DIR}) + +Build statistics CSV file is stored in `/result/build/<{algo},{group}.csv>` +and index search statistics CSV file in `/result/search/<{algo},{group},k{k},bs{batch_size},{suffix}.csv>`, where suffix has three values: +#. `raw`: All search results are exported +#. `throughput`: Pareto frontier of throughput results is exported +#. `latency`: Pareto frontier of latency results is exported + +Step 4: Plot results +-------------------- + +The script `cuvs_bench.plot` will plot results for all algorithms found in index search statistics CSV files `/result/search/*.csv`. + +The usage of this script is: + +.. code-block:: bash + + usage: [-h] [--dataset DATASET] [--dataset-path DATASET_PATH] [--output-filepath OUTPUT_FILEPATH] [--algorithms ALGORITHMS] [--groups GROUPS] [--algo-groups ALGO_GROUPS] + [-k COUNT] [-bs BATCH_SIZE] [--build] [--search] [--x-scale X_SCALE] [--y-scale {linear,log,symlog,logit}] [--x-start X_START] [--mode {throughput,latency}] + [--time-unit {s,ms,us}] [--raw] + + options: + -h, --help show this help message and exit + --dataset DATASET dataset to plot (default: glove-100-inner) + --dataset-path DATASET_PATH + path to dataset folder (default: /home/coder/raft/datasets/) + --output-filepath OUTPUT_FILEPATH + directory for PNG to be saved (default: /home/coder/raft) + --algorithms ALGORITHMS + plot only comma separated list of named algorithms. If parameters `groups` and `algo-groups are both undefined, then group `base` is plot by default + (default: None) + --groups GROUPS plot only comma separated groups of parameters (default: base) + --algo-groups ALGO_GROUPS, --algo-groups ALGO_GROUPS + add comma separated . to plot. Example usage: "--algo-groups=raft_cagra.large,hnswlib.large" (default: None) + -k COUNT, --count COUNT + the number of nearest neighbors to search for (default: 10) + -bs BATCH_SIZE, --batch-size BATCH_SIZE + number of query vectors to use in each query trial (default: 10000) + --build + --search + --x-scale X_SCALE Scale to use when drawing the X-axis. Typically linear, logit or a2 (default: linear) + --y-scale {linear,log,symlog,logit} + Scale to use when drawing the Y-axis (default: linear) + --x-start X_START Recall values to start the x-axis from (default: 0.8) + --mode {throughput,latency} + search mode whose Pareto frontier is used on the y-axis (default: throughput) + --time-unit {s,ms,us} + time unit to plot when mode is latency (default: ms) + --raw Show raw results (not just Pareto frontier) of mode arg (default: False) + +`mode`: plots pareto frontier of `throughput` or `latency` results exported in the previous step + +`algorithms`: plots all algorithms that it can find results for the specified `dataset`. By default, only `base` group will be plotted. + +`groups`: plot only specific groups of parameters configurations for an algorithm. Groups are defined in YAML configs (see `configuration`), and by default run `base` group + +`algo-groups`: this parameter is helpful to append any specific algorithm+group combination to plot results for in addition to all the arguments from `algorithms` and `groups`. 
It is of the format `.`, or for example, `raft_cagra.large` + +Running the benchmarks +====================== + +End-to-end: smaller-scale benchmarks (<1M to 10M) +------------------------------------------------- + +The steps below demonstrate how to download, install, and run benchmarks on a subset of 10M vectors from the Yandex Deep-1B dataset By default the datasets will be stored and used from the folder indicated by the `RAPIDS_DATASET_ROOT_DIR` environment variable if defined, otherwise a datasets sub-folder from where the script is being called: + +.. code-block:: bash + + + # (1) prepare dataset. + python -m cuvs_bench.get_dataset --dataset deep-image-96-angular --normalize + + # (2) build and search index + python -m cuvs_bench.run --dataset deep-image-96-inner --algorithms cuvs_cagra --batch-size 10 -k 10 + + # (3) export data + python -m cuvs_bench.data_export --dataset deep-image-96-inner + + # (4) plot results + python -m cuvs_bench.plot --dataset deep-image-96-inner + + +.. list-table:: + + * - Dataset name + - Train rows + - Columns + - Test rows + - Distance + + * - `deep-image-96-angular` + - 10M + - 96 + - 10K + - Angular + + * - `fashion-mnist-784-euclidean` + - 60K + - 784 + - 10K + - Euclidean + + * - `glove-50-angular` + - 1.1M + - 50 + - 10K + - Angular + + * - `glove-100-angular` + - 1.1M + - 100 + - 10K + - Angular + + * - `mnist-784-euclidean` + - 60K + - 784 + - 10K + - Euclidean + + * - `nytimes-256-angular` + - 290K + - 256 + - 10K + - Angular + + * - `sift-128-euclidean` + - 1M + - 128 + - 10K + - Euclidean + +All of the datasets above contain ground test datasets with 100 neighbors. Thus `k` for these datasets must be less than or equal to 100. + +End-to-end: large-scale benchmarks (>10M vectors) +------------------------------------------------- + +`cuvs_bench.get_dataset` cannot be used to download the `billion-scale datasets`_ due to their size. You should instead use our billion-scale datasets guide to download and prepare them. +All other python commands mentioned below work as intended once the billion-scale dataset has been downloaded. + +To download billion-scale datasets, visit `big-ann-benchmarks `_ + +We also provide a new dataset called `wiki-all` containing 88 million 768-dimensional vectors. This dataset is meant for benchmarking a realistic retrieval-augmented generation (RAG)/LLM embedding size at scale. It also contains 1M and 10M vector subsets for smaller-scale experiments. See our :doc:`Wiki-all Dataset Guide ` for more information and to download the dataset. + + +The steps below demonstrate how to download, install, and run benchmarks on a subset of 100M vectors from the Yandex Deep-1B dataset. Please note that datasets of this scale are recommended for GPUs with larger amounts of memory, such as the A100 or H100. + +.. 
code-block:: bash + + mkdir -p datasets/deep-1B + # (1) prepare dataset + # download manually "Ground Truth" file of "Yandex DEEP" + # suppose the file name is deep_new_groundtruth.public.10K.bin + python -m raft_ann_bench.split_groundtruth --groundtruth datasets/deep-1B/deep_new_groundtruth.public.10K.bin + # two files 'groundtruth.neighbors.ibin' and 'groundtruth.distances.fbin' should be produced + + # (2) build and search index + python -m raft_ann_bench.run --dataset deep-1B --algorithms raft_cagra --batch-size 10 -k 10 + + # (3) export data + python -m raft_ann_bench.data_export --dataset deep-1B + + # (4) plot results + python -m raft_ann_bench.plot --dataset deep-1B + +The usage of `python -m cuvs_bench.split_groundtruth` is: + +.. code-block:: bash + usage: split_groundtruth.py [-h] --groundtruth GROUNDTRUTH + + options: + -h, --help show this help message and exit + --groundtruth GROUNDTRUTH + Path to billion-scale dataset groundtruth file (default: None) + +Running with Docker containers +------------------------------ + +Two methods are provided for running the benchmarks with the Docker containers. + +End-to-end run on GPU +~~~~~~~~~~~~~~~~~~~~~ + +When no other entrypoint is provided, an end-to-end script will run through all the steps in `Running the benchmarks`_ above. + +For GPU-enabled systems, the `DATA_FOLDER` variable should be a local folder where you want datasets stored in `$DATA_FOLDER/datasets` and results in `$DATA_FOLDER/result` (we highly recommend `$DATA_FOLDER` to be a dedicated folder for the datasets and results of the containers): + +.. code-block:: bash + + export DATA_FOLDER=path/to/store/datasets/and/results + docker run --gpus all --rm -it -u $(id -u) \ + -v $DATA_FOLDER:/data/benchmarks \ + rapidsai/cuvs-bench:24.10a-cuda11.8-py3.10 \ + "--dataset deep-image-96-angular" \ + "--normalize" \ + "--algorithms cuvs_cagra,cuvs_ivf_pq --batch-size 10 -k 10" \ + "" + +Usage of the above command is as follows: + +.. list-table:: + + * - Argument + - Description + + * - `rapidsai/cuvs-bench:24.10a-cuda11.8-py3.10` + - Image to use. Can be either `cuvs-bench` or `cuvs-bench-datasets` + + * - `"--dataset deep-image-96-angular"` + - Dataset name + + * - `"--normalize"` + - Whether to normalize the dataset + + * - `"--algorithms cuvs_cagra,hnswlib --batch-size 10 -k 10"` + - Arguments passed to the `run` script, such as the algorithms to benchmark, the batch size, and `k` + + * - `""` + - Additional (optional) arguments that will be passed to the `plot` script. + +***Note about user and file permissions:*** The flag `-u $(id -u)` allows the user inside the container to match the `uid` of the user outside the container, allowing the container to read and write to the mounted volume indicated by the `$DATA_FOLDER` variable. + +End-to-end run on CPU +~~~~~~~~~~~~~~~~~~~~~ + +The container arguments in the above section also be used for the CPU-only container, which can be used on systems that don't have a GPU installed. + +***Note:*** the image changes to `cuvs-bench-cpu` container and the `--gpus all` argument is no longer used: + +.. 
code-block:: bash + + export DATA_FOLDER=path/to/store/datasets/and/results + docker run --rm -it -u $(id -u) \ + -v $DATA_FOLDER:/data/benchmarks \ + rapidsai/cuvs-bench-cpu:24.10a-py3.10 \ + "--dataset deep-image-96-angular" \ + "--normalize" \ + "--algorithms hnswlib --batch-size 10 -k 10" \ + "" + +Manually run the scripts inside the container +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +All of the `cuvs-bench` images contain the Conda packages, so they can be used directly by logging directly into the container itself: + +.. code-block:: bash + + export DATA_FOLDER=path/to/store/datasets/and/results + docker run --gpus all --rm -it -u $(id -u) \ + --entrypoint /bin/bash \ + --workdir /data/benchmarks \ + -v $DATA_FOLDER:/data/benchmarks \ + rapidsai/cuvs-bench:24.10a-cuda11.8-py3.10 + +This will drop you into a command line in the container, with the `cuvs-bench` python package ready to use, as described in the [Running the benchmarks](#running-the-benchmarks) section above: + +.. code-block:: bash + + (base) root@00b068fbb862:/data/benchmarks# python -m cuvs_bench.get_dataset --dataset deep-image-96-angular --normalize + +Additionally, the containers can be run in detached mode without any issue. + +Evaluating the results +---------------------- + +The benchmarks capture several different measurements. The table below describes each of the measurements for index build benchmarks: + +.. list-table:: + + * - Name + - Description + + * - Benchmark + - A name that uniquely identifies the benchmark instance + + * - Time + - Wall-time spent training the index + + * - CPU + - CPU time spent training the index + + * - Iterations + - Number of iterations (this is usually 1) + + * - GPU + - GU time spent building + + * - index_size + - Number of vectors used to train index + +The table below describes each of the measurements for the index search benchmarks. The most important measurements `Latency`, `items_per_second`, `end_to_end`. + +.. list-table:: + + * - Name + - Description + + * - Benchmark + - A name that uniquely identifies the benchmark instance + + * - Time + - The wall-clock time of a single iteration (batch) divided by the number of threads. + + * - CPU + - The average CPU time (user + sys time). This does not include idle time (which can also happen while waiting for GPU sync). + + * - Iterations + - Total number of batches. This is going to be `total_queries` / `n_queries`. + + * - GPU + - GPU latency of a single batch (seconds). In throughput mode this is averaged over multiple threads. + + * - Latency + - Latency of a single batch (seconds), calculated from wall-clock time. In throughput mode this is averaged over multiple threads. + + * - Recall + - Proportion of correct neighbors to ground truth neighbors. Note this column is only present if groundtruth file is specified in dataset configuration. + + * - items_per_second + - Total throughput, a.k.a Queries per second (QPS). This is approximately `total_queries` / `end_to_end`. + + * - k + - Number of neighbors being queried in each iteration + + * - end_to_end + - Total time taken to run all batches for all iterations + + * - n_queries + - Total number of query vectors in each batch + + * - total_queries + - Total number of vectors queries across all iterations ( = `iterations` * `n_queries`) + +Note the following: +- A slightly different method is used to measure `Time` and `end_to_end`. That is why `end_to_end` = `Time` * `Iterations` holds only approximately. 
+- The actual table displayed on the screen may differ slightly as the hyper-parameters will also be displayed for each different combination being benchmarked. +- Recall calculation: the number of queries processed per test depends on the number of iterations. Because of this, recall can show slight fluctuations if less neighbors are processed then it is available for the benchmark. + +Creating and customizing dataset configurations +=============================================== + +A single configuration will often define a set of algorithms, with associated index and search parameters, that can be generalize across datasets. We use YAML to define dataset specific and algorithm specific configurations. + +A default `datasets.yaml` is provided by RAFT in `${RAFT_HOME}/python/raft-ann-bench/src/raft_ann_bench/run/conf` with configurations available for several datasets. Here's a simple example entry for the `sift-128-euclidean` dataset: + +.. code-block:: yaml + + - name: sift-128-euclidean + base_file: sift-128-euclidean/base.fbin + query_file: sift-128-euclidean/query.fbin + groundtruth_neighbors_file: sift-128-euclidean/groundtruth.neighbors.ibin + dims: 128 + distance: euclidean + +Configuration files for ANN algorithms supported by `cuvs-bench` are provided in `${RAFT_HOME}/python/cuvs-bench/src/raft_ann_bench/run/conf`. `cuvs_cagra` algorithm configuration looks like: + +.. code-block:: yaml + + name: cuvs_cagra + groups: + base: + build: + graph_degree: [32, 64] + intermediate_graph_degree: [64, 96] + graph_build_algo: ["NN_DESCENT"] + search: + itopk: [32, 64, 128] + + large: + build: + graph_degree: [32, 64] + search: + itopk: [32, 64, 128] + +The default parameters for which the benchmarks are run can be overridden by creating a custom YAML file for algorithms with a `base` group. + +There config above has 2 fields: +1. `name` - define the name of the algorithm for which the parameters are being specified. +2. `groups` - define a run group which has a particular set of parameters. Each group helps create a cross-product of all hyper-parameter fields for `build` and `search`. + +The table below contains all algorithms supported by cuVS. Each unique algorithm will have its own set of `build` and `search` settings. The :doc:`ANN Algorithm Parameter Tuning Guide ` contains detailed instructions on choosing build and search parameters for each supported algorithm. + +.. list-table:: + + * - Library + - Algorithms + + * - FAISS_GPU + - `faiss_gpu_flat`, `faiss_gpu_ivf_flat`, `faiss_gpu_ivf_pq` + + * - FAISS_CPU + - `faiss_cpu_flat`, `faiss_cpu_ivf_flat`, `faiss_cpu_ivf_pq` + + * - GGNN + - `ggnn` + + * - HNSWLIB + - `hnswlib` + + * - cuVS + - `cuvs_brute_force`, `cuvs_cagra`, `cuvs_ivf_flat`, `cuvs_ivf_pq`, `cuvs_cagra_hnswlib` + +Adding a new ANN algorithm +========================== + +Implementation and configuration +-------------------------------- + +Implementation of a new algorithm should be a C++ class that inherits `class ANN` (defined in `cpp/bench/ann/src/ann.h`) and implements all the pure virtual functions. + +In addition, it should define two `struct`s for building and searching parameters. The searching parameter class should inherit `struct ANN::AnnSearchParam`. Take `class HnswLib` as an example, its definition is: + +.. 
code-block:: c++ + template + class HnswLib : public ANN { + public: + struct BuildParam { + int M; + int ef_construction; + int num_threads; + }; + + using typename ANN::AnnSearchParam; + struct SearchParam : public AnnSearchParam { + int ef; + int num_threads; + }; + + // ... + }; + + +The benchmark program uses JSON format natively in a configuration file to specify indexes to build, along with the build and search parameters. However the JSON config files are overly verbose and are not meant to be used directly. Instead, the Python scripts parse YAML and create these json files automatically. It's important to realize that these json objects align with the yaml objects for `build_param`, whose value is a JSON object, and `search_param`, whose value is an array of JSON objects. Take the json configuration for `HnswLib` as an example of the json after it's been parsed from yaml: + +.. code-block:: json + { + "name" : "hnswlib.M12.ef500.th32", + "algo" : "hnswlib", + "build_param": {"M":12, "efConstruction":500, "numThreads":32}, + "file" : "/path/to/file", + "search_params" : [ + {"ef":10, "numThreads":1}, + {"ef":20, "numThreads":1}, + {"ef":40, "numThreads":1}, + ], + "search_result_file" : "/path/to/file" + }, + +The build and search params are ultimately passed to the C++ layer as json objects for each param configuration to benchmark. The code below shows how to parse these params for `Hnswlib`: + +1. First, add two functions for parsing JSON object to `struct BuildParam` and `struct SearchParam`, respectively: + +.. code-block:: c++ + + template + void parse_build_param(const nlohmann::json& conf, + typename cuann::HnswLib::BuildParam& param) { + param.ef_construction = conf.at("efConstruction"); + param.M = conf.at("M"); + if (conf.contains("numThreads")) { + param.num_threads = conf.at("numThreads"); + } + } + + template + void parse_search_param(const nlohmann::json& conf, + typename cuann::HnswLib::SearchParam& param) { + param.ef = conf.at("ef"); + if (conf.contains("numThreads")) { + param.num_threads = conf.at("numThreads"); + } + } + + + +2. Next, add corresponding `if` case to functions `create_algo()` (in `cpp/bench/ann/) and `create_search_param()` by calling parsing functions. The string literal in `if` condition statement must be the same as the value of `algo` in configuration file. For example, + +.. code-block:: c++ + // JSON configuration file contains a line like: "algo" : "hnswlib" + if (algo == "hnswlib") { + // ... + } + +Adding a Cmake target +--------------------- + +In `cuvs/cpp/bench/ann/CMakeLists.txt`, we provide a `CMake` function to configure a new Benchmark target with the following signature: + + +.. code-block:: cmake + ConfigureAnnBench( + NAME + PATH + INCLUDES + CXXFLAGS + LINKS + ) + +To add a target for `HNSWLIB`, we would call the function as: + +.. code-block:: cmake + + ConfigureAnnBench( + NAME HNSWLIB PATH bench/ann/src/hnswlib/hnswlib_benchmark.cpp INCLUDES + ${CMAKE_CURRENT_BINARY_DIR}/_deps/hnswlib-src/hnswlib CXXFLAGS "${HNSW_CXX_FLAGS}" + ) + +This will create an executable called `HNSWLIB_ANN_BENCH`, which can then be used to run `HNSWLIB` benchmarks. + +Add a new entry to `algos.yaml` to map the name of the algorithm to its binary executable and specify whether the algorithm requires GPU support. + +.. code-block:: yaml + cuvs_ivf_pq: + executable: RAFT_IVF_PQ_ANN_BENCH + requires_gpu: true + +`executable` : specifies the name of the binary that will build/search the index. It is assumed to be available in `cuvs/cpp/build/`. 
+`requires_gpu` : denotes whether an algorithm requires GPU to run. \ No newline at end of file diff --git a/docs/source/cuvs_bench/param_tuning.rst b/docs/source/cuvs_bench/param_tuning.rst new file mode 100644 index 000000000..8c247c5cb --- /dev/null +++ b/docs/source/cuvs_bench/param_tuning.rst @@ -0,0 +1,653 @@ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +cuVS Bench Parameter Tuning Guide +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This guide outlines the various parameter settings that can be specified in :doc:`cuVS Benchmarks ` yaml configuration files and explains the impact they have on corresponding algorithms to help inform their settings for benchmarking across desired levels of recall. + +cuVS Indexes +============ + +cuvs_brute_force +---------------- + +Use cuVS brute-force index for exact search. Brute-force has no further build or search parameters. + +cuvs_ivf_flat +------------- + +IVF-flat uses an inverted-file index, which partitions the vectors into a series of clusters, or lists, storing them in an interleaved format which is optimized for fast distance computation. The searching of an IVF-flat index reduces the total vectors in the index to those within some user-specified nearest clusters called probes. + +IVF-flat is a simple algorithm which won't save any space, but it provides competitive search times even at higher levels of recall. + +.. list-table:: + + * - Parameter + - Type + - Required + - Data Type + - Default + - Description + + * - `nlist` + - `build` + - Y + - Positive integer >0 + - Number of clusters to partition the vectors into. Larger values will put less points into each cluster but this will impact index build time as more clusters need to be trained. + + * - `niter` + - `build` + - N + - Positive integer >0 + - 20 + - Number of kmeans iterations to use when training the ivf clusters + + * - `ratio` + - `build` + - N + - Positive integer >0 + - 2 + - `1/ratio` is the number of training points which should be used to train the clusters. + + * - `dataset_memory_type` + - `build` + - N + - [`device`, `host`, `mmap`] + - `mmap` + - Where should the dataset reside? + + * - `query_memory_type` + - `search` + - [`device`, `host`, `mmap`] + - `device` + - Where should the queries reside? + + * - `nprobe` + - `search` + - Y + - Positive integer >0 + - + - The closest number of clusters to search for each query vector. Larger values will improve recall but will search more points in the index. + +cuvs_ivf_pq +----------- + +IVF-pq is an inverted-file index, which partitions the vectors into a series of clusters, or lists, in a similar way to IVF-flat above. The difference is that IVF-PQ uses product quantization to also compress the vectors, giving the index a smaller memory footprint. Unfortunately, higher levels of compression can also shrink recall, which a refinement step can improve when the original vectors are still available. + +.. list-table:: + + * - Parameter + - Type + - Required + - Data Type + - Default + - Description + + * - `nlist` + - `build` + - Y + - Positive integer >0 + - Number of clusters to partition the vectors into. Larger values will put less points into each cluster but this will impact index build time as more clusters need to be trained. + + * - `niter` + - `build` + - N + - Positive integer >0 + - 20 + - Number of kmeans iterations to use when training the ivf clusters + + * - `ratio` + - `build` + - N + - Positive integer >0 + - 2 + - `1/ratio` is the number of training points which should be used to train the clusters. 
+ + * - `pq_dim` + - `build` + - N + - Positive integer. Multiple of 8. + - 0 + - Dimensionality of the vector after product quantization. When 0, a heuristic is used to select this value. + + * - `pq_bits` + - `build` + - N + - Positive integer [4-8] + - 8 + - Bit length of the vector element after quantization. + + * - `codebook_kind` + - `build` + - N + - [`cluster`, `subspace`] + - `subspace` + - Type of codebook. See :doc:`IVF-PQ index overview <../indexes/ivfpq>` for more detail + + * - `dataset_memory_type` + - `build` + - N + - [`device`, `host`, `mmap`] + - `mmap` + - Where should the dataset reside? + + * - `query_memory_type` + - `search` + - [`device`, `host`, `mmap`] + - `device` + - Where should the queries reside? + + * - `nprobe` + - `search` + - Y + - Positive integer >0 + - + - The closest number of clusters to search for each query vector. Larger values will improve recall but will search more points in the index. + + * - `internalDistanceDtype` + - `search` + - N + - [`float`, `half`] + - `half` + - The precision to use for the distance computations. Lower precision can increase performance at the cost of accuracy. + + * - `smemLutDtype` + - `search` + - N + - [`float`, `half`, `fp8`] + - `half` + - The precision to use for the lookup table in shared memory. Lower precision can increase performance at the cost of accuracy. + + * - `refine_ratio` + - `search` + - N + - Positive integer >0 + - 1 + - `refine_ratio * k` nearest neighbors are queried from the index initially and an additional refinement step improves recall by selecting only the best `k` neighbors. + + +cuvs_cagra +---------- + +CAGRA uses a graph-based index, which creates an intermediate, approximate kNN graph using IVF-PQ and then further refining and optimizing to create a final kNN graph. This kNN graph is used by CAGRA as an index for search. + +.. list-table:: + + * - Parameter + - Type + - Required + - Data Type + - Default + - Description + + * - `graph_degree` + - `build` + - N + - Positive integer >0 + - 64 + - Degree of the final kNN graph index. + + * - `intermediate_graph_degree` + - `build` + - N + - Positive integer >0 + - 128 + - Degree of the intermediate kNN graph before the CAGRA graph is optimized + + * - `graph_build_algo` + - `build` + - `N + - [`IVF_PQ`, NN_DESCENT`] + - `IVF_PQ` + - Algorithm to use for building the initial kNN graph, from which CAGRA will optimize into the navigable CAGRA graph + + * - `dataset_memory_type` + - `build` + - N + - [`device`, `host`, `mmap`] + - `mmap` + - Where should the dataset reside? + + * - `query_memory_type` + - `search` + - [`device`, `host`, `mmap`] + - `device` + - Where should the queries reside? + + * - `itopk` + - `search` + - N + - Positive integer >0 + - 64 + - Number of intermediate search results retained during the search. Higher values improve search accuracy at the cost of speed + + * - `search_width` + - `search` + - N + - Positive integer >0 + - 1 + - Number of graph nodes to select as the starting point for the search in each iteration. + + * - `max_iterations` + - `search` + - N + - Positive integer >=0 + - 0 + - Upper limit of search iterations. Auto select when 0 + + * - `algo` + - `search` + - N + - [`auto`, `single_cta`, `multi_cta`, `multi_kernel`] + - `auto` + - Algorithm to use for search. It's usually best to leave this to `auto`. 
+ + * - `graph_memory_type` + - `search` + - N + - [`device`, `host_pinned`, `host_huge_page`] + - `device` + - Memory type to store graph + + * - `internal_dataset_memory_type` + - `search` + - N + - [`device`, `host_pinned`, `host_huge_page`] + - `device` + - Memory type to store dataset + +The `graph_memory_type` or `internal_dataset_memory_type` options can be useful for large datasets that do not fit the device memory. Setting `internal_dataset_memory_type` other than `device` has negative impact on search speed. Using `host_huge_page` option is only supported on systems with Heterogeneous Memory Management or on platforms that natively support GPU access to system allocated memory, for example Grace Hopper. + +To fine tune CAGRA index building we can customize IVF-PQ index builder options using the following settings. These take effect only if `graph_build_algo == "IVF_PQ"`. It is recommended to experiment using a separate IVF-PQ index to find the config that gives the largest QPS for large batch. Recall does not need to be very high, since CAGRA further optimizes the kNN neighbor graph. Some of the default values are derived from the dataset size which is assumed to be [n_vecs, dim]. + +.. list-table:: + + * - Parameter + - Type + - Required + - Data Type + - Default + - Description + + * - `ivf_pq_build_nlist` + - `build` + - N + - Positive integer >0 + - sqrt(n_vecs) + - Number of clusters to partition the vectors into. Larger values will put less points into each cluster but this will impact index build time as more clusters need to be trained. + + * - `ivf_pq_build_niter` + - `build` + - N + - Positive integer >0 + - 25 + - Number of k-means iterations to use when training the clusters. + + * - `ivf_pq_build_ratio` + - `build` + - N + - Positive integer >0 + - 10 + - `1/ratio` is the number of training points which should be used to train the clusters. + + * - `ivf_pq_pq_dim` + - `build` + - N + - Positive integer. Multiple of 8 + - dim/2 rounded up to 8 + - Dimensionality of the vector after product quantization. When 0, a heuristic is used to select this value. `pq_dim` * `pq_bits` must be a multiple of 8. + + * - `ivf_pq_build_pq_bits` + - `build` + - N + - Positive integer [4-8] + - 8 + - Bit length of the vector element after quantization. + + * - `ivf_pq_build_codebook_kind` + - `build` + - N + - [`cluster`, `subspace`] + - `subspace` + - Type of codebook. See :doc:`IVF-PQ index overview <../indexes/ivfpq>` for more detail + + * - `ivf_pq_build_nprobe` + - `search` + - N + - Positive integer >0 + - min(2*dim, nlist) + - The closest number of clusters to search for each query vector. Larger values will improve recall but will search more points in the index. + + * - `ivf_pq_build_internalDistanceDtype` + - `search` + - N + - [`float`, `half`] + - `half` + - The precision to use for the distance computations. Lower precision can increase performance at the cost of accuracy. + + * - `ivf_pq_build_smemLutDtype` + - `search` + - N + - [`float`, `half`, `fp8`] + - `fp8` + - The precision to use for the lookup table in shared memory. Lower precision can increase performance at the cost of accuracy. + + * - `ivf_pq_build_refine_ratio` + - `search` + - N + - Positive integer >0 + - 2 + - `refine_ratio * k` nearest neighbors are queried from the index initially and an additional refinement step improves recall by selecting only the best `k` neighbors. + +Alternatively, if `graph_build_algo == "NN_DESCENT"`, then we can customize the following parameters + +.. 
list-table:: + + * - Parameter + - Type + - Required + - Data Type + - Default + - Description + + * - `nn_descent_niter` + - `build` + - N + - Positive integer >0 + - 20 + - Number of nn-descent iterations + + * - `nn_descent_intermediate_graph_degree + - `build` + - N + - Positive integer >0 + - `cagra.intermediate_graph_degree` * 1.5 + - Intermadiate graph degree during nn-descent iterations + + * - nn_descent_termination_threshold + - `build` + - N + - Positive float >0 + - 1e-4 + - Early stopping threshold for nn-descent convergence + +cuvs_cagra_hnswlib +------------------ + +This is a benchmark that enables interoperability between `CAGRA` built `HNSW` search. It uses the `CAGRA` built graph as the base layer of an `hnswlib` index to search queries only within the base layer (this is enabled with a simple patch to `hnswlib`). + +`build` : Same as `build` of CAGRA + +`search` : Same as `search` of Hnswlib + +FAISS Indexes +============= + +faiss_gpu_flat +-------------- + +Use FAISS flat index on the GPU, which performs an exact search using brute-force and doesn't have any further build or search parameters. + +faiss_gpu_ivf_flat +------------------ + +IVF-flat uses an inverted-file index, which partitions the vectors into a series of clusters, or lists, storing them in an interleaved format which is optimized for fast distance computation. The searching of an IVF-flat index reduces the total vectors in the index to those within some user-specified nearest clusters called probes. + +IVF-flat is a simple algorithm which won't save any space, but it provides competitive search times even at higher levels of recall. + +.. list-table:: + + * - Parameter + - Type + - Required + - Data Type + - Default + - Description + + * - `nlists` + - `build` + - Y + - Positive integer >0 + - + - Number of clusters to partition the vectors into. Larger values will put less points into each cluster but this will impact index build time as more clusters need to be trained + + * - `ratio` + - `build` + - N + - Positive integer >0 + - 2 + - `1/ratio` is the number of training points which should be used to train the clusters. + + * - `nprobe` + - `search` + - Y + - Positive integer >0 + - + - The closest number of clusters to search for each query vector. Larger values will improve recall but will search more points in the index. + +faiss_gpu_ivf_pq +---------------- + +IVF-pq is an inverted-file index, which partitions the vectors into a series of clusters, or lists, in a similar way to IVF-flat above. The difference is that IVF-PQ uses product quantization to also compress the vectors, giving the index a smaller memory footprint. Unfortunately, higher levels of compression can also shrink recall, which a refinement step can improve when the original vectors are still available. + +.. list-table:: + + * - Parameter + - Type + - Required + - Data Type + - Default + - Description + + * - `nlist` + - `build` + - Y + - Positive integer >0 + - + - Number of clusters to partition the vectors into. Larger values will put less points into each cluster but this will impact index build time as more clusters need to be trained. + + * - `ratio` + - `build` + - N + - Positive integer >0 + - 2 + - `1/ratio` is the number of training points which should be used to train the clusters. + + * - `M_ratio` + - `build` + - Y + - Positive integer. Power of 2 [8-64] + - + - Ratio of numbeer of chunks or subquantizers for each vector. 
Computed by `dims` / `M_ratio` + + * - `usePrecomputed` + - `build` + - N + - Boolean + - `false` + - Use pre-computed lookup tables to speed up search at the cost of increased memory usage. + + * - `useFloat16` + - `build` + - N + - Boolean + - `false` + - Use half-precision floats for clustering step. + + * - `nprobe` + - `search` + - Y + - Positive integer >0 + - + - The closest number of clusters to search for each query vector. Larger values will improve recall but will search more points in the index. + + * - `refine_ratio` + - `search` + - N + - Positive number >=1 + - 1 + - `refine_ratio * k` nearest neighbors are queried from the index initially and an additional refinement step improves recall by selecting only the best `k` neighbors. + + +faiss_cpu_flat +-------------- + +Use FAISS flat index on the CPU, which performs an exact search using brute-force and doesn't have any further build or search parameters. + +.. list-table:: + + * - Parameter + - Type + - Required + - Data Type + - Default + - Description + + * - `numThreads` + - `search` + - N + - Positive integer >0 + - 1 + - Number of threads to use for queries. + +faiss_cpu_ivf_flat +------------------ + +Use FAISS IVF-Flat index on CPU + +.. list-table:: + + * - Parameter + - Type + - Required + - Data Type + - Default + - Description + + * - `nlists` + - `build` + - Y + - Positive integer >0 + - + - Number of clusters to partition the vectors into. Larger values will put less points into each cluster but this will impact index build time as more clusters need to be trained + + * - `ratio` + - `build` + - N + - Positive integer >0 + - 2 + - `1/ratio` is the number of training points which should be used to train the clusters. + + * - `nprobe` + - `search` + - Y + - Positive integer >0 + - + - The closest number of clusters to search for each query vector. Larger values will improve recall but will search more points in the index. + + * - `numThreads` + - `search` + - N + - Positive integer >0 + - 1 + - Number of threads to use for queries. + +faiss_cpu_ivf_pq +---------------- + +Use FAISS IVF-PQ index on CPU + +.. list-table:: + + * - Parameter + - Type + - Required + - Data Type + - Default + - Description + + * - `nlist` + - `build` + - Y + - Positive integer >0 + - + - Number of clusters to partition the vectors into. Larger values will put less points into each cluster but this will impact index build time as more clusters need to be trained. + + * - `ratio` + - `build` + - N + - Positive integer >0 + - 2 + - `1/ratio` is the number of training points which should be used to train the clusters. + + * - `M` + - `build` + - Y + - Positive integer. Power of 2 [8-64] + - + - Ratio of number of chunks or subquantizers for each vector. Computed by `dims` / `M_ratio` + + * - `usePrecomputed` + - `build` + - N + - Boolean + - `false` + - Use pre-computed lookup tables to speed up search at the cost of increased memory usage. + + * - `bitsPerCode` + - `build` + - N + - Positive integer [4-8] + - 8 + - Number of bits for representing each quantized code. + + * - `nprobe` + - `search` + - Y + - Positive integer >0 + - + - The closest number of clusters to search for each query vector. Larger values will improve recall but will search more points in the index. + + * - `refine_ratio` + - `search` + - N + - Positive number >=1 + - 1 + - `refine_ratio * k` nearest neighbors are queried from the index initially and an additional refinement step improves recall by selecting only the best `k` neighbors. 
+ + * - `numThreads` + - `search` + - N + - Positive integer >0 + - 1 + - Number of threads to use for queries. + +HNSW +==== + +hnswlib +------- + +.. list-table:: + + * - Parameter + - Type + - Required + - Data Type + - Default + - Description + + * - `efConstruction` + - `build` + + - + + +| Parameter | Type | Required | Data Type | Default | Description | +|------------------|-----------|----------|--------------------------------------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `efConstruction` | `build` | Y | Positive Integer >0 | | Controls index time and accuracy. Bigger values increase the index quality. At some point, increasing this will no longer improve the quality. | +| `M` | `build` | Y | Positive Integer often between 2-100 | | Number of bi-directional links create for every new element during construction. Higher values work for higher intrinsic dimensionality and/or high recall, low values can work for datasets with low intrinsic dimensionality and/or low recalls. Also affects the algorithm's memory consumption. | +| `numThreads` | `build` | N | Positive Integer >0 | 1 | Number of threads to use to build the index. | +| `ef` | `search` | Y | Positive Integer >0 | | Size of the dynamic list for the nearest neighbors used for search. Higher value leads to more accurate but slower search. Cannot be lower than `k`. | +| `numThreads` | `search` | N | Positive Integer >0 | 1 | Number of threads to use for queries. | + +Please refer to `HNSW algorithm parameters guide `_ from `hnswlib` to learn more about these arguments. \ No newline at end of file diff --git a/docs/source/cuvs_bench/wiki_all_dataset.rst b/docs/source/cuvs_bench/wiki_all_dataset.rst new file mode 100644 index 000000000..04ac7d9a4 --- /dev/null +++ b/docs/source/cuvs_bench/wiki_all_dataset.rst @@ -0,0 +1,55 @@ +~~~~~~~~~~~~~~~~ +Wiki-all Dataset +~~~~~~~~~~~~~~~~ + + +The `wiki-all` dataset was created to stress vector search algorithms at scale with both a large number of vectors and dimensions. The entire dataset contains 88M vectors with 768 dimensions and is meant for testing the types of vectors one would typically encounter in retrieval augmented generation (RAG) workloads. The full dataset is ~251GB in size, which is intentionally larger than the typical memory of GPUs. The massive scale is intended to promote the use of compression and efficient out-of-core methods for both indexing and search. + +The dataset is composed of English wiki texts from `Kaggle `_ and multi-lingual wiki texts from `Cohere Wikipedia `_. + +Cohere's English Texts are older (2022) and smaller than the Kaggle English Wiki texts (2023) so the English texts have been removed from Cohere completely. The final Wiki texts include English Wiki from Kaggle and the other languages from Cohere. The English texts constitute 50% of the total text size. + +To form the final dataset, the Wiki texts were chunked into 85 million 128-token pieces. For reference, Cohere chunks Wiki texts into 104-token pieces. Finally, the embeddings of each chunk were computed using the `paraphrase-multilingual-mpnet-base-v2 `_ embedding model. The resulting dataset is an embedding matrix of size 88 million by 768. 
Also included with the dataset is a query file containing 10k query vectors and a groundtruth file to evaluate nearest neighbors algorithms. + +Getting the dataset +=================== + +Full dataset +------------ + +A version of the dataset is made available in the binary format that can be used directly by the :doc:`cuvs-bench ` tool. The full 88M dataset is ~251GB and the download link below contains tarballs that have been split into multiple parts. + +The following will download all 10 the parts and untar them to a `wiki_all_88M` directory: + +.. code-block:: bash + curl -s https://data.rapids.ai/raft/datasets/wiki_all/wiki_all.tar.{00..9} | tar -xf - -C wiki_all_88M/ + +The above has the unfortunate drawback that if the command should fail for any reason, all the parts need to be re-downloaded. The files can also be downloaded individually and then untarred to the directory. Each file is ~27GB and there are 10 of them. + +.. code-block:: bash + + curl -s https://data.rapids.ai/raft/datasets/wiki_all/wiki_all.tar.00 + ... + curl -s https://data.rapids.ai/raft/datasets/wiki_all/wiki_all.tar.09 + + cat wiki_all.tar.* | tar -xf - -C wiki_all_88M/ + +1M and 10M subsets +------------------ + +Also available are 1M and 10M subsets of the full dataset which are 2.9GB and 29GB, respectively. These subsets also include query sets of 10k vectors and corresponding groundtruth files. + +.. code-block:: bash + + curl -s https://data.rapids.ai/raft/datasets/wiki_all_1M/wiki_all_1M.tar + curl -s https://data.rapids.ai/raft/datasets/wiki_all_10M/wiki_all_10M.tar + +Using the dataset +================= + +After the dataset is downloaded and extracted to the `wiki_all_88M` directory (or `wiki_all_1M`/`wiki_all_10M` depending on whether the subsets are used), the files can be used in the benchmarking tool. The dataset name is `wiki_all` (or `wiki_all_1M`/`wiki_all_10M`), and the benchmarking tool can be used by specifying the appropriate name `--dataset wiki_all_88M` in the scripts. + +License info +============ + +The English wiki texts available on Kaggle come with the `CC BY-NCSA 4.0 `_ license and the Cohere wikipedia data set comes with the `Apache 2.0 `_ license. \ No newline at end of file From 55f11ee70c7df4bf2dfb4c49898fe0fbc2e07577 Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Wed, 2 Oct 2024 17:50:28 -0400 Subject: [PATCH 12/22] MOre updates --- docs/source/cuvs_bench/param_tuning.rst | 35 ++++++++++++++++++++----- docs/source/indexes/bruteforce.rst | 5 ---- 2 files changed, 28 insertions(+), 12 deletions(-) diff --git a/docs/source/cuvs_bench/param_tuning.rst b/docs/source/cuvs_bench/param_tuning.rst index 8c247c5cb..faffa9daf 100644 --- a/docs/source/cuvs_bench/param_tuning.rst +++ b/docs/source/cuvs_bench/param_tuning.rst @@ -638,16 +638,37 @@ hnswlib * - `efConstruction` - `build` + - Y + - Positive integer >0 + - + - Controls index time and accuracy. Bigger values increase the index quality. At some point, increasing this will no longer improve the quality. + * - `M` + - `build` + - Y + - Positive integer. Often between 2-100 - + - umber of bi-directional links create for every new element during construction. Higher values work for higher intrinsic dimensionality and/or high recall, low values can work for datasets with low intrinsic dimensionality and/or low recalls. Also affects the algorithm's memory consumption. + + * - `numThreads` + - `build` + - N + - Positive integer >0 + - 1 + - Number of threads to use to build the index. 
+ * - `ef` + - `search` + - Y + - Positive integer >0 + - + - Size of the dynamic list for the nearest neighbors used for search. Higher value leads to more accurate but slower search. Cannot be lower than `k`. -| Parameter | Type | Required | Data Type | Default | Description | -|------------------|-----------|----------|--------------------------------------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `efConstruction` | `build` | Y | Positive Integer >0 | | Controls index time and accuracy. Bigger values increase the index quality. At some point, increasing this will no longer improve the quality. | -| `M` | `build` | Y | Positive Integer often between 2-100 | | Number of bi-directional links create for every new element during construction. Higher values work for higher intrinsic dimensionality and/or high recall, low values can work for datasets with low intrinsic dimensionality and/or low recalls. Also affects the algorithm's memory consumption. | -| `numThreads` | `build` | N | Positive Integer >0 | 1 | Number of threads to use to build the index. | -| `ef` | `search` | Y | Positive Integer >0 | | Size of the dynamic list for the nearest neighbors used for search. Higher value leads to more accurate but slower search. Cannot be lower than `k`. | -| `numThreads` | `search` | N | Positive Integer >0 | 1 | Number of threads to use for queries. | + * - `numThreads` + - `search` + - N + - Positive integer >0 + - 1 + - Number of threads to use for queries. Please refer to `HNSW algorithm parameters guide `_ from `hnswlib` to learn more about these arguments. \ No newline at end of file diff --git a/docs/source/indexes/bruteforce.rst b/docs/source/indexes/bruteforce.rst index 97b4b85d5..0bd17dbf1 100644 --- a/docs/source/indexes/bruteforce.rst +++ b/docs/source/indexes/bruteforce.rst @@ -60,8 +60,3 @@ Index footprint Raw vectors: :math:`n_vectors * n_dimensions * precision` Vector norms (for distances which require them): :math:`n_vectors * precision` - -Search footprint -~~~~~~~~~~~~~~~~ - -TBD \ No newline at end of file From 8c1fa13c7022db62cae18c3a60993e51c23bd067 Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Wed, 2 Oct 2024 18:11:57 -0400 Subject: [PATCH 13/22] Updates --- docs/source/tuning_guide.rst | 20 ++++++-------------- 1 file changed, 6 insertions(+), 14 deletions(-) diff --git a/docs/source/tuning_guide.rst b/docs/source/tuning_guide.rst index adba53958..7a0987a44 100644 --- a/docs/source/tuning_guide.rst +++ b/docs/source/tuning_guide.rst @@ -4,25 +4,17 @@ Tuning Guide ~~~~~~~~~~~~ -A Method for tuning and evaluating Vector Search Indexes At Scale in Locally Indexed Vector Databases +A Method for tuning and evaluating Vector Search Indexes At Scale in Locally Indexed Vector Databases. For more information on the differences between locally and globally indexed vector databases, please see :doc:`this guide `. The goal of this guide is to give users a scalable and effective approach for tuning a vector search index, no matter how large. Evaluation of a vector search index “model” that measures recall in proportion to build time so that it penalizes the recall when the build time is really high (should ultimately optimize for finding a lower build time and higher recall). 
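To make the idea of "recall penalized by build time" concrete, below is a minimal, self-contained sketch of such an evaluation. It is not taken from the cuVS codebase: the toy `evaluate()` function stands in for building and searching a real index (its `sample_fraction` knob plays the role of an index hyper-parameter), and the `alpha` weight in `penalized_score()` is an arbitrary choice that would need to match your own build-time budget. The same scoring function could be dropped into a hyper-parameter optimizer such as Optuna or Ray Tune.

.. code-block:: python

    import time

    import numpy as np

    rng = np.random.default_rng(0)
    dataset = rng.standard_normal((50_000, 64), dtype=np.float32)
    queries = rng.standard_normal((100, 64), dtype=np.float32)
    k = 10

    def exact_knn(data, q, k):
        # Exact k-nearest neighbors via the ||a-b||^2 = ||a||^2 - 2ab + ||b||^2 expansion.
        dists = (q * q).sum(1)[:, None] - 2.0 * (q @ data.T) + (data * data).sum(1)[None, :]
        return np.argsort(dists, axis=1)[:, :k]

    ground_truth = exact_knn(dataset, queries, k)

    def recall_at_k(found, truth):
        # Fraction of the true neighbors that were recovered, averaged over queries.
        return np.mean([len(set(f) & set(t)) / len(t) for f, t in zip(found, truth)])

    def evaluate(sample_fraction):
        # Toy stand-in for "build an index and search it": searching only a random
        # subset of the data is cheaper but less accurate, mimicking an ANN trade-off.
        start = time.perf_counter()
        n = int(len(dataset) * sample_fraction)
        subset = rng.choice(len(dataset), size=n, replace=False)
        found = subset[exact_knn(dataset[subset], queries, k)]
        elapsed = time.perf_counter() - start
        return recall_at_k(found, ground_truth), elapsed

    def penalized_score(recall, build_time_s, alpha=0.5):
        # Reward recall, but penalize configurations that take a long time to build/search.
        return recall - alpha * build_time_s

    for fraction in (0.1, 0.5, 1.0):
        recall, seconds = evaluate(fraction)
        print(f"fraction={fraction:.1f} recall={recall:.3f} "
              f"time={seconds:.2f}s score={penalized_score(recall, seconds):.3f}")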
-Objective -========= +For more information on the various different types of vector search indexes, please see our :doc:`guide to choosing vector search indexes ` -Give uswrs an approach for tuning a vector search index. Evaluation of a vector search index “model” that measures recall in proportion to build time so that it penalizes the recall when the build time is really high (should ultimately optimize for finding a lower build time and higher recall). +As much as 75% of users have told us they will not be able to tune a vector database beyond one or two simple knobs and we suggest that an ideal “knob” would be to balance training time and search time with search quality. The more time, the higher the quality, and the more needed to find an acceptable search performance. Even the 25% of users that want to tune are still asking for simple tools for doing so. These users also ask for some simple guidelines for setting tuning parameters, like :doc:`this guide `. -Output -====== -An example notebook which can be released in cuVS as an example for tuning an index, especially for CAGRA. +Since vector search indexes are more closely related to machine learning models than traditional databases indexes, one option for easing the parameter tuning burden is to use hyper-parameter optimization tools like `Ray Tune `_ and `Optuna `_. to verify this. -Background -========== +But how would this work when we have an index that's massively large- like 1TB? -Vector databases 101: Configuring Vector Search Indexes - -Many customers (Specifically AWS and Google) have told us that >75% of their users will not be able to tune a vector database beyond one or two simple knobs. They suggest that an ideal “knob” would be to balance training time with search quality. The more time, the higher the quality. For the <25% that wants to tune, they’ve asked for simple tools for tuning. They also ask for some simple guidelines for setting tuning parameters. -Strategy -Ray-tune and our Python APIs could be an option to verify this. We could write a notebook that takes some small subsampling from a dataset and does a parameter search on it. Then we actually evaluate random queries against the ground truth to test that the index params actually generalized well (I'm confident they will). +One benefit to locally indexed vector databases is that they tend to scale by breaking a larger set of vectors down into smaller small subsampling from a dataset and does a parameter search on it. Then we actually evaluate random queries against the ground truth to test that the index params actually generalized well (I'm confident they will). Getting Started with Optuna and RAPIDS for HPO — RAPIDS Deployment Documentation documentation From 59bc6bd3f71357694c1113e3e634a40dac69e7f4 Mon Sep 17 00:00:00 2001 From: Tamas Bela Feher Date: Thu, 3 Oct 2024 05:09:08 +0200 Subject: [PATCH 14/22] Use 64 bit types for dataset size calculation in CAGRA graph optimizer (#380) This PR fixes #375. Authors: - Tamas Bela Feher (https://github.com/tfeher) Approvers: - Corey J. 
Nolet (https://github.com/cjnolet) URL: https://github.com/rapidsai/cuvs/pull/380 --- cpp/src/neighbors/detail/cagra/graph_core.cuh | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/cpp/src/neighbors/detail/cagra/graph_core.cuh b/cpp/src/neighbors/detail/cagra/graph_core.cuh index 9edbbf5c1..43bf1ba2b 100644 --- a/cpp/src/neighbors/detail/cagra/graph_core.cuh +++ b/cpp/src/neighbors/detail/cagra/graph_core.cuh @@ -475,12 +475,12 @@ void sort_knn_graph( { RAFT_EXPECTS(dataset.extent(0) == knn_graph.extent(0), "dataset size is expected to have the same number of graph index size"); - const uint32_t dataset_size = dataset.extent(0); - const uint32_t dataset_dim = dataset.extent(1); + const uint64_t dataset_size = dataset.extent(0); + const uint64_t dataset_dim = dataset.extent(1); const DataT* dataset_ptr = dataset.data_handle(); const IdxT graph_size = dataset_size; - const uint32_t input_graph_degree = knn_graph.extent(1); + const uint64_t input_graph_degree = knn_graph.extent(1); IdxT* const input_graph_ptr = knn_graph.data_handle(); auto large_tmp_mr = raft::resource::get_large_workspace_resource(res); @@ -528,7 +528,7 @@ void sort_knn_graph( kernel_sort = kern_sort; } else { RAFT_FAIL( - "The degree of input knn graph is too large (%u). " + "The degree of input knn graph is too large (%lu). " "It must be equal to or smaller than %d.", input_graph_degree, 1024); From 562997733b4f007829b46a083b092d4780792ced Mon Sep 17 00:00:00 2001 From: Ben Frederickson Date: Thu, 3 Oct 2024 06:06:28 -0700 Subject: [PATCH 15/22] Add a static library for cuvs (#382) For the cuml integration, we need to be able to statically link to cuvs in order build the python wheels. This change adds a static target that lets us do that Authors: - Ben Frederickson (https://github.com/benfred) Approvers: - Corey J. 
Nolet (https://github.com/cjnolet) URL: https://github.com/rapidsai/cuvs/pull/382 --- cpp/CMakeLists.txt | 76 ++++++++++++++++++++++++++++++++++++++--- cpp/test/CMakeLists.txt | 4 +++ 2 files changed, 75 insertions(+), 5 deletions(-) diff --git a/cpp/CMakeLists.txt b/cpp/CMakeLists.txt index b05030cef..6f5178251 100644 --- a/cpp/CMakeLists.txt +++ b/cpp/CMakeLists.txt @@ -288,7 +288,7 @@ target_compile_options( ) add_library( - cuvs SHARED + cuvs_objs OBJECT src/cluster/kmeans_balanced_fit_float.cu src/cluster/kmeans_fit_mg_float.cu src/cluster/kmeans_fit_mg_double.cu @@ -436,12 +436,67 @@ add_library( src/stats/trustworthiness_score.cu ) +set_target_properties( + cuvs_objs + PROPERTIES CXX_STANDARD 17 + CXX_STANDARD_REQUIRED ON + CUDA_STANDARD 17 + CUDA_STANDARD_REQUIRED ON + POSITION_INDEPENDENT_CODE ON +) +target_compile_options( + cuvs_objs PRIVATE "$<$:${CUVS_CXX_FLAGS}>" + "$<$:${CUVS_CUDA_FLAGS}>" +) +target_link_libraries( + cuvs_objs PUBLIC raft::raft rmm::rmm ${CUVS_CTK_MATH_DEPENDENCIES} + $ +) + +add_library(cuvs SHARED $) +add_library(cuvs_static STATIC $) + target_compile_options( cuvs INTERFACE $<$:--expt-extended-lambda --expt-relaxed-constexpr> ) add_library(cuvs::cuvs ALIAS cuvs) +add_library(cuvs::cuvs_static ALIAS cuvs_static) + +set_target_properties( + cuvs_static + PROPERTIES BUILD_RPATH "\$ORIGIN" + INSTALL_RPATH "\$ORIGIN" + CXX_STANDARD 17 + CXX_STANDARD_REQUIRED ON + POSITION_INDEPENDENT_CODE ON + INTERFACE_POSITION_INDEPENDENT_CODE ON + EXPORT_NAME cuvs_static +) + +target_compile_options(cuvs_static PRIVATE "$<$:${CUVS_CXX_FLAGS}>") + +target_include_directories( + cuvs_objs + PUBLIC "$" + "$" + INTERFACE "$" +) + +target_include_directories( + cuvs_static + PUBLIC "$" + INTERFACE "$" +) + +# ensure CUDA symbols aren't relocated to the middle of the debug build binaries +target_link_options(cuvs_static PRIVATE $) + +target_include_directories( + cuvs_static PUBLIC "$" + "$" +) target_include_directories( cuvs PUBLIC "$" @@ -471,11 +526,17 @@ if(NOT BUILD_CPU_ONLY) PUBLIC rmm::rmm raft::raft ${CUVS_CTK_MATH_DEPENDENCIES} PRIVATE nvidia::cutlass::cutlass $ cuvs-cagra-search ) + + target_link_libraries( + cuvs_static + PUBLIC rmm::rmm raft::raft ${CUVS_CTK_MATH_DEPENDENCIES} + PRIVATE nvidia::cutlass::cutlass $ cuvs-cagra-search + ) endif() if(BUILD_CAGRA_HNSWLIB) - target_link_libraries(cuvs PRIVATE hnswlib::hnswlib) - target_compile_definitions(cuvs PUBLIC CUVS_BUILD_CAGRA_HNSWLIB) + target_link_libraries(cuvs_objs PRIVATE hnswlib::hnswlib) + target_compile_definitions(cuvs_objs PUBLIC CUVS_BUILD_CAGRA_HNSWLIB) endif() # Endian detection @@ -557,11 +618,16 @@ if(BUILD_C_LIBRARY) src/neighbors/ivf_flat_c.cpp src/neighbors/ivf_pq_c.cpp src/neighbors/cagra_c.cpp - src/neighbors/hnsw_c.cpp + $<$:src/neighbors/hnsw_c.cpp> src/neighbors/refine/refine_c.cpp src/distance/pairwise_distance_c.cpp ) + if(BUILD_CAGRA_HNSWLIB) + target_link_libraries(cuvs_c PRIVATE hnswlib::hnswlib) + target_compile_definitions(cuvs_c PUBLIC CUVS_BUILD_CAGRA_HNSWLIB) + endif() + add_library(cuvs::c_api ALIAS cuvs_c) set_target_properties( @@ -600,7 +666,7 @@ include(GNUInstallDirs) include(CPack) install( - TARGETS cuvs + TARGETS cuvs cuvs_static cuvs-cagra-search DESTINATION ${lib_dir} COMPONENT cuvs EXPORT cuvs-exports diff --git a/cpp/test/CMakeLists.txt b/cpp/test/CMakeLists.txt index 58cfc3862..bd07bebee 100644 --- a/cpp/test/CMakeLists.txt +++ b/cpp/test/CMakeLists.txt @@ -174,6 +174,8 @@ if(BUILD_TESTS) if(BUILD_CAGRA_HNSWLIB) ConfigureTest(NAME NEIGHBORS_HNSW_TEST PATH 
neighbors/hnsw.cu GPUS 1 PERCENT 100) + target_link_libraries(NEIGHBORS_HNSW_TEST PRIVATE hnswlib::hnswlib) + target_compile_definitions(NEIGHBORS_HNSW_TEST PUBLIC CUVS_BUILD_CAGRA_HNSWLIB) endif() ConfigureTest( @@ -227,6 +229,8 @@ if(BUILD_C_TESTS) if(BUILD_CAGRA_HNSWLIB) ConfigureTest(NAME HNSW_C_TEST PATH neighbors/ann_hnsw_c.cu C_LIB) + target_link_libraries(NEIGHBORS_HNSW_TEST PRIVATE hnswlib::hnswlib) + target_compile_definitions(NEIGHBORS_HNSW_TEST PUBLIC CUVS_BUILD_CAGRA_HNSWLIB) endif() endif() From 31cd39bacab90555f88b1a0afed332b45eff70c1 Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Thu, 3 Oct 2024 09:57:25 -0400 Subject: [PATCH 16/22] Finishing tuning guide write-up --- docs/source/tuning_guide.rst | 35 ++++++++++++++++++++++++----------- 1 file changed, 24 insertions(+), 11 deletions(-) diff --git a/docs/source/tuning_guide.rst b/docs/source/tuning_guide.rst index 7a0987a44..1526f3400 100644 --- a/docs/source/tuning_guide.rst +++ b/docs/source/tuning_guide.rst @@ -12,19 +12,32 @@ As much as 75% of users have told us they will not be able to tune a vector data Since vector search indexes are more closely related to machine learning models than traditional databases indexes, one option for easing the parameter tuning burden is to use hyper-parameter optimization tools like `Ray Tune `_ and `Optuna `_. to verify this. -But how would this work when we have an index that's massively large- like 1TB? +:italic:`But how would this work when we have an index that's massively large- like 1TB?` -One benefit to locally indexed vector databases is that they tend to scale by breaking a larger set of vectors down into smaller small subsampling from a dataset and does a parameter search on it. Then we actually evaluate random queries against the ground truth to test that the index params actually generalized well (I'm confident they will). +One benefit to locally indexed vector databases is that they often scale by breaking the larger set of vectors down into a smaller set by uniformly random subsampling and training smaller vector search index models on the sub-samples. Most often, the same set of tuning parameters are applied to all of the smaller sub-index models, rather than trying to set them individually for each one. During search, the query vectors are often sent to all of the sub-indexes and the resulting neighbors list reduced down to `k` based on the closest distances (or similarities). -Getting Started with Optuna and RAPIDS for HPO — RAPIDS Deployment Documentation documentation +Because many databases use this sub-sampling trick, it's possible to perform an automated parameter tuning on the larger index just by randomly samplnig some number of vectors from it, splitting them into disjoint train/test/eval datasets, computing ground truth with brute-force, and then performing a hyper-parameter optimization on it. This procedure can also be repeated multiple times to simulate a monte-carlo cross validation. -Ray tune / Optune should allow us to plug in cuvs' Python API trivially and then we just specify a bunch of params to tune and let it go to town- this would ideally be done on a multi-node multi-GPU setup where we can try 10's of combinations at once, starting with "empirical heuristics" as defaults and iterate through something like a bayesian optimizer to find the best params. 
+GPUs are naturally great at performing massively parallel tasks, especially when they are largely independent tasks, such as training and evaluating models with different hyper-parameter settings in parallel. Hyper-parameter optimization also lends itself well to distributed processing, such as multi-node multi-GPU operation. -#. Generate a dataset with a reasonable number of vectors (say 10Mx768) -#. Subsample from the population uniformly, let's say 10% of that (1M vectors) -#. Subsample from the population uniformly, let's say 1% of the 1M vectors from the prior step, this is a validation set. -#. Compute ground truth on the vectors from prior step against all 10M vectors -#. Start tuning process for the 1M vectors from step 2 using the vectors from step 3 as the query set -#. Using the ideal params that provide the target objective (e.g. build vs quality), ingest all 10M vectors into the database and create an index. -#. Query the vectors from the database and calculate the recall. Verify it's close to the recall from the model params chosen in 5 (within some small epsilon). . +More formally, an automated parameter tuning workflow with monte-carlo cross-validaton looks likes something like this: +#. Ingest a large dataset into the vector database of your choice + +#. Choose an index size based on number of vectors. This should usually align with the average number of vectors the database will end up putting in a single ANN sub-index model. + +#. Uniformly random sample the number of vectors specified above from the database for a training set. This is often accomplished by generating some number of random (unique) numbers up to the dataset size. + +#. Uniformly sample some number of vectors for a test set and do this again for an evaluation set. 1-10% of the vectors in the training set. + +#. Use the test set to compute ground truth on the vectors from prior step against all vectors in the training set. + +#. Start the HPO tuning process for the training set, using the test vectors for the query set. It's important to make sure your HPO is multi-objective and optimizes for: a) low build time, b) high throughput or low latency sarch (depending on needs), and c) acceptable recall. + +#. Use the evaluation dataset to test that the optimal hyper-parameters generalize to unseen points that were not used in the optimization process. + +#. Optionally, the above steps multiple times on different uniform sub-samplings. Optimal parameters can then be combined over the multiple monte-optimization iterations. For example, many hyper-parameters can simply be averaged but care might need to be taken for other parameters. + +#. Create a new index in the database using the ideal params from above that meet the target constraints (e.g. build vs search vs quality) + +By the end of this process, you should have a set of parameters that meet your target constraints while demonstrating how well the optimal hyper-parameters generalize across the dataset. The major benefit to this approach is that it breaks a potentially unbounded dataset size down into manageable chunks and accelerates tuning on those chunks. We see this process as a major value add for vector search on the GPU. From d9ef4520b0f8065786d407193de831c2324a5b12 Mon Sep 17 00:00:00 2001 From: "Corey J. 
Nolet" Date: Thu, 3 Oct 2024 09:59:08 -0400 Subject: [PATCH 17/22] More info --- docs/source/tuning_guide.rst | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/docs/source/tuning_guide.rst b/docs/source/tuning_guide.rst index 1526f3400..5d377ba99 100644 --- a/docs/source/tuning_guide.rst +++ b/docs/source/tuning_guide.rst @@ -1,18 +1,25 @@ -.. _tuning_guide: +~~~~~~~~~~~~~~~~~~~~~~ +Automated tuning Guide +~~~~~~~~~~~~~~~~~~~~~~ -~~~~~~~~~~~~ -Tuning Guide -~~~~~~~~~~~~ +Introduction +============ A Method for tuning and evaluating Vector Search Indexes At Scale in Locally Indexed Vector Databases. For more information on the differences between locally and globally indexed vector databases, please see :doc:`this guide `. The goal of this guide is to give users a scalable and effective approach for tuning a vector search index, no matter how large. Evaluation of a vector search index “model” that measures recall in proportion to build time so that it penalizes the recall when the build time is really high (should ultimately optimize for finding a lower build time and higher recall). For more information on the various different types of vector search indexes, please see our :doc:`guide to choosing vector search indexes ` +Why automated tuning? +===================== + As much as 75% of users have told us they will not be able to tune a vector database beyond one or two simple knobs and we suggest that an ideal “knob” would be to balance training time and search time with search quality. The more time, the higher the quality, and the more needed to find an acceptable search performance. Even the 25% of users that want to tune are still asking for simple tools for doing so. These users also ask for some simple guidelines for setting tuning parameters, like :doc:`this guide `. Since vector search indexes are more closely related to machine learning models than traditional databases indexes, one option for easing the parameter tuning burden is to use hyper-parameter optimization tools like `Ray Tune `_ and `Optuna `_. to verify this. -:italic:`But how would this work when we have an index that's massively large- like 1TB?` +How to tune? +============ + +But how would this work when we have an index that's massively large- like 1TB? One benefit to locally indexed vector databases is that they often scale by breaking the larger set of vectors down into a smaller set by uniformly random subsampling and training smaller vector search index models on the sub-samples. Most often, the same set of tuning parameters are applied to all of the smaller sub-index models, rather than trying to set them individually for each one. During search, the query vectors are often sent to all of the sub-indexes and the resulting neighbors list reduced down to `k` based on the closest distances (or similarities). @@ -20,6 +27,9 @@ Because many databases use this sub-sampling trick, it's possible to perform an GPUs are naturally great at performing massively parallel tasks, especially when they are largely independent tasks, such as training and evaluating models with different hyper-parameter settings in parallel. Hyper-parameter optimization also lends itself well to distributed processing, such as multi-node multi-GPU operation. +Steps to achieve automated tuning +================================= + More formally, an automated parameter tuning workflow with monte-carlo cross-validaton looks likes something like this: #. 
Ingest a large dataset into the vector database of your choice @@ -40,4 +50,7 @@ More formally, an automated parameter tuning workflow with monte-carlo cross-val #. Create a new index in the database using the ideal params from above that meet the target constraints (e.g. build vs search vs quality) +Conclusion +========== + By the end of this process, you should have a set of parameters that meet your target constraints while demonstrating how well the optimal hyper-parameters generalize across the dataset. The major benefit to this approach is that it breaks a potentially unbounded dataset size down into manageable chunks and accelerates tuning on those chunks. We see this process as a major value add for vector search on the GPU. From 325d718953315a8727800f134ec23556f004de0a Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Thu, 3 Oct 2024 10:18:35 -0400 Subject: [PATCH 18/22] Updating build --- docs/source/build.rst | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/source/build.rst b/docs/source/build.rst index ba0caafca..fbeffc443 100644 --- a/docs/source/build.rst +++ b/docs/source/build.rst @@ -38,21 +38,21 @@ C, C++, and Python through Conda The easiest way to install the pre-compiled C, C++, and Python packages is through conda. You can get a minimal conda installation with `miniforge `__. -Use the following commands, depending on your CUDA version, to install cuVS packages (replace `rapidsai` with `rapidsai-nightly` to install more up-to-date but less stable nightly packages). `mamba` is preferred over the `conda` command. +Use the following commands, depending on your CUDA version, to install cuVS packages (replace `rapidsai` with `rapidsai-nightly` to install more up-to-date but less stable nightly packages). `mamba` is preferred over the `conda` command and can be enabled using `this guide `_. C/C++ Package ~~~~~~~~~~~~~ .. code-block:: bash - mamba install -c rapidsai -c conda-forge -c nvidia libcuvs cuda-version=12.5 + conda install -c rapidsai -c conda-forge -c nvidia libcuvs cuda-version=12.5 Python Package ~~~~~~~~~~~~~~ .. code-block:: bash - mamba install -c rapidsai -c conda-forge -c nvidia cuvs cuda-version=12.5 + conda install -c rapidsai -c conda-forge -c nvidia cuvs cuda-version=12.5 Python through Pip ^^^^^^^^^^^^^^^^^^ @@ -97,15 +97,15 @@ Conda environment scripts are provided for installing the necessary dependencies .. code-block:: bash - mamba env create --name cuvs -f conda/environments/all_cuda-125_arch-x86_64.yaml - mamba activate cuvs + conda env create --name cuvs -f conda/environments/all_cuda-125_arch-x86_64.yaml + conda activate cuvs The process for building from source with CUDA 11 differs slightly in that your host system will also need to have CUDA toolkit installed which is greater than, or equal to, the version you install into you conda environment. Installing CUDA toolkit into your host system is necessary because `nvcc` is not provided with Conda's cudatoolkit dependencies for CUDA 11. The following example will install create and install dependencies for a CUDA 11.8 conda environment .. code-block:: bash - mamba env create --name cuvs -f conda/environments/all_cuda-118_arch-x86_64.yaml - mamba activate cuvs + conda env create --name cuvs -f conda/environments/all_cuda-118_arch-x86_64.yaml + conda activate cuvs The recommended way to build and install cuVS from source is to use the `build.sh` script in the root of the repository. 
This script can build both the C++ and Python artifacts and provides CMake options for building and installing the headers, tests, benchmarks, and the pre-compiled shared library. From 4d2d305bbc4aa5536979349295e6be682574ca8d Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Thu, 3 Oct 2024 10:34:56 -0400 Subject: [PATCH 19/22] Updating readme --- README.md | 96 +++++++++++++------------------------------------------ 1 file changed, 22 insertions(+), 74 deletions(-) diff --git a/README.md b/README.md index e697c61ed..a803e2373 100755 --- a/README.md +++ b/README.md @@ -1,11 +1,7 @@ #
 cuVS: Vector Search and Clustering on the GPU
> [!note] -> cuVS is a new library mostly derived from the approximate nearest neighbors and clustering algorithms in the [RAPIDS RAFT](https://github.com/rapidsai/raft) library of data mining primitives. RAPIDS RAFT currently contains the most fully-featured versions of the approximate nearest neighbors and clustering algorithms in cuVS. We are in the process of migrating the algorithms from RAFT to cuVS, but if you are unsure of which to use, please consider the following: -> 1. RAFT contains C++ and Python APIs for all of the approximate nearest neighbors and clustering algorithms. -> 2. cuVS contains a growing support for different languages, including C, C++, Python, and Rust. We will be adding more language support to cuVS in the future but will not be improving the language support for RAFT. -> 3. Once all of RAFT's approximate nearest neighbors and clustering algorithms are moved to cuVS, the RAFT APIs will be deprecated and eventually removed altogether. Once removed, RAFT will become a lightweight header-only library. In the meantime, there's no harm in using RAFT if support for additional languages is not needed. - +> cuVS is a new library mostly derived from the approximate nearest neighbors and clustering algorithms in the [RAPIDS RAFT](https://github.com/rapidsai/raft) library of machine learning and data mining primitives. As of version 24.10 (Release in October 2024), cuVS contains the most fully-featured versions of the approximate nearest neighbors and clustering algorithms from RAFT. The algorithms which have been migrated over to cuVS will be removed from RAFT in version 24.12 (released in December 2024). ## Contents @@ -18,10 +14,10 @@ ## Useful Resources +- [Build and Install Guide](https://docs.rapids.ai/api/cuvs/nightly/build): Instructions for installing and building cuVS. +- [Getting Started Guide](https://docs.rapids.ai/api/cuvs/nightly/getting_started): Guide to getting started with cuVS. - [Code Examples](https://github.com/rapidsai/cuvs/tree/HEAD/examples): Self-contained Code Examples. - [API Reference Documentation](https://docs.rapids.ai/api/cuvs/nightly/api_docs): API Documentation. -- [Getting Started Guide](https://docs.rapids.ai/api/cuvs/nightly/getting_started): Getting started with RAFT. -- [Build and Install Guide](https://docs.rapids.ai/api/cuvs/nightly/build): Instructions for installing and building cuVS. - [RAPIDS Community](https://rapids.ai/community.html): Get help, contribute, and collaborate. - [GitHub repository](https://github.com/rapidsai/cuvs): Download the cuVS source code. - [Issue tracker](https://github.com/rapidsai/cuvs/issues): Report issues or request features. @@ -32,32 +28,32 @@ cuVS contains state-of-the-art implementations of several algorithms for running ## Installing cuVS -cuVS comes with pre-built packages that can be installed through [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html#managing-python). Different packages are available for the different languages supported by cuVS: +cuVS comes with pre-built packages that can be installed through [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html#managing-python) and [pip](https://pip.pypa.io/en/stable/). 
Different packages are available for the different languages supported by cuVS: -| Python | C/C++ | -|--------|-----------------------------| -| `cuvs` | `libcuvs`, `libcuvs-static` | +| Python | C/C++ | +|--------|-----------| +| `cuvs` | `libcuvs` | ### Stable release -It is recommended to use [mamba](https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html) to install the desired packages. The following command will install the Python package. You can substitute `cuvs` for any of the packages in the table above: +It is recommended to use [mamba](https://conda.github.io/conda-libmamba-solver/user-guide/) to install the desired packages. The following command will install the Python package. You can substitute `cuvs` for any of the packages in the table above: ```bash -mamba install -c conda-forge -c nvidia -c rapidsai cuvs +conda install -c conda-forge -c nvidia -c rapidsai cuvs ``` ### Nightlies If installing a version that has not yet been released, the `rapidsai` channel can be replaced with `rapidsai-nightly`: ```bash -mamba install -c conda-forge -c nvidia -c rapidsai-nightly cuvs=24.10 +conda install -c conda-forge -c nvidia -c rapidsai-nightly cuvs=24.10 ``` -Please see the [Build and Install Guide](https://docs.rapids.ai/api/cuvs/stable/build/) for more information on installing cuVS and building from source. +Please see the [Build and Install Guide](https://docs.rapids.ai/api/cuvs/nightly/build/) for more information on installing cuVS and building from source. ## Getting Started -The following code snippets train an approximate nearest neighbors index for the CAGRA algorithm. +The following code snippets train an approximate nearest neighbors index for the CAGRA algorithm in the various different languages supported by cuVS. ### Python API @@ -85,7 +81,7 @@ cagra::index_params index_params; auto index = cagra::build(res, index_params, dataset); ``` -For more examples of the C++ APIs, refer to the [examples](https://github.com/rapidsai/cuvs/tree/HEAD/examples) directory in the codebase. +For more code examples of the C++ APIs, including drop-in Cmake project templates, please refer to the [C++ examples](https://github.com/rapidsai/cuvs/tree/HEAD/examples) directory in the codebase. ### C API @@ -110,6 +106,8 @@ cuvsCagraIndexParamsDestroy(index_params); cuvsResourcesDestroy(res); ``` +For more code examples of the C APIs, including drop-in Cmake project templates, please refer to the [C examples](https://github.com/rapidsai/cuvs/tree/branch-24.10/examples/c) + ### Rust API ```rust @@ -171,6 +169,7 @@ fn cagra_example() -> Result<()> { } ``` +For more code examples of the Rust APIs, including a drop-in project templates, please refer to the [Rust examples](https://github.com/rapidsai/cuvs/tree/branch-24.10/examples/rust). ## Contributing @@ -178,60 +177,9 @@ If you are interested in contributing to the cuVS library, please read our [Cont ## References -When citing cuVS generally, please consider referencing this Github repository. 
-```bibtex -@misc{rapidsai, - title={Rapidsai/cuVS: Vector Search and Clustering on the GPU.}, - url={https://github.com/rapidsai/cuvs}, - journal={GitHub}, - publisher={Nvidia RAPIDS}, - author={Rapidsai}, - year={2024} -} -``` - -If citing CAGRA, please consider the following bibtex: -```bibtex -@misc{ootomo2023cagra, - title={CAGRA: Highly Parallel Graph Construction and Approximate Nearest Neighbor Search for GPUs}, - author={Hiroyuki Ootomo and Akira Naruse and Corey Nolet and Ray Wang and Tamas Feher and Yong Wang}, - year={2023}, - eprint={2308.15136}, - archivePrefix={arXiv}, - primaryClass={cs.DS} -} -``` - -If citing the k-selection routines, please consider the following bibtex: -```bibtex -@proceedings{10.1145/3581784, - title = {SC '23: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis}, - year = {2023}, - isbn = {9798400701092}, - publisher = {Association for Computing Machinery}, - address = {New York, NY, USA}, - abstract = {Started in 1988, the SC Conference has become the annual nexus for researchers and practitioners from academia, industry and government to share information and foster collaborations to advance the state of the art in High Performance Computing (HPC), Networking, Storage, and Analysis.}, - location = {, Denver, CO, USA, } -} -``` - -If citing the nearest neighbors descent API, please consider the following bibtex: -```bibtex -@inproceedings{10.1145/3459637.3482344, - author = {Wang, Hui and Zhao, Wan-Lei and Zeng, Xiangxiang and Yang, Jianye}, - title = {Fast K-NN Graph Construction by GPU Based NN-Descent}, - year = {2021}, - isbn = {9781450384469}, - publisher = {Association for Computing Machinery}, - address = {New York, NY, USA}, - url = {https://doi.org/10.1145/3459637.3482344}, - doi = {10.1145/3459637.3482344}, - abstract = {NN-Descent is a classic k-NN graph construction approach. It is still widely employed in machine learning, computer vision, and information retrieval tasks due to its efficiency and genericness. However, the current design only works well on CPU. In this paper, NN-Descent has been redesigned to adapt to the GPU architecture. A new graph update strategy called selective update is proposed. It reduces the data exchange between GPU cores and GPU global memory significantly, which is the processing bottleneck under GPU computation architecture. This redesign leads to full exploitation of the parallelism of the GPU hardware. In the meantime, the genericness, as well as the simplicity of NN-Descent, are well-preserved. Moreover, a procedure that allows to k-NN graph to be merged efficiently on GPU is proposed. It makes the construction of high-quality k-NN graphs for out-of-GPU-memory datasets tractable. Our approach is 100-250\texttimes{} faster than the single-thread NN-Descent and is 2.5-5\texttimes{} faster than the existing GPU-based approaches as we tested on million as well as billion scale datasets.}, - booktitle = {Proceedings of the 30th ACM International Conference on Information \& Knowledge Management}, - pages = {1929–1938}, - numpages = {10}, - keywords = {high-dimensional, nn-descent, gpu, k-nearest neighbor graph}, - location = {Virtual Event, Queensland, Australia}, - series = {CIKM '21} -} -``` +For the interested reader, many of the accelerated implementations in cuVS are also based on research papers which can provide a lot more background. We also ask you to please cite the corresponding algorithms by referencing them in your own research. 
+- [CAGRA: Highly Parallel Graph Construction and Approximate Nearest Neighbor Search](https://arxiv.org/abs/2308.15136) +- [Top-K Algorithms on GPU: A Comprehensive Study and New Methods](https://dl.acm.org/doi/10.1145/3581784.3607062>) +- [Fast K-NN Graph Construction by GPU Based NN-Descent](https://dl.acm.org/doi/abs/10.1145/3459637.3482344?casa_token=O_nan1B1F5cAAAAA:QHWDEhh0wmd6UUTLY9_Gv6c3XI-5DXM9mXVaUXOYeStlpxTPmV3nKvABRfoivZAaQ3n8FWyrkWw>) +- [cuSLINK: Single-linkage Agglomerative Clustering on the GPU](https://arxiv.org/abs/2306.16354) +- [GPU Semiring Primitives for Sparse Neighborhood Methods](https://arxiv.org/abs/2104.06357) From a84978a676526aaedca18fb6ec619d2ad16ccc69 Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Thu, 3 Oct 2024 10:44:05 -0400 Subject: [PATCH 20/22] Apply suggestions from code review Co-authored-by: Micka --- docs/source/choosing_and_configuring_indexes.rst | 2 +- docs/source/comparing_indexes.rst | 2 +- docs/source/cuvs_bench/index.rst | 2 +- docs/source/indexes/ivfflat.rst | 4 ++-- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/source/choosing_and_configuring_indexes.rst b/docs/source/choosing_and_configuring_indexes.rst index e3a0c8467..b4c140f29 100644 --- a/docs/source/choosing_and_configuring_indexes.rst +++ b/docs/source/choosing_and_configuring_indexes.rst @@ -2,7 +2,7 @@ Primer on vector search indexes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Vector search indexes often use approximations to trade-off accuracy of the results for speed, either through lowering latency (end-to-end single query speed) or by increating throughput (the number of query vectors that can be satisfied in a short period of time). Vector search indexes, especially ones that use approximations, are very closely related to machine learning models but they are optimized for fast search and accuracy of results. +Vector search indexes often use approximations to trade-off accuracy of the results for speed, either through lowering latency (end-to-end single query speed) or by increasing throughput (the number of query vectors that can be satisfied in a short period of time). Vector search indexes, especially ones that use approximations, are very closely related to machine learning models but they are optimized for fast search and accuracy of results. When the number of vectors is very small, such as less than 100 thousand vectors, it could be fast enough to use a brute-force (also known as a flat index), which returns exact results but at the expense of exhaustively searching all possible neighbors diff --git a/docs/source/comparing_indexes.rst b/docs/source/comparing_indexes.rst index 62b362b1a..221aab6d7 100644 --- a/docs/source/comparing_indexes.rst +++ b/docs/source/comparing_indexes.rst @@ -40,7 +40,7 @@ We suggest averaging performance within a range of recall. For general guidance, .. image:: images/recall_buckets.png -This allows us to say things like “okay at 95% recall level, model A can be built 3x faster than model B, but model B has 2x lower latency than model A” +This allows us to make observations such as “at 95% recall level, model A can be built 3x faster than model B, but model B has 2x lower latency than model A” .. 
image:: images/build_benchmarks.png diff --git a/docs/source/cuvs_bench/index.rst b/docs/source/cuvs_bench/index.rst index faddbb9dc..9b64d7153 100644 --- a/docs/source/cuvs_bench/index.rst +++ b/docs/source/cuvs_bench/index.rst @@ -129,7 +129,7 @@ The usage of the script `cuvs_bench.run` is: --build --search --algorithms ALGORITHMS - run only comma separated list of named algorithms. If parameters `groups` and `algo-groups are both undefined, then group `base` is run by default (default: None) + run only comma separated list of named algorithms. If parameters `groups` and `algo-groups` are both undefined, then group `base` is run by default (default: None) --groups GROUPS run only comma separated groups of parameters (default: base) --algo-groups ALGO_GROUPS add comma separated . to run. Example usage: "--algo-groups=raft_cagra.large,hnswlib.large" (default: None) diff --git a/docs/source/indexes/ivfflat.rst b/docs/source/indexes/ivfflat.rst index 76b2f815f..14dd1798c 100644 --- a/docs/source/indexes/ivfflat.rst +++ b/docs/source/indexes/ivfflat.rst @@ -77,8 +77,8 @@ assumption that the number of lists, and thus the max size of the data in the in might not matter. For example, most vector databases build many smaller physical approximate nearest neighbors indexes, each from fixed-size or maximum-sized immutable segments and so the number of lists can be tuned based on the number of vectors in the indexes. -Empirically, we've found :math:`\sqrt{n_index_vectors}` to be a good starting point for the :math:`n_lists` hyper-parameter. Remember, having more -lists means less points to search within each list, but it could also mean more :math:`n_probes` are needed at search time to reach an acceptable +Empirically, we've found :math:`\sqrt{n\_index\_vectors}` to be a good starting point for the :math:`n\_lists` hyper-parameter. Remember, having more +lists means less points to search within each list, but it could also mean more :math:`n\_probes` are needed at search time to reach an acceptable recall. From 35c8029f8e7fc1168fa7cb03e82dc1d4711ef46c Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Thu, 3 Oct 2024 10:44:33 -0400 Subject: [PATCH 21/22] Removing the cpp_tutorial --- README.md | 1 + docs/source/cpp_tutorial.rst | 448 ----------------------------------- 2 files changed, 1 insertion(+), 448 deletions(-) delete mode 100644 docs/source/cpp_tutorial.rst diff --git a/README.md b/README.md index a803e2373..fd8d5393e 100755 --- a/README.md +++ b/README.md @@ -14,6 +14,7 @@ ## Useful Resources +- [Documentation](https://docs.rapids.ai/api/cuvs/): Library documentation. - [Build and Install Guide](https://docs.rapids.ai/api/cuvs/nightly/build): Instructions for installing and building cuVS. - [Getting Started Guide](https://docs.rapids.ai/api/cuvs/nightly/getting_started): Guide to getting started with cuVS. - [Code Examples](https://github.com/rapidsai/cuvs/tree/HEAD/examples): Self-contained Code Examples. 
diff --git a/docs/source/cpp_tutorial.rst b/docs/source/cpp_tutorial.rst deleted file mode 100644 index 83d2e3335..000000000 --- a/docs/source/cpp_tutorial.rst +++ /dev/null @@ -1,448 +0,0 @@ -======================== -C++ Walkthrough Tutorial -======================== - -Table of Contents -================= - -- `Step 1: Starting off with cuVS`_ - -- `Step 2: Generate some data`_ - -- `Step 3: Using brute-force indexes`_ - -- `Step 4: Using the ANN indexes`_ - -- `Step 5: Evaluate neighborhood quality`_ - -- `Advanced Features`_ - - * `Serialization`_ - - * `Filtering`_ - - * `Stream Pools`_ - - * `Device Resources Manager`_ - - * `Device Memory Resources`_ - - * `Workspace Memory Resource`_ - -cuVS has several important algorithms for performing vector search on the GPU and this tutorial walks through the primary vector search APIs from start to finish to provide a reference for quick setup and C++ API usage. - -This tutorial assumes cuVS has been installed and/or added to your build so that you are able to compile and run RAFT code. If not done already, please follow the [build and install instructions](build.md) and consider taking a look at the [example c++ template project](https://github.com/rapidsai/raft/tree/HEAD/cpp/template) for ready-to-go examples that you can immediately build and start playing with. Also take a look at RAFT's library of [reproducible vector search benchmarks](raft_ann_benchmarks.md) to run benchmarks that compare cuVS against other state-of-the-art nearest neighbors algorithms at scale. - -For more information about the various APIs demonstrated in this tutorial, along with comprehensive usage examples of all the APIs offered by RAFT, please refer to the `cuVS C++ API Documentation <>https://docs.rapids.ai/api/cuvs/nightly/cpp_api/>`_. - -Step 1: Starting off with cuVS -============================== - -CUDA Development? ------------------ - -If you are reading this tuturial then you probably know about CUDA and its relationship to general-purpose GPU computing (GPGPU). You probably also know about Nvidia GPUs but might not necessarily be familiar with the programming model nor GPU computing. The good news is that extensive knowledge of CUDA and GPUs are not needed in order to get started with or build applications with RAFT. RAFT hides away most of the complexities behind simple single-threaded stateless functions that are inherently asynchronous, meaning the result of a computation isn't necessarily read to be used when the function executes and control is given back to the user. The functions are, however, allowed to be chained together in a sequence of calls that don't need to wait for subsequent computations to complete in order to continue execution. In fact, the only time you need to wait for the computation to complete is when you are ready to use the result. - -A common structure you will encounter when using RAFT is a `raft::device_resources` object. This object is a container for important resources for a single GPU that might be needed during computation. If communicating with multiple GPUs, multiple `device_resources` might be needed, one for each GPU. `device_resources` contains several methods for managing its state but most commonly, you'll call the `sync_stream()` to guarantee all recently submitted computation has completed (as mentioned above.) - -A simple example of using `raft::device_resources` in RAFT: - -.. code-block:: c++ - - #include - - raft::device_resources res; - // Call a bunch of RAFT functions in sequence... 
- res.sync_stream() - -Host vs Device Memory ---------------------- - -We differentiate between two different types of memory. `host` memory is your traditional RAM memory that is primarily accessible by applications on the CPU. `device` memory, on the other hand, is what we call the special memory on the GPU, which is not accessible from the CPU. In order to access host memory from the GPU, it needs to be explicitly copied to the GPU and in order to access device memory by the CPU, it needs to be explicitly copied there. We have several mechanisms available for allocating and managing the lifetime of device memory on the stack so that we don't need to explicitly allocate and free pointers on the heap. For example, instead of a `std::vector` for host memory, we can use `rmm::device_uvector` on the device. The following function will copy an array from host memory to device memory: - -.. code-block:: c++ - - #include - #include - #include - - raft::device_resources res; - - std::vector my_host_vector = {0, 1, 2, 3, 4}; - rmm::device_uvector my_device_vector(my_host_vector.size(), res.get_stream()); - - raft::copy(my_device_vector.data(), my_host_vector.data(), my_host_vector.size(), res.get_stream()); - -Since a stream is involved in the copy operation above, RAFT functions can be invoked immediately so long as the same `device_resources` instances is used (or, more specifically, the same main stream from the `devices_resources`.) As you might notice in the example above, `res.get_stream()` can be used to extract the main stream from a `device_resources` instance. - -Multi-dimensional data representation -------------------------------------- - -`rmm::device_uvector` is a great mechanism for allocating and managing a chunk of device memory. While it's possible to use a single array to represent objects in higher dimensions like matrices, it lacks the means to pass that information along. For example, in addition to knowing that we have a 2d structure, we would need to know the number of rows, the number of columns, and even whether we read the columns or rows first (referred to as column- or row-major respectively). - -For this reason, RAFT relies on the `mdspan` standard, which was composed specifically for this purpose. To be even more, `mdspan` itself doesn't actually allocate or own any data on host or device because it's just a view over an existing memory on host device. The `mdspan` simply gives us a way to represent multi-dimensional data so we can pass along the needed metadata to our APIs. Even more powerful is that we can design functions that only accept a matrix of `float` in device memory that is laid out in row-major format. - -The memory-owning counterpart to the `mdspan` is the `mdarray` and the `mdarray` can allocate memory on device or host and carry along with it the metadata about its shape and layout. An `mdspan` can be produced from an `mdarray` for invoking RAFT APIs with `mdarray.view()`. They also follow similar paradigms to the STL, where we represent an immutable `mdspan` of `int` using `mdspan` instead of `const mdspan` to ensure it's the type carried along by the `mdspan` that's not allowed to change. - -Many RAFT functions require `mdspan` to represent immutable input data and there's no implicit conversion between `mdspan` and `mdspan` we use `raft::make_const_mdspan()` to alleviate the pain of constructing a new `mdspan` to invoke these functions. 
- -The following example demonstrates how to create `mdarray` matrices in both device and host memory, copy one to the other, and create mdspans out of them: - -.. code-block:: c++ - - #include - #include - #include - - raft::device_resources res; - - int n_rows = 10; - int n_cols = 10; - - auto device_matrix = raft::make_device_matrix(res, n_rows, n_cols); - auto host_matrix = raft::make_host_matrix(res, n_rows, n_cols); - - // Set the diagonal to 1 - for(int i = 0; i < n_rows; i++) { - host_matrix(i, i) = 1; - } - - raft::copy(res, device_matrix.view(), host_matrix.view()); - -Step 2: Generate some data -========================== - -Let's build upon the fundamentals from the prior section and actually invoke some of RAFT's computational APIs on the device. A good starting point is data generation. - -.. code-block: c++ - - #include - #include - - raft::device_resources res; - - int n_rows = 10000; - int n_cols = 10000; - - auto dataset = raft::make_device_matrix(res, n_rows, n_cols); - auto labels = raft::make_device_vector(res, n_rows); - - raft::random::make_blobs(res, dataset.view(), labels.view()); - -That's it. We've now generated a random 10kx10k matrix with points that cleanly separate into Gaussian clusters, along with a vector of cluster labels for each of the data points. Notice the `cuh` extension in the header file include for `make_blobs`. This signifies to us that this file contains CUDA device functions like kernel code so the CUDA compiler, `nvcc` is needed in order to compile any code that uses it. Generally, any source files that include headers with a `cuh` extension use the `.cu` extension instead of `.cpp`. The rule here is that `cpp` source files contain code which can be compiled with a C++ compiler like `g++` while `cu` files require the CUDA compiler. - -Since the `make_blobs` code generates the random dataset on the GPU device, we didn't need to do any host to device copies in this one. `make_blobs` is also asynchronous, so if we don't need to copy and use the data in host memory right away, we can continue calling RAFT functions with the `device_resources` instance and the data transformations will all be scheduled on the same stream. - -Step 3: Using brute-force indexes -================================= - -Build brute-force index ------------------------ - -Consider the `(10k, 10k)` shaped random matrix we generated in the previous step. We want to be able to find the k-nearest neighbors for all points of the matrix, or what we refer to as the all-neighbors graph, which means finding the neighbors of all data points within the same matrix. -.. code-block:: c++ - - #include - - raft::device_resources res; - - // set number of neighbors to search for - int const k = 64; - - auto bfknn_index = raft::neighbors::brute_force::build(res, - raft::make_const_mdspan(dataset.view())); - -Query brute-force index ------------------------ - -.. 
code-block:: c++ - - // using matrix `dataset` from previous example - auto search = raft::make_const_mdspan(dataset.view()); - - // Indices and Distances are of dimensions (n, k) - // where n is number of rows in the search matrix - auto reference_indices = raft::make_device_matrix(res, search.extent(0), k); // stores index of neighbors - auto reference_distances = raft::make_device_matrix(res, search.extent(0), k); // stores distance to neighbors - - raft::neighbors::brute_force::search(res, - bfknn_index, - search, - reference_indices.view(), - reference_distances.view()); - -We have established several things here by building a flat index. Now we know the exact 64 neighbors of all points in the matrix, and this algorithm can be generally useful in several ways: -1. Creating a baseline to compare against when building an approximate nearest neighbors index. -2. Directly using the brute-force algorithm when accuracy is more important than speed of computation. Don't worry, our implementation is still the best in-class and will provide not only significant speedups over other brute force methods, but also be quick relatively when the matrices are small! - - -Step 4: Using the ANN indexes -============================= - -Build a CAGRA index -------------------- - -Next we'll train an ANN index. We'll use our graph-based CAGRA algorithm for this example but the other index types use a very similar pattern. - -.. code-block:: c++ - - #include - - raft::device_resources res; - - // use default index parameters - raft::neighbors::cagra::index_params index_params; - - auto index = raft::neighbors::cagra::build(res, index_params, raft::make_const_mdspan(dataset.view())); - -Query the CAGRA index ---------------------- - -Now that we've trained a CAGRA index, we can query it by first allocating our output `mdarray` objects and passing the trained index model into the search function. - -.. code-block:: c++ - - // create output arrays - auto indices = raft::make_device_matrix(res, n_rows, k); - auto distances = raft::make_device_matrix(res, n_rows, k); - - // use default search parameters - raft::neighbors::cagra::search_params search_params; - - // search K nearest neighbors - raft::neighbors::cagra::search( - res, search_params, index, search, indices.view(), distances.view()); - -Step 5: Evaluate neighborhood quality -===================================== - -In step 3 we built a flat index and queried for exact neighbors while in step 4 we build an ANN index and queried for approximate neighbors. How do you quickly figure out the quality of our approximate neighbors and whether it's in an acceptable range based on your needs? Just compute the `neighborhood_recall` which gives a single value in the range [0, 1]. Closer the value to 1, higher the quality of the approximation. - -.. 
code-block:: c++ - - #include - - raft::device_resources res; - - // Assuming matrices as type raft::device_matrix_view and variables as - // indices : approximate neighbor indices - // reference_indices : exact neighbor indices - // distances : approximate neighbor distances - // reference_distances : exact neighbor distances - - // We want our `neighborhood_recall` value in host memory - float const recall_scalar = 0.0; - auto recall_value = raft::make_host_scalar(recall_scalar); - - raft::stats::neighborhood_recall(res, - raft::make_const_mdspan(indices.view()), - raft::make_const_mdspan(reference_indices.view()), - recall_value.view(), - raft::make_const_mdspan(distances.view()), - raft::make_const_mdspan(reference_distances.view())); - - res.sync_stream(); - -Notice we can run invoke the functions for index build and search for both algorithms, one right after the other, because we don't need to access any outputs from the algorithms in host memory. We will need to synchronize the stream on the `raft::device_resources` instance before we can read the result of the `neighborhood_recall` computation, though. - -Similar to a Numpy array, when we use a `host_scalar`, we are really using a multi-dimensional structure that contains only a single dimension, and further a single element. We can use element indexing to access the resulting element directly. -.. code-block:: c++ - std::cout << recall_value(0) << std::endl; - -While it may seem like unnecessary additional work to wrap the result in a `host_scalar` mdspan, this API choice is made intentionally to support the possibility of also receiving the result as a `device_scalar` so that it can be used directly on the device for follow-on computations without having to incur the synchronization or transfer cost of bringing the result to host. This pattern becomes even more important when the result is being computed in a loop, such as an iterative solver, and the cost of synchronization and device-to-host (d2h) transfer becomes very expensive. - -Advanced features -================= - -The following sections present some advanced features that we have found can be useful for squeezing more utilization out of GPU hardware. As you've seen in this tutorial, RAFT provides several very useful tools and building blocks for developing accelerated applications beyond vector search capabilities. - -Serialization -------------- - -Most of the indexes in `raft::neighbors` can be serialized to/from streams and files on disk. The index types that support this feature have include files with the naming convention `_serialize.cuh`. The serialization functions are similar across the different index types, with the primary difference being that some index types require a pointer to all the training data for search. Since the original training dataset can be quite large, the `serialize()` function for these index types includes an argument `include_dataset`, which allows the user to specify whether the dataset should be included in the serialized form. The index types that allow for this also include a method `update_datasets()` to allow for the dataset to be re-attached to the index after it is deserialized. - -The following example demonstrates serializing and deserializing a CAGRA index to and from a file. For index types that don't require the training data, you can remove the `include_dataset` and `update_dataset()` parts. We will assume the CAGRA index has been built using the code from [Step 4](#build-a-cagra-index) above: - -.. 
code-block:: c++ - - #include - #include - - using namespace raft::neighbors; - - raft::neighbors::cagra::serialize(res, "cagra_serialized.dat", index, false); - - auto index_deser = raft::neighbors::cagra::deserialize(res, "cagra_serialized.dat"); - index_deser.update_dataset(dataset); - -Filtering ---------- - -As of RAFT 23.10, support for pre-filtering of neighbors has been added to ANN index. This search feature can enable multiple use-cases, such as filtering a vector based on it's attributes (hybrid searches), the removal of vectors already added to the index, or the control of access in searches for security purposes. -The filtering is available through the `search_with_filtering()` function of the ANN index, and is done by applying a predicate function on the GPU, which usually have the signature `(uint32_t query_ix, uint32_t sample_ix) -> bool`. - -One of the most commonly used mechanism for filtering is the bitset: the bitset is a data structure that allows to test the presence of a value in a set through a fast lookup, and is implemented as a bit array so that every element contains a `0` or a `1` (respectively `false` and `true` in boolean logic). RAFT provides a `raft::core::bitset` class that can be used to create and manipulate bitsets on the GPU, and a `raft::core::bitset_view` class that can be used to pass bitsets to filtering functions. - -The following example demonstrates how to use the filtering API (assume the CAGRA index is built using the code from [Step 4](#build-a-cagra-index) above: - -.. code-block:: c++ - - #include - #include - - using namespace raft::neighbors; - - cagra::search_params search_params; - - // create a bitset to filter the search - auto removed_indices = raft::make_device_vector(res, n_removed_indices); - raft::core::bitset removed_indices_bitset( - res, removed_indices.view(), dataset.extent(0)); - - // ... Populate the bitset ... - - // search K nearest neighbours according to a bitset filter - auto neighbors = raft::make_device_matrix(res, n_queries, k); - auto distances = raft::make_device_matrix(res, n_queries, k); - cagra::search_with_filtering(res, search_params, index, queries, neighbors, distances, - filtering::bitset_filter(removed_indices_bitset.view())); - - -Stream pools ------------- - -Within each CPU thread, CUDA uses `streams` to submit asynchronous work. You can think of a stream as a queue. Each stream can submit work to the GPU independently of other streams but work submitted within each stream is queued and executed in the order in which it was submitted. Similar to how we can use thread pools to bound the parallelism of CPU threads, we can use CUDA stream pools to bound the amount of concurrent asynchronous work that can be scheduled on a GPU. Each instance of `device_resources` has a main stream, but can also create a stream pool. For a single CPU thread, multiple different instances of `device_resources` can be created with different main streams and used to invoke a series of RAFT functions concurrently on the same or different GPU devices, so long as the target devices have available resources to perform the work. Once a device is saturated, queued work on streams will be scheduled and wait for a chance to do more work. During this time the streams are waiting, the CPU thread will still continue its own execution asynchronously unless `sync_stream_pool()` is called, causing the thread to block and wait for the thread pools to complete. 
- -Also, beware that before splitting GPU work onto multiple different concurrent streams, it can often be important to wait for the main stream in the `device_resources`. This can be done with `wait_stream_pool_on_stream()`. - -To summarize, if wanting to execute multiple different streams in parallel, we would often use a stream pool like this: - -.. code-block:: c++ - - #include - - #include - #include - - int n_streams = 5; - - rmm::cuda_stream stream; - std::shared_ptr stream_pool(5) - raft::device_resources res(stream.view(), stream_pool); - - // Submit some work on the main stream... - - res.wait_stream_pool_on_stream() - for(int i = 0; i < n_streams; ++i) { - rmm::cuda_stream_view stream_from_pool = res.get_next_usable_stream(); - raft::device_resources pool_res(stream_from_pool); - // Submit some work with pool_res... - } - - res.sync_stream_pool(); - -Device resources manager ------------------------- - -In multi-threaded applications, it is often useful to create a set of -`raft::device_resources` objects on startup to avoid the overhead of -re-initializing underlying resources every time a `raft::device_resources` object -is needed. To help simplify this common initialization logic, RAFT -provides a `raft::device_resources_manager` to handle this for downstream -applications. On startup, the application can specify certain limits on the -total resource consumption of the `raft::device_resources` objects that will be -generated: - -.. code-block:: c++ - - #include - - void initialize_application() { - // Set the total number of CUDA streams to use on each GPU across all CPU - // threads. If this method is not called, the default stream per thread - // will be used. - raft::device_resources_manager::set_streams_per_device(16); - - // Create a memory pool with given max size in bytes. Passing std::nullopt will allow - // the pool to grow to the available memory of the device. - raft::device_resources_manager::set_max_mem_pool_size(std::nullopt); - - // Set the initial size of the memory pool in bytes. - raft::device_resources_manager::set_init_mem_pool_size(16000000); - - // If neither of the above methods are called, no memory pool will be used - } - -While this example shows some commonly used settings, -`raft::device_resources_manager` provides support for several other -resource options and constraints, including options to initialize entire -stream pools that can be used by an individual `raft::device_resources` object. After -this initialization method is called, the following function could be called -from any CPU thread: - -.. code-block:: c++ - - void foo() { - raft::device_resources const& res = raft::device_resources_manager::get_device_resources(); - // Submit some work with res - res.sync_stream(); - } - -If any `raft::device_resources_manager` setters are called _after_ the first -call to `raft::device_resources_manager::get_device_resources()`, these new -settings are ignored, and a warning will be logged. If a thread calls -`raft::device_resources_manager::get_device_resources()` multiple times, it is -guaranteed to access the same underlying `raft::device_resources` object every -time. This can be useful for chaining work in different calls on the same -thread without keeping a persistent reference to the resources object. 
- -Device memory resources ------------------------ - -The RAPIDS software ecosystem makes heavy use of the [RAPIDS Memory Manager](https://github.com/rapidsai/rmm) (RMM) to enable zero-copy sharing of device memory across various GPU-enabled libraries such as PyTorch, Jax, Tensorflow, and FAISS. A really powerful feature of RMM is the ability to set a memory resource, such as a pooled memory resource that allocates a block of memory up front to speed up subsequent smaller allocations, and have all the libraries in the GPU ecosystem recognize and use that same memory resource for all of their memory allocations. - -As an example, the following code snippet creates a `pool_memory_resource` and sets it as the default memory resource, which means all other libraries that use RMM will now allocate their device memory from this same pool: - -.. code-block:: c++ - - #include - - rmm::mr::cuda_memory_resource cuda_mr; - // Construct a resource that uses a coalescing best-fit pool allocator - // set the initial size to half of the free device memory - auto init_size = rmm::percent_of_free_device_memory(50); - rmm::mr::pool_memory_resource pool_mr{&cuda_mr, init_size}; - rmm::mr::set_current_device_resource(&pool_mr); // Updates the current device resource pointer to `pool_mr` - -The `raft::device_resources` object will now also use the `rmm::current_device_resource`. This isn't limited to C++, however. Often a user will be interacting with PyTorch, RAPIDS, or Tensorflow through Python and so they can set and use RMM's `current_device_resource` [right in Python](https://github.com/rapidsai/rmm#using-rmm-in-python-code). - -Workspace memory resource -------------------------- - -As mentioned above, `raft::device_resources` will use `rmm::current_device_resource` by default for all memory allocations. However, there are times when a particular algorithm might benefit from using a different memory resource such as a `managed_memory_resource`, which creates a unified memory space between device and host memory, paging memory in and out of device as needed. Most of RAFT's algorithms allocate temporary memory as needed to perform their computations and we can control the memory resource used for these temporary allocations through the `workspace_resource` in the `raft::device_resources` instance. - -For some applications, the `managed_memory_resource`, can enable a memory space that is larger than the GPU, thus allowing a natural spilling to host memory when needed. This isn't always the best way to use managed memory, though, as it can quickly lead to thrashing and severely impact performance. Still, when it can be used, it provides a very powerful tool that can also avoid out of memory errors when enough host memory is available. - -The following creates a managed memory allocator and set it as the `workspace_resource` of the `raft::device_resources` instance: - -.. code-block:: c++ - - #include - #include - - std::shared_ptr managed_resource; - raft::device_resource res(managed_resource);``` - -The `workspace_resource` uses an `rmm::mr::limiting_resource_adaptor`, which limits the total amount of allocation possible. This allows RAFT algorithms to work within the confines of the memory constraints imposed by the user so that things like batch sizes can be automatically set to reasonable values without exceeding the allotted memory. By default, this limit restricts the memory allocation space for temporary workspace buffers to the memory available on the device. 
- -The below example specifies the total number of bytes that RAFT can use for temporary workspace allocations to 3GB: - -.. code-block:: c++ - - #include - #include - - #include - - std::shared_ptr managed_resource; - raft::device_resource res(managed_resource, std::make_optional(3 * 1024^3)); From ae24662f6afbe33ae0b1824f6a15ef7e9fd48aa5 Mon Sep 17 00:00:00 2001 From: "Corey J. Nolet" Date: Thu, 3 Oct 2024 10:44:59 -0400 Subject: [PATCH 22/22] Adding cpp tutorial back --- docs/source/cpp_tutorial.rst | 448 +++++++++++++++++++++++++++++++++++ 1 file changed, 448 insertions(+) create mode 100644 docs/source/cpp_tutorial.rst diff --git a/docs/source/cpp_tutorial.rst b/docs/source/cpp_tutorial.rst new file mode 100644 index 000000000..83d2e3335 --- /dev/null +++ b/docs/source/cpp_tutorial.rst @@ -0,0 +1,448 @@ +======================== +C++ Walkthrough Tutorial +======================== + +Table of Contents +================= + +- `Step 1: Starting off with cuVS`_ + +- `Step 2: Generate some data`_ + +- `Step 3: Using brute-force indexes`_ + +- `Step 4: Using the ANN indexes`_ + +- `Step 5: Evaluate neighborhood quality`_ + +- `Advanced Features`_ + + * `Serialization`_ + + * `Filtering`_ + + * `Stream Pools`_ + + * `Device Resources Manager`_ + + * `Device Memory Resources`_ + + * `Workspace Memory Resource`_ + +cuVS has several important algorithms for performing vector search on the GPU and this tutorial walks through the primary vector search APIs from start to finish to provide a reference for quick setup and C++ API usage. + +This tutorial assumes cuVS has been installed and/or added to your build so that you are able to compile and run RAFT code. If not done already, please follow the [build and install instructions](build.md) and consider taking a look at the [example c++ template project](https://github.com/rapidsai/raft/tree/HEAD/cpp/template) for ready-to-go examples that you can immediately build and start playing with. Also take a look at RAFT's library of [reproducible vector search benchmarks](raft_ann_benchmarks.md) to run benchmarks that compare cuVS against other state-of-the-art nearest neighbors algorithms at scale. + +For more information about the various APIs demonstrated in this tutorial, along with comprehensive usage examples of all the APIs offered by RAFT, please refer to the `cuVS C++ API Documentation <>https://docs.rapids.ai/api/cuvs/nightly/cpp_api/>`_. + +Step 1: Starting off with cuVS +============================== + +CUDA Development? +----------------- + +If you are reading this tuturial then you probably know about CUDA and its relationship to general-purpose GPU computing (GPGPU). You probably also know about Nvidia GPUs but might not necessarily be familiar with the programming model nor GPU computing. The good news is that extensive knowledge of CUDA and GPUs are not needed in order to get started with or build applications with RAFT. RAFT hides away most of the complexities behind simple single-threaded stateless functions that are inherently asynchronous, meaning the result of a computation isn't necessarily read to be used when the function executes and control is given back to the user. The functions are, however, allowed to be chained together in a sequence of calls that don't need to wait for subsequent computations to complete in order to continue execution. In fact, the only time you need to wait for the computation to complete is when you are ready to use the result. 
+ +A common structure you will encounter when using RAFT is a `raft::device_resources` object. This object is a container for important resources for a single GPU that might be needed during computation. If communicating with multiple GPUs, multiple `device_resources` might be needed, one for each GPU. `device_resources` contains several methods for managing its state but most commonly, you'll call the `sync_stream()` to guarantee all recently submitted computation has completed (as mentioned above.) + +A simple example of using `raft::device_resources` in RAFT: + +.. code-block:: c++ + + #include + + raft::device_resources res; + // Call a bunch of RAFT functions in sequence... + res.sync_stream() + +Host vs Device Memory +--------------------- + +We differentiate between two different types of memory. `host` memory is your traditional RAM memory that is primarily accessible by applications on the CPU. `device` memory, on the other hand, is what we call the special memory on the GPU, which is not accessible from the CPU. In order to access host memory from the GPU, it needs to be explicitly copied to the GPU and in order to access device memory by the CPU, it needs to be explicitly copied there. We have several mechanisms available for allocating and managing the lifetime of device memory on the stack so that we don't need to explicitly allocate and free pointers on the heap. For example, instead of a `std::vector` for host memory, we can use `rmm::device_uvector` on the device. The following function will copy an array from host memory to device memory: + +.. code-block:: c++ + + #include + #include + #include + + raft::device_resources res; + + std::vector my_host_vector = {0, 1, 2, 3, 4}; + rmm::device_uvector my_device_vector(my_host_vector.size(), res.get_stream()); + + raft::copy(my_device_vector.data(), my_host_vector.data(), my_host_vector.size(), res.get_stream()); + +Since a stream is involved in the copy operation above, RAFT functions can be invoked immediately so long as the same `device_resources` instances is used (or, more specifically, the same main stream from the `devices_resources`.) As you might notice in the example above, `res.get_stream()` can be used to extract the main stream from a `device_resources` instance. + +Multi-dimensional data representation +------------------------------------- + +`rmm::device_uvector` is a great mechanism for allocating and managing a chunk of device memory. While it's possible to use a single array to represent objects in higher dimensions like matrices, it lacks the means to pass that information along. For example, in addition to knowing that we have a 2d structure, we would need to know the number of rows, the number of columns, and even whether we read the columns or rows first (referred to as column- or row-major respectively). + +For this reason, RAFT relies on the `mdspan` standard, which was composed specifically for this purpose. To be even more, `mdspan` itself doesn't actually allocate or own any data on host or device because it's just a view over an existing memory on host device. The `mdspan` simply gives us a way to represent multi-dimensional data so we can pass along the needed metadata to our APIs. Even more powerful is that we can design functions that only accept a matrix of `float` in device memory that is laid out in row-major format. 
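+
+To make the non-owning nature of `mdspan` concrete, here is a minimal sketch that wraps the `rmm::device_uvector` from the copy example above in views. It assumes that vector holds `int` values and uses the `raft::make_device_vector_view` and `raft::make_device_matrix_view` factories; nothing is allocated or copied, the views only carry the pointer along with shape and layout metadata:
+
+.. code-block:: c++
+
+    #include <raft/core/device_mdspan.hpp>
+
+    // A 1-D view over the existing 5-element device allocation.
+    auto vec_view = raft::make_device_vector_view<int, int>(my_device_vector.data(), 5);
+
+    // The same memory seen as a 1x5 row-major matrix; still no copy involved.
+    auto mat_view = raft::make_device_matrix_view<int, int>(my_device_vector.data(), 1, 5);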
+ +The memory-owning counterpart to the `mdspan` is the `mdarray` and the `mdarray` can allocate memory on device or host and carry along with it the metadata about its shape and layout. An `mdspan` can be produced from an `mdarray` for invoking RAFT APIs with `mdarray.view()`. They also follow similar paradigms to the STL, where we represent an immutable `mdspan` of `int` using `mdspan` instead of `const mdspan` to ensure it's the type carried along by the `mdspan` that's not allowed to change. + +Many RAFT functions require `mdspan` to represent immutable input data and there's no implicit conversion between `mdspan` and `mdspan` we use `raft::make_const_mdspan()` to alleviate the pain of constructing a new `mdspan` to invoke these functions. + +The following example demonstrates how to create `mdarray` matrices in both device and host memory, copy one to the other, and create mdspans out of them: + +.. code-block:: c++ + + #include + #include + #include + + raft::device_resources res; + + int n_rows = 10; + int n_cols = 10; + + auto device_matrix = raft::make_device_matrix(res, n_rows, n_cols); + auto host_matrix = raft::make_host_matrix(res, n_rows, n_cols); + + // Set the diagonal to 1 + for(int i = 0; i < n_rows; i++) { + host_matrix(i, i) = 1; + } + + raft::copy(res, device_matrix.view(), host_matrix.view()); + +Step 2: Generate some data +========================== + +Let's build upon the fundamentals from the prior section and actually invoke some of RAFT's computational APIs on the device. A good starting point is data generation. + +.. code-block: c++ + + #include + #include + + raft::device_resources res; + + int n_rows = 10000; + int n_cols = 10000; + + auto dataset = raft::make_device_matrix(res, n_rows, n_cols); + auto labels = raft::make_device_vector(res, n_rows); + + raft::random::make_blobs(res, dataset.view(), labels.view()); + +That's it. We've now generated a random 10kx10k matrix with points that cleanly separate into Gaussian clusters, along with a vector of cluster labels for each of the data points. Notice the `cuh` extension in the header file include for `make_blobs`. This signifies to us that this file contains CUDA device functions like kernel code so the CUDA compiler, `nvcc` is needed in order to compile any code that uses it. Generally, any source files that include headers with a `cuh` extension use the `.cu` extension instead of `.cpp`. The rule here is that `cpp` source files contain code which can be compiled with a C++ compiler like `g++` while `cu` files require the CUDA compiler. + +Since the `make_blobs` code generates the random dataset on the GPU device, we didn't need to do any host to device copies in this one. `make_blobs` is also asynchronous, so if we don't need to copy and use the data in host memory right away, we can continue calling RAFT functions with the `device_resources` instance and the data transformations will all be scheduled on the same stream. + +Step 3: Using brute-force indexes +================================= + +Build brute-force index +----------------------- + +Consider the `(10k, 10k)` shaped random matrix we generated in the previous step. We want to be able to find the k-nearest neighbors for all points of the matrix, or what we refer to as the all-neighbors graph, which means finding the neighbors of all data points within the same matrix. +.. 
code-block:: c++ + + #include + + raft::device_resources res; + + // set number of neighbors to search for + int const k = 64; + + auto bfknn_index = raft::neighbors::brute_force::build(res, + raft::make_const_mdspan(dataset.view())); + +Query brute-force index +----------------------- + +.. code-block:: c++ + + // using matrix `dataset` from previous example + auto search = raft::make_const_mdspan(dataset.view()); + + // Indices and Distances are of dimensions (n, k) + // where n is number of rows in the search matrix + auto reference_indices = raft::make_device_matrix(res, search.extent(0), k); // stores index of neighbors + auto reference_distances = raft::make_device_matrix(res, search.extent(0), k); // stores distance to neighbors + + raft::neighbors::brute_force::search(res, + bfknn_index, + search, + reference_indices.view(), + reference_distances.view()); + +We have established several things here by building a flat index. Now we know the exact 64 neighbors of all points in the matrix, and this algorithm can be generally useful in several ways: +1. Creating a baseline to compare against when building an approximate nearest neighbors index. +2. Directly using the brute-force algorithm when accuracy is more important than speed of computation. Don't worry, our implementation is still the best in-class and will provide not only significant speedups over other brute force methods, but also be quick relatively when the matrices are small! + + +Step 4: Using the ANN indexes +============================= + +Build a CAGRA index +------------------- + +Next we'll train an ANN index. We'll use our graph-based CAGRA algorithm for this example but the other index types use a very similar pattern. + +.. code-block:: c++ + + #include + + raft::device_resources res; + + // use default index parameters + raft::neighbors::cagra::index_params index_params; + + auto index = raft::neighbors::cagra::build(res, index_params, raft::make_const_mdspan(dataset.view())); + +Query the CAGRA index +--------------------- + +Now that we've trained a CAGRA index, we can query it by first allocating our output `mdarray` objects and passing the trained index model into the search function. + +.. code-block:: c++ + + // create output arrays + auto indices = raft::make_device_matrix(res, n_rows, k); + auto distances = raft::make_device_matrix(res, n_rows, k); + + // use default search parameters + raft::neighbors::cagra::search_params search_params; + + // search K nearest neighbors + raft::neighbors::cagra::search( + res, search_params, index, search, indices.view(), distances.view()); + +Step 5: Evaluate neighborhood quality +===================================== + +In step 3 we built a flat index and queried for exact neighbors while in step 4 we build an ANN index and queried for approximate neighbors. How do you quickly figure out the quality of our approximate neighbors and whether it's in an acceptable range based on your needs? Just compute the `neighborhood_recall` which gives a single value in the range [0, 1]. Closer the value to 1, higher the quality of the approximation. + +.. 
code-block:: c++ + + #include + + raft::device_resources res; + + // Assuming matrices as type raft::device_matrix_view and variables as + // indices : approximate neighbor indices + // reference_indices : exact neighbor indices + // distances : approximate neighbor distances + // reference_distances : exact neighbor distances + + // We want our `neighborhood_recall` value in host memory + float const recall_scalar = 0.0; + auto recall_value = raft::make_host_scalar(recall_scalar); + + raft::stats::neighborhood_recall(res, + raft::make_const_mdspan(indices.view()), + raft::make_const_mdspan(reference_indices.view()), + recall_value.view(), + raft::make_const_mdspan(distances.view()), + raft::make_const_mdspan(reference_distances.view())); + + res.sync_stream(); + +Notice we can run invoke the functions for index build and search for both algorithms, one right after the other, because we don't need to access any outputs from the algorithms in host memory. We will need to synchronize the stream on the `raft::device_resources` instance before we can read the result of the `neighborhood_recall` computation, though. + +Similar to a Numpy array, when we use a `host_scalar`, we are really using a multi-dimensional structure that contains only a single dimension, and further a single element. We can use element indexing to access the resulting element directly. +.. code-block:: c++ + std::cout << recall_value(0) << std::endl; + +While it may seem like unnecessary additional work to wrap the result in a `host_scalar` mdspan, this API choice is made intentionally to support the possibility of also receiving the result as a `device_scalar` so that it can be used directly on the device for follow-on computations without having to incur the synchronization or transfer cost of bringing the result to host. This pattern becomes even more important when the result is being computed in a loop, such as an iterative solver, and the cost of synchronization and device-to-host (d2h) transfer becomes very expensive. + +Advanced features +================= + +The following sections present some advanced features that we have found can be useful for squeezing more utilization out of GPU hardware. As you've seen in this tutorial, RAFT provides several very useful tools and building blocks for developing accelerated applications beyond vector search capabilities. + +Serialization +------------- + +Most of the indexes in `raft::neighbors` can be serialized to/from streams and files on disk. The index types that support this feature have include files with the naming convention `_serialize.cuh`. The serialization functions are similar across the different index types, with the primary difference being that some index types require a pointer to all the training data for search. Since the original training dataset can be quite large, the `serialize()` function for these index types includes an argument `include_dataset`, which allows the user to specify whether the dataset should be included in the serialized form. The index types that allow for this also include a method `update_datasets()` to allow for the dataset to be re-attached to the index after it is deserialized. + +The following example demonstrates serializing and deserializing a CAGRA index to and from a file. For index types that don't require the training data, you can remove the `include_dataset` and `update_dataset()` parts. We will assume the CAGRA index has been built using the code from [Step 4](#build-a-cagra-index) above: + +.. 
code-block:: c++ + + #include + #include + + using namespace raft::neighbors; + + raft::neighbors::cagra::serialize(res, "cagra_serialized.dat", index, false); + + auto index_deser = raft::neighbors::cagra::deserialize(res, "cagra_serialized.dat"); + index_deser.update_dataset(dataset); + +Filtering +--------- + +As of RAFT 23.10, support for pre-filtering of neighbors has been added to ANN index. This search feature can enable multiple use-cases, such as filtering a vector based on it's attributes (hybrid searches), the removal of vectors already added to the index, or the control of access in searches for security purposes. +The filtering is available through the `search_with_filtering()` function of the ANN index, and is done by applying a predicate function on the GPU, which usually have the signature `(uint32_t query_ix, uint32_t sample_ix) -> bool`. + +One of the most commonly used mechanism for filtering is the bitset: the bitset is a data structure that allows to test the presence of a value in a set through a fast lookup, and is implemented as a bit array so that every element contains a `0` or a `1` (respectively `false` and `true` in boolean logic). RAFT provides a `raft::core::bitset` class that can be used to create and manipulate bitsets on the GPU, and a `raft::core::bitset_view` class that can be used to pass bitsets to filtering functions. + +The following example demonstrates how to use the filtering API (assume the CAGRA index is built using the code from [Step 4](#build-a-cagra-index) above: + +.. code-block:: c++ + + #include + #include + + using namespace raft::neighbors; + + cagra::search_params search_params; + + // create a bitset to filter the search + auto removed_indices = raft::make_device_vector(res, n_removed_indices); + raft::core::bitset removed_indices_bitset( + res, removed_indices.view(), dataset.extent(0)); + + // ... Populate the bitset ... + + // search K nearest neighbours according to a bitset filter + auto neighbors = raft::make_device_matrix(res, n_queries, k); + auto distances = raft::make_device_matrix(res, n_queries, k); + cagra::search_with_filtering(res, search_params, index, queries, neighbors, distances, + filtering::bitset_filter(removed_indices_bitset.view())); + + +Stream pools +------------ + +Within each CPU thread, CUDA uses `streams` to submit asynchronous work. You can think of a stream as a queue. Each stream can submit work to the GPU independently of other streams but work submitted within each stream is queued and executed in the order in which it was submitted. Similar to how we can use thread pools to bound the parallelism of CPU threads, we can use CUDA stream pools to bound the amount of concurrent asynchronous work that can be scheduled on a GPU. Each instance of `device_resources` has a main stream, but can also create a stream pool. For a single CPU thread, multiple different instances of `device_resources` can be created with different main streams and used to invoke a series of RAFT functions concurrently on the same or different GPU devices, so long as the target devices have available resources to perform the work. Once a device is saturated, queued work on streams will be scheduled and wait for a chance to do more work. During this time the streams are waiting, the CPU thread will still continue its own execution asynchronously unless `sync_stream_pool()` is called, causing the thread to block and wait for the thread pools to complete. 
+ +Also, beware that before splitting GPU work onto multiple different concurrent streams, it can often be important to wait for the main stream in the `device_resources`. This can be done with `wait_stream_pool_on_stream()`. + +To summarize, if wanting to execute multiple different streams in parallel, we would often use a stream pool like this: + +.. code-block:: c++ + + #include + + #include + #include + + int n_streams = 5; + + rmm::cuda_stream stream; + std::shared_ptr stream_pool(5) + raft::device_resources res(stream.view(), stream_pool); + + // Submit some work on the main stream... + + res.wait_stream_pool_on_stream() + for(int i = 0; i < n_streams; ++i) { + rmm::cuda_stream_view stream_from_pool = res.get_next_usable_stream(); + raft::device_resources pool_res(stream_from_pool); + // Submit some work with pool_res... + } + + res.sync_stream_pool(); + +Device resources manager +------------------------ + +In multi-threaded applications, it is often useful to create a set of +`raft::device_resources` objects on startup to avoid the overhead of +re-initializing underlying resources every time a `raft::device_resources` object +is needed. To help simplify this common initialization logic, RAFT +provides a `raft::device_resources_manager` to handle this for downstream +applications. On startup, the application can specify certain limits on the +total resource consumption of the `raft::device_resources` objects that will be +generated: + +.. code-block:: c++ + + #include + + void initialize_application() { + // Set the total number of CUDA streams to use on each GPU across all CPU + // threads. If this method is not called, the default stream per thread + // will be used. + raft::device_resources_manager::set_streams_per_device(16); + + // Create a memory pool with given max size in bytes. Passing std::nullopt will allow + // the pool to grow to the available memory of the device. + raft::device_resources_manager::set_max_mem_pool_size(std::nullopt); + + // Set the initial size of the memory pool in bytes. + raft::device_resources_manager::set_init_mem_pool_size(16000000); + + // If neither of the above methods are called, no memory pool will be used + } + +While this example shows some commonly used settings, +`raft::device_resources_manager` provides support for several other +resource options and constraints, including options to initialize entire +stream pools that can be used by an individual `raft::device_resources` object. After +this initialization method is called, the following function could be called +from any CPU thread: + +.. code-block:: c++ + + void foo() { + raft::device_resources const& res = raft::device_resources_manager::get_device_resources(); + // Submit some work with res + res.sync_stream(); + } + +If any `raft::device_resources_manager` setters are called _after_ the first +call to `raft::device_resources_manager::get_device_resources()`, these new +settings are ignored, and a warning will be logged. If a thread calls +`raft::device_resources_manager::get_device_resources()` multiple times, it is +guaranteed to access the same underlying `raft::device_resources` object every +time. This can be useful for chaining work in different calls on the same +thread without keeping a persistent reference to the resources object. 
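+
+As a small usage sketch (not part of the API itself), the two snippets above can be tied together in a multi-threaded program: `initialize_application()` and `foo()` are the helper functions defined in this section, and each worker thread simply calls `foo()` to pull a pre-initialized `raft::device_resources` from the manager:
+
+.. code-block:: c++
+
+    #include <thread>
+    #include <vector>
+
+    int main() {
+      // Apply all device_resources_manager settings once, before any thread
+      // asks for resources.
+      initialize_application();
+
+      // Worker threads call foo(), which fetches its resources from the manager.
+      std::vector<std::thread> workers;
+      for (int i = 0; i < 8; ++i) {
+        workers.emplace_back([] { foo(); });
+      }
+      for (auto& t : workers) {
+        t.join();
+      }
+      return 0;
+    }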
+ +Device memory resources +----------------------- + +The RAPIDS software ecosystem makes heavy use of the [RAPIDS Memory Manager](https://github.com/rapidsai/rmm) (RMM) to enable zero-copy sharing of device memory across various GPU-enabled libraries such as PyTorch, Jax, Tensorflow, and FAISS. A really powerful feature of RMM is the ability to set a memory resource, such as a pooled memory resource that allocates a block of memory up front to speed up subsequent smaller allocations, and have all the libraries in the GPU ecosystem recognize and use that same memory resource for all of their memory allocations. + +As an example, the following code snippet creates a `pool_memory_resource` and sets it as the default memory resource, which means all other libraries that use RMM will now allocate their device memory from this same pool: + +.. code-block:: c++ + + #include + + rmm::mr::cuda_memory_resource cuda_mr; + // Construct a resource that uses a coalescing best-fit pool allocator + // set the initial size to half of the free device memory + auto init_size = rmm::percent_of_free_device_memory(50); + rmm::mr::pool_memory_resource pool_mr{&cuda_mr, init_size}; + rmm::mr::set_current_device_resource(&pool_mr); // Updates the current device resource pointer to `pool_mr` + +The `raft::device_resources` object will now also use the `rmm::current_device_resource`. This isn't limited to C++, however. Often a user will be interacting with PyTorch, RAPIDS, or Tensorflow through Python and so they can set and use RMM's `current_device_resource` [right in Python](https://github.com/rapidsai/rmm#using-rmm-in-python-code). + +Workspace memory resource +------------------------- + +As mentioned above, `raft::device_resources` will use `rmm::current_device_resource` by default for all memory allocations. However, there are times when a particular algorithm might benefit from using a different memory resource such as a `managed_memory_resource`, which creates a unified memory space between device and host memory, paging memory in and out of device as needed. Most of RAFT's algorithms allocate temporary memory as needed to perform their computations and we can control the memory resource used for these temporary allocations through the `workspace_resource` in the `raft::device_resources` instance. + +For some applications, the `managed_memory_resource`, can enable a memory space that is larger than the GPU, thus allowing a natural spilling to host memory when needed. This isn't always the best way to use managed memory, though, as it can quickly lead to thrashing and severely impact performance. Still, when it can be used, it provides a very powerful tool that can also avoid out of memory errors when enough host memory is available. + +The following creates a managed memory allocator and set it as the `workspace_resource` of the `raft::device_resources` instance: + +.. code-block:: c++ + + #include + #include + + std::shared_ptr managed_resource; + raft::device_resource res(managed_resource);``` + +The `workspace_resource` uses an `rmm::mr::limiting_resource_adaptor`, which limits the total amount of allocation possible. This allows RAFT algorithms to work within the confines of the memory constraints imposed by the user so that things like batch sizes can be automatically set to reasonable values without exceeding the allotted memory. By default, this limit restricts the memory allocation space for temporary workspace buffers to the memory available on the device. 
+
+The below example specifies the total number of bytes that RAFT can use for temporary workspace allocations to 3GB:
+
+.. code-block:: c++
+
+    #include <raft/core/device_resources.hpp>
+    #include <rmm/mr/device/managed_memory_resource.hpp>
+
+    #include <memory>
+    #include <optional>
+
+    // Use a managed (unified) memory resource for the temporary workspace and
+    // cap workspace allocations at 3 GiB, written out as an explicit byte count.
+    auto managed_resource = std::make_shared<rmm::mr::managed_memory_resource>();
+    raft::device_resources res(managed_resource, std::make_optional<std::size_t>(3ULL * 1024 * 1024 * 1024));
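+
+If hard-coding the byte count is undesirable, the limit can also be derived from the device itself. The sketch below reuses the `rmm::percent_of_free_device_memory` helper from the pooling example above to cap the workspace at half of the currently free device memory; the constructor call simply mirrors the snippet above, and the helper is assumed to be available from `<rmm/cuda_device.hpp>`:
+
+.. code-block:: c++
+
+    #include <raft/core/device_resources.hpp>
+    #include <rmm/cuda_device.hpp>
+    #include <rmm/mr/device/managed_memory_resource.hpp>
+
+    #include <memory>
+    #include <optional>
+
+    auto managed_resource = std::make_shared<rmm::mr::managed_memory_resource>();
+
+    // Cap temporary workspace allocations at 50% of the currently free device memory.
+    auto workspace_limit = rmm::percent_of_free_device_memory(50);
+    raft::device_resources res(managed_resource, std::make_optional(workspace_limit));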