From 499635a19013319367358dcf2a0211d30073ebd0 Mon Sep 17 00:00:00 2001
From: Daniel Han
Date: Tue, 2 Jul 2024 22:51:01 -0700
Subject: [PATCH] Gemma2 (#709)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Update mapper.py
* Update loader.py
* Update llama.py
* Update tokenizer_utils.py
* info
* edits
* Create chat template
* Fix tokenizer
* Update tokenizer_utils.py
* fix case where gguf saving fails due to first_conversion dtype (#630)
* Support revision parameter in FastLanguageModel.from_pretrained (#629)
* support `revision` parameter
* match unsloth formatting of named parameters
* clears any selected_adapters before calling internal_model.save_pretrained (#609)
* Update __init__.py (#602)

Check for incompatible modules before importing unsloth

* Fixed unsloth/tokenizer_utils.py for chat training (#604)
* Add GGML saving option to Unsloth for easier Ollama model creation and testing. (#345)
* Add save to llama.cpp GGML to save.py.
* Fix conversion command and path of convert to GGML function.
* Add autosaving lora to the GGML function
* Create lora save function for conversion to GGML
* Test fix #2 for saving lora
* Test fix #3 to save the lora adapters to convert to GGML
* Remove unwanted tokenizer saving for conversion to ggml and added a few print statements.
* Needed tokenizer for saving, added it back, also made it more unslothy style by having positional arguments, and added a few messages.
* Positional arguments didn't work out, so reverted to older version of the code, and added a few comments.
* Test fix 1 for arch
* Test fix 2 for new Mistral error.
* Test fix 3
* Revert to old version for testing.
* Upload issue test fix 1
* Fix 2 uploading ggml
* Positional args added.
* Temporarily remove positional args
* Fix upload again!!!
* Add print statements and fix link
* Make the calling name better
* Create local saving for GGML
* Add choosing directory to save local GGML.
* Fix lil variable error in the save_to_custom_dir func
* docs: Add LoraConfig parameters documentation (#619)
* llama.cpp failing (#371)

llama.cpp is failing to generate quantized versions of the trained models.

Error:
```bash
You might have to compile llama.cpp yourself, then run this again.
You do not need to close this Python program. Run the following commands in a new terminal:
You must run this in the same folder as you're saving your model.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make clean && LLAMA_CUDA=1 make all -j
Once that's done, redo the quantization.
```
But when I clone it with `--recursive` it works.

Co-authored-by: Daniel Han

* fix libcuda_dirs import for triton 3.0 (#227)
* fix libcuda_dirs import for triton 3.0
* Update __init__.py
* Update __init__.py

---------

Co-authored-by: Daniel Han

* Update save.py
* Update __init__.py
* Update fast_lora.py
* Update save.py
* Update save.py
* Update save.py
* Update loader.py
* Update save.py
* Update save.py
* quantize now llama-quantize
* Update chat_templates.py
* Update loader.py
* Update mapper.py
* Update __init__.py
* embedding size
* Update qwen2.py
* docs
* Update README.md
* Update qwen2.py
* README: Fix minor typo. (#559)
* README: Fix minor typo.

One-character typo fix while reading.
* Update README.md

---------

Co-authored-by: Daniel Han

* Update mistral.py
* Update qwen2.py
* Update qwen2.py
* Update qwen2.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update README.md
* FastMistralModel
* Update mistral.py
* Update mistral.py
* Update mistral.py
* Update mistral.py
* Update mistral.py
* Auto check rope scaling
* Update llama.py
* Update llama.py
* Update llama.py
* GPU support
* Typo
* Update gemma.py
* gpu
* Multiple GGUF saving
* Update save.py
* Update save.py
* check PEFT and base
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update chat_templates.py
* Fix breaking bug in save.py with interpreting quantization_method as a string when saving to gguf (#651)
* Nightly (#649)
* Update llama.py
* offload
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* continued pretraining trainer
* Update trainer.py
* Update trainer.py
* Update trainer.py
* Update trainer.py
* is_bfloat16_supported
* Update __init__.py
* Update README.md
* Update llama.py
* is_bfloat16_supported
* Update __init__.py
* Mistral v3
* Phi 3 medium
* Update chat_templates.py
* Update chat_templates.py
* Phi-3
* Update save.py
* Update README.md

Mistral v3 to Mistral v0.3

* Untrained tokens
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update llama.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update save.py
* Update save.py
* Update save.py
* checkpoint
* Update _utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update llama.py
* accelerate
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update tokenizer_utils.py
* train_dataloader
* Update llama.py
* Update llama.py
* Update llama.py
* use_fast_convert
* Update save.py
* Update save.py
* Update save.py
* Update save.py
* remove_special_tokens
* Ollama
* Update chat_templates.py
* Update chat_templates.py
* Update chat_templates.py
* Update llama.py
* Update chat_templates.py
* Support bfloat16 GGUF
* Update save.py
* Update llama.py
* fast_forward_inference
---------

Co-authored-by: Michael Han <107991372+shimmyshimmer@users.noreply.github.com>
Co-authored-by: Eliot Hall <60240707+chrehall68@users.noreply.github.com>
Co-authored-by: Rickard Edén
Co-authored-by: XiaoYang
Co-authored-by: Oseltamivir <58582368+Oseltamivir@users.noreply.github.com>
Co-authored-by: mahiatlinux <110882203+mahiatlinux@users.noreply.github.com>
Co-authored-by: Sébastien De Greef
Co-authored-by: Alberto Ferrer
Co-authored-by: Thomas Viehmann
Co-authored-by: Walter Korman

* Fix bug in save.py with interpreting quantization_method as a string that prevents GGUF from saving
* Implemented better list management and then forgot to actually call the new list variable, fixed
* Check type of given quantization method and return type error if not list or string
* Update save.py

---------

Co-authored-by: Daniel Han
Co-authored-by: Michael Han <107991372+shimmyshimmer@users.noreply.github.com>
Co-authored-by: Eliot Hall <60240707+chrehall68@users.noreply.github.com>
Co-authored-by: Rickard Edén
Co-authored-by: XiaoYang
Co-authored-by: Oseltamivir <58582368+Oseltamivir@users.noreply.github.com>
Co-authored-by: mahiatlinux <110882203+mahiatlinux@users.noreply.github.com>
Co-authored-by: Sébastien De Greef
Co-authored-by: Alberto Ferrer
Co-authored-by: Thomas Viehmann
Co-authored-by: Walter Korman

* Revert "Fix breaking bug in save.py with interpreting quantization_method as …" (#652)

This reverts commit 30605dec2322435eec9753c7f566a0ff610ab52c.

* Revert "Revert "Fix breaking bug in save.py with interpreting quantization_me…" (#653)

This reverts commit e2b2083b621208b15923595cd7f509584ff566bc.

* Update llama.py
* peft
* patch
* Update loader.py
* retrain
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* offload
* Update llama.py
* Create a starter script for command-line training to integrate in ML ops pipelines. (#623)
* Update chat_templates.py
* Ollama
* Update chat_templates.py
* Update chat_templates.py
* Update chat_templates.py
* Update chat_templates.py
* Update chat_templates.py
* Update chat_templates.py
* Update chat_templates.py
* Update chat_templates.py
* Update chat_templates.py
* Update chat_templates.py
* Ollama
* Update chat_templates.py
* ollama
* Update mapper.py
* Update chat_templates.py
* Update save.py
* Update save.py
* Update save.py
* Update save.py
* Update save.py
* Update save.py
* Update save.py
* Update chat_templates.py
* Update chat_templates.py
* Update chat_templates.py
* Update chat_templates.py
* Update llama.py
* Fixes
* clearer messages
* Update tokenizer_utils.py
* Update tokenizer_utils.py
* Update llama.py
* Update llama.py
* Update llama.py
* log
* Update __init__.py
* Update llama.py
* Update __init__.py
* Create Merge.png
* Create ollama.png
* Gemma2
* Update llama.py
* Update loader.py
* Update pyproject.toml
* Update pyproject.toml
* Update llama.py
* Update llama.py
* Update llama.py
* Update llama.py
* Update _utils.py
* Revert Gemma2
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update rms_layernorm.py
* Update gemma2.py
* logit softcapping
* Update cross_entropy_loss.py
* Update llama.py
* Update llama.py
* Update gemma2.py
* Update gemma2.py
* Update cross_entropy_loss.py
* Update llama.py
* Update llama.py
* Update cross_entropy_loss.py
* Update cross_entropy_loss.py
* Update llama.py
* Update cross_entropy_loss.py
* Update cross_entropy_loss.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update llama.py
* Update gemma2.py
* Update llama.py
* Update llama.py
* Update gemma2.py
* Update gemma2.py
* Update llama.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update gemma2.py
* Update _utils.py
* Update _utils.py
* Update gemma2.py
* compile flags
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update _utils.py
* Update gemma2.py
* Update gemma2.py
* fixes
* Update _utils.py
* Fix generation
* Update llama.py
* Update llama.py
* Update _utils.py
* Update _utils.py
* Update _utils.py
* pad token
* Update gemma2.py
* pad token
* Update _utils.py
* Update llama.py
* Update gemma2.py
* edit warning
* Update tokenizer_utils.py

---------

Co-authored-by: Eliot Hall <60240707+chrehall68@users.noreply.github.com>
Co-authored-by: Rickard Edén
Co-authored-by: XiaoYang
Co-authored-by: Oseltamivir <58582368+Oseltamivir@users.noreply.github.com>
Co-authored-by: mahiatlinux <110882203+mahiatlinux@users.noreply.github.com>
Co-authored-by: Sébastien De Greef
Co-authored-by: Alberto Ferrer
Co-authored-by: Thomas Viehmann
Co-authored-by: Walter Korman
Co-authored-by: ArcadaLabs-Jason <52756218+ArcadaLabs-Jason@users.noreply.github.com>
Co-authored-by: Michael Han <107991372+shimmyshimmer@users.noreply.github.com>
---
 images/Merge.png                      | Bin 0 -> 31406 bytes
 images/ollama.png                     | Bin 0 -> 67156 bytes
 pyproject.toml                        |   6 +-
 unsloth/kernels/cross_entropy_loss.py | 102 +++--
 unsloth/kernels/geglu.py              |   4 +-
 unsloth/kernels/rms_layernorm.py      |   6 +-
 unsloth/kernels/swiglu.py             |   2 +-
 unsloth/kernels/utils.py              |   8 +-
 unsloth/models/_utils.py              |  49 ++-
 unsloth/models/gemma.py               |   2 +
 unsloth/models/gemma2.py              | 538 ++++++++++++++++++++++++++
 unsloth/models/llama.py               |  81 +++-
 unsloth/models/loader.py              |  14 +-
 unsloth/models/mapper.py              |   8 +
 unsloth/models/mistral.py             |   3 +-
 unsloth/models/qwen2.py               |   1 +
 unsloth/tokenizer_utils.py            |   8 +-
 17 files changed, 772 insertions(+), 60 deletions(-)
 create mode 100644 images/Merge.png
 create mode 100644 images/ollama.png
 create mode 100644 unsloth/models/gemma2.py

diff --git a/images/Merge.png b/images/Merge.png
new file mode 100644
index 0000000000000000000000000000000000000000..a2df04874bfc879182cb66c789341d49700227ea
GIT binary patch
literal 31406
[base85-encoded binary PNG data omitted]
z`->#E=mswtFfC7a%0qI*Ll52yJ6w1Fl09+I(etDB+E;Exih(Z%dzMAoBA-?TN4N_{ z(uFMPKk6DOl-rlOlEiYiM35Wp#fFNe<7TK-^@y;g>^!Pr#-M&e)WyDieQ?n>ftY2Do?r(j-1Z88~Eya!p19(XUaw8`; z;`K4u8Eu4O4bl!`;_)8#R>ZJ(>!6_K#_uYMX5%d$C?G%R^)pZ1F{8X>QS>z4xu+ky z8PBg)>tmg=_onaJB1lo$P@R{Pu*&VWE=Qe zZWzmFavc!dn!H==1;7@vtvLb+owTQYJO*gK+eX-PW` zk)CfoZynCp$!`!{YCZA6iEuVeGf@c9MT0RjvdlPxQ$Q)pcl}sVg0S7c2u*N|TF4C*Yd>ybXdXFBdY0*4!BEhB0^XTaO;zqJ zUebBGV9(k2ac7V)@AzVskCXF>w|b#9*vjR_$R6xQlAq<{XEFKShFEWIlj9I?{r=!F ziNUHyO@T2o3QQDXiORkY(UZ5mp_Y!b)w?3i=jiv7$+>vZUu(UBPId(U$sx!{gzcMW zX1U?;Xd~C`#cizB=pUJK?8#hfmV1OdBhP;4j?nLYV%SZ>T7A%y8Pgm~>dA_3<@xj) zbs09Qf%xGd!t}VH`e71w{n-cSO{H&+WuL>ssLJNv6n&Uem-xNN!94!_D>^igtbN-z@u=8J>MG{5YQKYNmNQVQ1N_%1(ZNoLD)Y znt6~PRcKY^>fBeHcCI;xA# zS6vGNn4xCFLFq<6Maqu&=jx~A5TYd5+? zEql|*+88u#xzq0jg0G!jg*q@qG8`nN7`LzKy_c5g{S|6q@ya0w)PkMPQ25&8I(KZ| zU76kb7}r=NHYI1f)weEayj{>8^?o@)v(*9rNANi`NYRLt4d8zO`l$Esm8BYOoICa| zhR?W47sKOhHe=Z~%Ytd?{h6IqvWR>AS=7a{Ss_&+jqTqX@-4DW`%4aI)YN~wiqYm> z(<*+95;_7#8VC$lMeA!=(M= z0Ez=&{Qq(F-tkob|Nnn^opMld$V!D{Z`lnin{3C-%1RuP8IF9B#S5}oy~({)r-C5*qY*m1{7m|V%naF5@@H~XR+eHS6LTnzKtHFC2jA2XaJ zml{zUEJkowN4@SM{h&>OC>3M3%Q%(}GSRy^dj7C8huemoi0rj%itj^;8=H*Z*I48? z=6EsA4gExwmKwI|wGw|!+hd9a$62>_50kB>29CqJ4RTp=!RmEq0L{123K5R=DnGai zyXzHTJjXzFj1i5=)Q+)Mq)tmqV=8K1ep7vJE<~ui<6(o=>XuMt3Nc15oVpPwg`aab zZyc!Lyoo!^CUh??c~-~LwZs4?6bE$u1p^S@DwTnTA+WosXU27`RCds({mbnpJ2RU# z0)f|FE~^kfu9v@1dzp2R`m$r0Ca4p7tI{&7?@a`;BJ9dJ!<%;k*T)}|c7rk*h&N;9 z1lnE=pF@qpf?i>pvq7_5u)AfK5im9?Q-*SS7CO&H)LYM7;gJ4?i;=b8%z$B>Qw*8O zxtcA=85yRiJi8DIxstP4Yx}-EhGm3BL82uMLf+aVn)W0MkK|&o*FVZKtZ7fQVad() zZTlJO({$&nH9%!JHLmVv)Iig6e53oXvVu2U~JW^3Pooi7lDpZ`NxYcppGO_sa23dS< zSs@$XIa#wLZ2&n!scSoK>9dbDii&Tgs~SN=cbl;9Ahvh0oQ7Vc6estPZ1?`08n01s zPkbqV*m9HOv-{=;c{v}=v*t^_=c;%hPA&(&?kVv`%uHN@R;m0|r`vPE)67X8sz&wq zRaPPJo*U0920v%$T9;T`TQ7(%=1e2@N~Jhn+&vXYAEz=et6Xn6#z|~m9D-1#_Ffn79i)AakVw3c3aA<31 z{T3&-Mhb_c_UHnloU864(o1#7m4VHtSt>fuqByD-Z~2E`@yZgtW}%0x5xeVraBX$A z&wLBn5U0Nn4u6=|DYLB?K%7sF?xenVPgM*Y{7bl8zGKMS>n6)vDmm1bewnh zqQ+x*e)|uw4n!|qA(WMRbGAY?0~D;mpm>Ltz)@Sh$*dtUgp4miAW{Ry!-$fOg(&LYhgdQ`{M`Ni5j1N9pdWefTqH%@z-L( zPu~&aIWOFDgY4@gR^N`@B(ThZJ`XQLUn!7IV24sEGNF^Zdl$kkpK^(8K{eVPhW3^7 z#jd%K4i8~kR&2Jpi{6?!-?-Xog{{a+C&5I;kU9DTW$%O-mX?-28BZE$lt2RehkS^k zUGRMF4Frku{Pfo3!+~ygBqHndzPn;1C7mHC3#;@NY%0^PcV_tA@cH0rz%^1_19S9m zsk1kmZ&PwbGPdsT?9M8;xRbG{VmrS+jn~*#5-8wMoc#7s8YyhoAN=-N00mRaz*} zoG^VjxK=9W(_j0DS^K;9)%cJMhQqCu>`4+QH#~er-SNYH04&bUbCfgw;<^L`^l5%= zO{z2)AU#8TtEqB245mmPR1*^uU*tARNJymcGFAW;!z5NoXg6Ac^+uJ>L;{vf%4^%V zRiEZ0U=o|Tjm|`xSX@fGUBT`}pj_7-31YH+E6*9p7*BX_@pCvOlgwu{vOlXtx3%R# zqnV}7HG*xP*W>@hTl91LC^R$;8&+yu?k%5>QR0SgpEv?x<}ylEsWJ;QZH%8v5&m!X z9q7cjFWys+4B58zw=Q>faCp6SH2d$}e#oA&tK5YfJtP@|&Z)ZCD((77P_0q2`HW=AQx}Is&=HqO*RN|xkJuCU__C@%b4x(v@_pX5X zR15g#E5dfVbTWWz2(8XF8J~s15Ehg;rxUJJr;6t?Gx98Nu!L9}A%!O7*iG|?E0K}4 z$f)O}i?wf-gLKlI3v6V^mfW=X3@!(pvu(O=GvZAnd%ZEpKpyOBFfK2Pv(%SdrvtPc zeO}u(Z`wg3<%<>!^%HZzz91B~gZulltx7>ie`V__HHjA*`9t|t2rcX?@rIs_YF1jB z$L>0kv%x(7wZQ~(Hk*Lbr~Nb>H1#Hb`-?*089luOuXi%?aRd2=LmRQ9Qv>!FA-cuG ziBYGNGNa(A(*_eMW?RReIRi_>RAd(M|%AwWB3G!pjddZ^B`FB<)U%DK$1a>&h( z%?FmJE?YQQ%C3PN%1`RFo1HjIE6Sy={+KPtwCi>&4V)q4VhtkJPR2YhH2U1=sFd@t z;WCd5a1P+xyR-SXWe3jjns+QE#z(}!lMLkNoDL&dbl>D_DVZ91T&ADrIXd9|j5gVI ze6Wiel4Kt)rzaetDJe5{S$f2lZ^zl7=GxiW93A^(Om7>o<1#wUx^2z=i6-^N>MIut z)%MR*B|tIH!%|~cfwj)X-JS6IqVGu!)mh91Qza;ahOA9^nO%H%U8-S z6Iq~)b~RIa_i%BfMxMYQs(Ujuy-RS;%flb_Ixeo3dGT{X>gY6G ztk#81#9k-o_|(H_zkM5Rj7*oNa5&mx$4nO^GcS`wMHp58sO$A=(?T$EwrGK_sMA~k zf8Yqa>ycytHpG2U%xSh6SG`N&Uaw}hE&3rWMPw=APK#}H(?-+MbsD5U(6jZV;fH6$ zkRH#6KX|HfoQmumjwxQC?}+a+J70ngH;GmQ&n^z%h(}zSIdiBhOu~vy#E$@FmX@3E 
zzyno~Z?9$xp+_NzSHw4j31op7q+0D)K_UF)nDXPmealaCR_cvT4`zD*8zK?RwtNw6 zW~WFcm6(1l^=T!Z<$HmmWZesvqT4qM83{D*pOBsycP73tk_s zs{{BA&ix$1Ux?rUO}v0)J-QsrW_1IV;5K?58fu1EwecSXo0|hZxMA_g#UWKL#>jqQ zp;_0wW#Sd>vl7jjb?QV#avPAV`COwz zG-bubouk*;sn5RgZbA00wj3nZJgHKRLhQ*E^t-h|GE1>#Sm8sbm@>Mvp(x;sKLc8m zi%g$4{7H%pXy;z1o1|;HdsurDoHtf_LneaJ;pRUr zK-)eDX#eMb{JS+&9d4vs{*B+1vR)NI+|Ox;8+F?`Vk3x?7HXFM9LlPYq}gUY-NK<( z-fP8yyOta_>E&cmW4evFEoX#s-oYzeGGzZMLZa;bp=JNzbb@_>G<4`H=qT>um9!>L z7a~>I7l$X)P4LSBOKgb$RvU6LWR~pb>w0f=2rAmF$|<*nTKGs$YEhIbONeeN9{EhT zh>YydUA;=PTaVr6^mk6<$JfS4d6)if_ss8hd$g;c%+%$co^@I!)rHC4-=^W5No zrRfyxXI-?l?_b9v6I~E3`t_T(f{|HfIO~p)YfMW-eRVJ4(?@QozGR}xE$;No7A>r` zHCl0fY+~~9a3e}bM>i(8?115JG-P3AZ6D{W@}jl`XLSKtz2jV3|A(XoMim4mdZ`~kEIeuw};mpzS6Cwm1I`h-TtC|iz&Bx&3 zFHQe*UKA+B&?p!<&QnUByX|(aCdA#h#Pv=|Z%)RvY9Y3o{rNKG@ZlNg-D_iQZ-ML< zx-?oR9lDWpZi>wav^6-hM53E+}EMjVL8gOPt5#8e0mFk1*01Zvc0_~_z0y2AEKkRFIiu?dOn zh#TVcyr^)gE=JNaPy-ywefa45edT3IF3T@)#lef$wJDY^7Gc@`E_fgQOQQX+>e z_ZHes?lz19raa7A_Fxjs6ESE~elf93Wb$r@(D&=!_hOcjXWXfLK%Hk%#q=7*NJjqe zf*%)#^mq&yz1&WkZ|#wV6Xp4D+avaFj@jkWl8{d^{RhA~QFfVYNR8`Eq&DX)!qZ8) zie=50*Phwh#nwqgxrXYS8_#8DsXdchBG;1r7Db?&MFS#j;}8mN8PT7Ve9U=Rg=LLC zH$w^a;@HD%HV(xhre9Uq{rCYccK{&}fyZaK8m&YEohM)M^MEctpiC9L`FaO#bd`m+ zeFwX9jpnF>tWIxBj7rP-i*(Sz$}rs1QjZ*`~7F`5{y-MOX`i*BZ|p z4(8z-G497_&V#j2vP$MsC8tQ$e&btE+aPzE{<9ADA)XQ{lAmRqccbRW*1sEcW5L-zDbJAO7db;kK>IMeVg4-%w@JL4C zjKD5jSddu86Tf}&LV1p&RR`ji^_nDnlfO_TH~M*G;`k<#4Si zY6|4EX)3a+!dqihS1iEI|4Q+tn&A|=bCJ!9?rWX~Zne!TXGX9}8aw0-nU7-kTip;_ zm-CWxfsg+H-D#+QSz%ihbX_*{qLV){g?jtkQYEn>wFOVz?81;#)$Q$0k(<|gU?@!> z*(PK;w50CUH;3G$qZ_gfJooI8?Pd0Ff!qAmy;;LSbdPL4wu>s;ILwG9FNVy8aIm$7*@e^$6TX^nrSf_Wq|Fg% zB99?5>J99GYup5udg~y9=`ETDnCV1k23^M#nQ_>X>3R-N4R4?a*I^wczyK-tiAtd4 zwXBdGp?4`n@J?=nd^2*yJYLNjSF57|#HYY@#U>faj%aeU-s3n%k-Moo50dL* z9E1DOxQsS~Pg*ZOU4vsCXDbOGGeHGhCyv6|=GqXyuEfnfad&C+w%-~8*#rr#@&ino z1<8I)9l8>&qE40U>LI-bQt|@FzCEO7TB4Mj+6pybr@ge0i3T=9mvE&5__B}ga~4~B zr&eQ>=P9-cu4kzsnZZB1_dQ;8rt;tqykI+N{8w*F0QW$7m5T_TwdsvF@nJM&-zZ1L z{&H9aFv||ok=7sAIyLx|p~~=5j)YQ_?^fx^uS&%f!4i@hIjksXev~zG)fm@;gx^W7 z)zqDLg5O!?zxTV*xm&mr@33O>7BaGH>TocPIaS2v)xC;!$nq`Ce{*dsgsjva?a)$R zLax6fNZA6X%}?*yYC%-N0=Ifg*rL>d*iRQ$S|~Y6ABl$4kD3w$lw&rxbXrF6U<`$R zKvbHJ71cjCa&~DcMJ(77qfgd8Lp!u&Hj&>31NEV+n;V6#zVZTS_Jr(zJ6vmN*jt1d zj_T`e`ygMc&!YpRt&ub*vM(vO4|NAlG)L_L`^?3=8_qi~({=Dkw##UJIcazf`Y0a{b&T6b^Hn>C?Vj`;zFn;}(&C?b!n z`?0@_hFoIZK5Y@`i@=lOPiq58CZ4>HWCh zxqB;3a_=(lZ41QS!@N6gG$egArvI|WbVVMOkMg0>BlC-4xqb62>D&?r_pQs*TTCHk zmgPi^DR%FNcW4gVBfLR=vE!kptUuA)19yj zdA2gQ#H-ZmR&{stn1#MAJm@g!e#Ec7Vb!s z3Oy(r$C*20%eK@p*A`|f$KK|b(u#TsTr9%wEWc>z5MT72PHwst?)==0uid|mW|xR1 z&;z%Vf(L2}V--OJk&lEksshV+tj^}m)HcU9>npG5vN%(ih z_qO=ad1|UmP*0G1NKVallE9yye_&E^?bq{9PQ6O}-rAKtU}{V7r9XA7D6i%(vn4E) zd>W7+>Ce_}2{kp#{vfML(`bRPySE2KExx|AI`#}}x(-mGz_Y1SpDyd+Ij`ML z=x`$!w5e$9U#a2PHhK5+=CM9ee14MQp+rA0dXK&C0{7YB`Z>AC)>yNxyBCFkYB0P7 zy{losJx@nnR4QzP=d2ze?sOPqWf-uNQR zk*}c)f-`CfM_$*pY#P&pQs8xC9~Ciobm{!rHioNWGj)X)@HM5onpOJG_vv`owz1@l zx|v%dc;x`!6OvgrmgFd)t}9*$dGl>;TGU0hud~wdZR)|+g>3C|;%C>X20h&lJ3G7N z;o;#cF@1Uh5Z$h?w`{D;w+Z~$4x(WSrPvVPPv6YT!*kKspZ#qO6?%WW?Ezv>sX(5i zGN(&az~ym$wc=JuEyWhzXkbj)1K|OeRou5%7oDpN4FXk>wXUZ%g3Yg%Rzcq`Y>r2J^f4pnou1JAc$wRtgTHU_?(x*dF3{(} zLN0pdg!;;QqMRdH0^M4RT~MA)&QEe6-oGZ)1mxTqHV)({m3dHz7VYuuS?)$A5PcWN zNcV7pTEmPxPt(T3$qEl*_r?YY9-(fW0R-J!w8e6@ZuW|n$;r6oa>RaT9L>khL5*5} z$T$`o?;fR{*0w*OKAnVw>Sr+sCbb%wZ_#|!-D2Gkh@ZUYk`1D$&szr&TL|RbkZXC1sk+kegRh&QMhy*S6E?)c z)4xrC6Kq@fn5m^p3il+NmMzT9KRlTL)qB#*F3$AdlO`?7*SZrGF_=Oe3!RkF2wbn_ z80>96Hq%QjbMPLDNiKI&9Cr66$Grj2dm!qfb)6&BG8!3mOBt0cbD{m=b!pDI?qnd5 zX0BvL22H<)}DNlocxO|VDm1pytL4umY2 
zjLY3$8dyyXpg0|BtvnC>h2l2X=EWRC0+6evE}m65OPsesO%>8E{XySoUhbyk{XL2~ z173>mP*e6hHiVqV1%Tp{==58^TNQ%5;*4_QB(ppo&Z=UA=w7
  • ~O#e=Y;t5!n&C zan07=pTy5{QwBZIM%OA#%8q<-iR2>K#8jcP?z*?2`+XFB49MQf#Ux|xm#ZJoiO_)NPdQ1*+)OSJZ>D$4~i~esdP2x2oOcZ6Op1mIQlGQyTIE6(U5} z=1o_Zle>Fif!}I+{mG`Xev_SH;r>w`-;$MK-YiARQeUsrvzl6~Z%K(S3&k%e52e)D z@(NrGkr*Ba?jb7JFIfc2AdTz?7Lzjcu78^DS9u7Tmtp6jzoVn7r@ep}e{ zvof_(A)3%EY_o57h{)JraJjch4n|2FzK!LDv8uMRgT}Lczw`7+vvyTbg!;mFo{JFN zo|ayU?5}XUc4xuKtTt;_L@ez?Kl!!nuKWuP>XKdov#oI?)0{9qJQaQAHPU5{lggF} zef@cSQdTJlvajo2kxKO24-1Vn{>eBuiMj92`vcnUUa-@UZ+7rWHIe4~+0*l{k5EZi z;M{7&v;e7iJGRrls^kPv?R!x1<+=TEaOfc5GYv*0p@%Z$B;ghPxi`TBKzN(1^74bV zxU*-{1p8Q*17S@xSJ~RB!9phXlzA4X5w6A4hGK9JWeU&rvsPEAEkefjhv!3dcb|LE zM#Q(}LH5S%t#j8dsK%goe)91)N2?Tur>D(Cp%wV?${R>IAKXXwWqsx3-cK))Lv3k@ll&M5<_mIuRai!9mt_MUlS|~!uQ7* z=6Iutn@ghj+O~pj`v?#2t1MJ7x}PA~eGD0Z@mp7qWO;%3eF*PY9p?QvVYXm*Ar9|dfr+cBanT*s6RS!3svY+# zG@rshXk1UJh>&5v(&Q>7RCit*qI$PMnC46`R zy0J)PhK-sE2@||1_(IbyWCPHxP)~h&?{9;x{jz#??S(}mMP&a+?JdpUx_1}5DjzC0 zb0!yOr>JBh_7a;4IER-`b6?gM)U6aPtFEpFPH3%ADta>{0h@2FJ1GU_OWW-kSE@&Bv@!>VtY^r6#)*Ki=2WuePNWwN8iR zfuOfgM606N*1@v%6FV?rrE)iu)8fB}_zlqlhfW?J0|>DD`1G}RFISN!n}0yAoL;mk z<8u?2uf3?~Y0#>d_sKH2qGoMc2hzB+$%_J!UY!R+yxO~{EXa89PoI$Zd9?_osJ$lm zohtfVb|$I;2DEu}R#W;wP}|zddUDUZm>+AQRr+zrnSq$p;SVV(ABFQ-^|nOZs&FX& z5XzQ-W|p=?7WRBv5Q2^s6r>>Ps27L+kgs{kYI~LI@=?k}rOX#-TvIx%7$0}aE@rDF z-?8lp>enG(`@cuVR~gHX5c^5X>}%hicWybCgt=_p-q1x{{q0o$rWK_U32EkqUpCu{ zMRg8>gF<&6#`8f`*&8L%-sjac1tjv@Z^?lT90k|YnB9JT@l@UgmcY?@_h;5Mw(a-A z&p@z@<;;e9K5{MFgZRvuo*R4Ji^!ruUh4oz*12}jyd&gE5!R(@yFWP2&P(ZDFKLNz z_^FBEXPBq%!zL5P<<3AB7}fyJaWD?Cf64rZb&-je7590%l!WH^0|T|z#RinLEwX`+ z-F6`8=yuh$i3{?ab=!l<9ry~09r(C{UAYcyc$oFb$7oRPI>BU2aMY}Q@_J=LM2Fx0 z)}`|9ctdZkR(?OEEOB6bH;(+YIX>P{jjR7csHbuI$N%2uyHMpLwlc-y-ypa7z~}UI z!Yqj<@(s7-qlk0HQ%K?u^GoZh3SA?+d*+a0*Jxbn+Nj{<^l>F$MvqC%Layt&qnAoh&1 z%!27j`a77!q~6gn0qDmEt>zH;xjHRJrQJrMqjZT&@7KN+n(8+!?1kOiAJ_j{cjQM) za*ZdG`T6Y`Z^%dEXBiDfokw|ad=+jXz-o|_Vbpytk_lU%?_H<|(ly}Zg_A$=@nNeg z;nGg|atw^WRpDN5+IDpE zHi&&yqdY8i%fkg=JL%|9Q=$qRClv<`-mV6!9FY?|Nw`R(ZcTH>m%VJ$K5S>^>`=;p zRq)a${ya}UqEr>Uv1919b5>aV6Xx4VxZiVmAN0f{L2i*0gZ_riTR_$vflkuldpJq59G-EJy zVpnG~DzKE*cw$U`b*QBo(jG%YItcb?TCJDqiW2>Qc-OSw5nTH0r};hO`Iedv8USOBi09 z>v9V}RWf|52oX`k98P(boE5Q?bZk5MmG|n8o9!fi7|+)%b#zOwDwhc&%mCZ@9kzLa zFR}p~g{}d+JRydbq~Cz)U(}eU8J5je1Bz9LLeJ#>A597GR>$szO>^F9Z5h{PStCFq zjvbH$god_Icq;N?>5J2N!mTL=frO9Wpqq;n`oL?3t^7!yZ;j>NZvnSsUiG#BP#&%w zN1!JCYHNHgKxt|En9i)~*WyiAUtWq73>hkZ`YiYC0B~Ia*6U9pn^y0mX?yPO2Q0g$ zGxnNAKYvJ&dgJ8B6=;__4*b3so&U)7b=x7=Qc6?knrAd(IY{>m=b+xQHOA|+5>Mmp#cS6o?t2W}x{+ob44c55* zvj^L6^WzQ_=MOBNz1%^tl$RM6asn&ufp_FC`9Rs=J*TyBnC%Dze58F)K()3%Ugzuf zq%?T>{dk#xq4k(IxCNfIFho;TWuNdVvs>lf)bgyR_?K`!aXR%@rK=?hS=+AjAI#I^ zk4Ki$FsudYH;x1)Ba8xV+Mb%u6&d%0Ge?Gn!~QGDBD{V+rXZm4`YBmJ|aM z(4KKJv)o%-mQ6n@P#y42h9&onzDIn90?^Z=-f?9H}sU6IVP z*4xt)jb}8Ka&wMVZKYx=s;t1(#Iaew(wK)VO~*p3!#v;JaCFS}8|`F`YF#wj>f*zi z+yU5F*iI2+Hn(Xg6-zoEW|+w_`#HGAPIMS)^O(7DbN^a<1?>n%v-MsPP~HuWyBfbZ z-#L|~f+=~Nt~pI)J?Fl58-db#w86r$M69m|b*GN)$w2*dxvu&skro3Nq)wPeMj%Bl z`pp?di_}4=-oqV=;$i)H4_&J9BbvWXz0$V#35 zciMRwTr|`6$QSko8q6WP<6`LDIvV`CL%=n*lT>^#bh&rdiH^i8!d0S%COD!v;{B^A zR8+DFIRsYcQFtUltTS_39?a5PboI0EV+RZoYgaR8?OCGjY$jXMkZNEuE?x`mA=IZCu zG@LF)f&xJyq$QZ%%=YLnpP!yh-@Cj-B+TE&SM7fv4%0qE*mu+;#NW;te!3**Y`c7# zIYc?ci_|#mlMGr_nFvH|b+eFrMsiae4ye6LyARx~Wb>t=o~jPO+kRu-BQtHUd|h>Z z_}=?xV^k>p5@kAcH!qWHP?LBPo0v;eQybKu!Ij5%LjA{0E-_7LE>ItejU%_!S zX8Sue3|jtzvn^(hft!$UjA8ffF)BlIfY5s1Ktt-$7Roh@?Ti*mELsf^>@ut_^#GDiK@5j}?O1%qA&_7EEICBOf z1yd-%K9t(R1P0czS&-g=NxHVNveD-Ovk?jK!I>r zd0-g=bgC!g7h30juBVS$0U$T^8mf9cuE>FSGY4bG16Llp#D1KO_o#^aRMQ!Skb&d# z^-;zn<~N^kDu20vih2_?9FtgKUxRvDdNC5JK`wT9gW~7|-`Q4g?1HJ(=71IJ`6!eK 
z9l7}n^upJd(1Q99-Y=Dgy6#4DYQx`QT-D)0auWJZbQ`QJESdcP|EvG`KR&faaeZ&FZvCTdl~WO&l@fyBj>$v!MuVBGIE`x3`A z$Tz4@QyPzIjo+bBR|trYshlCRCRQ6pylg>zRO5)`3ctysMeKp%9|@9B-|GPW!flb6 zn_^$I@IK~gP98&M)Ls@#g8a8{DWAB@`68hmpNt-HQa_A%+^@ep^{v8xiRjKypa z;ulKeuN~SGp71?c%FdW0K@fWx2AbVh&;$pf0*JT3;oDuG@2?!!{R2rUK6{_2`Qn8= zf&1=dh(bl#xl-eh8E8{MvPKy{1pZBC@>di8dC6ZP%2@37mTp@}`GE}VZmz|zPFt2p z#SqeCb1Fr7;q?$O>}z#eGm)1odMV8Va_lFoy=~fSRVHFYN*U-5`6jJ(5V+a&>@UT} z`lFHgMGHk|Mv03hrBNuJ6q9;@-x(CqI8Og(o-5z9OGBhq->L)*AqNkt1(em1YWMGL`ZlRt{T+GF)S$?s@r{~5j#$! zuYv^PPM(sk$o0&hq1hsXii49XEp6z>`Er2GzlP&5zRMf);?7;B_?Tcy)*9s z-||e##`Pk{a2#zreD4oct~?ywz}-UlkAFdF>B<{s-1uO+RYo{G+^ zX%WjhwzjtCK)^OUznxnpg0my}WhQ0eP(2!-Ff-8?vM}ZXR((-nCc}cx{>($`N67$Uzgw<-70WIm& zV<)4${~+fL6t7jDyHC9R{^V<#i1qs_qGFW3kE9`MN$>C{Q6Z#Jk`tZgIXQrni95xq zbmdZH)bU7K$Vx6YtN6Cv1UwP;oOJlt0`Z7_Xe(n{hqh!9MrpO{Vl^(He8e4#z9zEK?bRdf* ze_MIr5twtcZ6BpR&9e-Dy^TI3SEO@t^ojHdNgbv8g@ij?-}Sz7)Zw?^b6*Z5O4y#u zSd}*u^G+;}NeNAid*-6{4jO)DtcKcfPVIjT?-fRw(t+$_J&!{^i+=EdpMd^e0+h5U z%`taw8-a`0>{B!=_tw-AYBEp{M8NY~_z!g*N|jkRsDBBm!m9E9HDn=|=g%5!jB3xvn1zAW8gmDzQvro_qa&@E|#8 z(1H0#Ltxoi#o(%wGMnrVTR{M=O<@6jpN`a{AjB7vxUDI5y1q&;g0uWLDBv%+teIx5 zWSS7Bt(?$$0Wl`OF6_pte)2cB;&SD&Ns@Leg`SP_HQ4X`g9m>IM<*g8w#k~sOZ2Te zX@$R35GPa4otMJ9`M0Fz)32`_t|(7FZ;U`mP1dLC{sIw|ph3!c&mD*lv$l9$IuR%* z_KSG8)i6WD*bc`gY6|JUE6U2NFh!}qz(uYJ@DmPMWX)6Q#mcbv%Q(*0Cw_1J4y<7` z2;(;`Zd|0!@u)q3K}pTXh(q#L{lnQ2Gj1$hAO)zp2t8-Gb(ToZAJ>^f&)^BDiotPcWFwRs+0LW4 zxl4|Eu3>ept@)^dkl3Tn{bon$u)4!00D9_9+c8!Q9b4DG2f1yig7cmO(2nuFJ0iC& z5%kk5RaN`DN^fMf^DyFI!anLecWd-x?tgHHkguK+FNTs$HiR$T`QMAbA5pSIpl#`J zWq+DXqZ1W@4%JX!$;QfS1ek*`;vDS4(|Yy(}`1WQI42qKN&!{vT?VK2>H* z3Xpnip0GgK-0IfAk_}69GEMMp5Xwi8e~E=}ObS`~*f>z_1|5?V$Nn4mPos|jyR>A# zP75USHMV823_2F41d1LHMi9-nSNqQ0(A)qkxdgrWRn31hjm$ey_ws@W1r=;?!6_VI z$AvXtoPx05DED>~+Rs0Qm%7gbaA;2+M;H8KdTm~faA;v&I`JaDHQWq$kY3iX5pw2w zxi`O#kk#;(-Y0o_#sCfQFCzVq|NGaK7NCv8_X58Yh5d3O?t@X(9>9RV$Y$q2)$&hV z#LtdeUg7v3`3uUh6sVed!5V%$32R3%A+`rx?gS5k+f!yu&^qnG4q3#2s`2@6*-!kv zLH1jJT?n)-!l2jYqS8tyaaGI|A6%Hu8mlJWpNb(w}?X^Pu`X7?p_ zr5-hU%@H@DP;wSHB0W_jR=-&W3#LBOlnf;h-#G82oSMah*SEPMR9_J95VM6_)dCL}nUR0Zbfp4klUM z`QQ*Z@68QvOW^;1TVJ4O&2j&IasDDK4miO?et6dD1n(Z*ZpL! z1%#rg;@0U-v-=D^YKl1U894bv=_{Kev_ygUyC zP{ECF>n*5y&2cFtVb!BiKtp-!{+~?1F+2no%mpAA|M@-m%_#?6*vfqL9?Ig!nTm2w za~y@9z11=h5c{Yp7BQW&3hk2Mh0+hK+&uZWvsL`Tt>LVET*@G&COD6fq?b?IXh=is z@Z8M~_{Q_Ao>6S)0E_Q`6%)V{O(Y&^Yu2kgsXtd+un>Pjf4*ds53el$JJ&%s%XWCj z--pk}{ars93bZ0ta}OVZl`5rhp&^7FSOjQD@@#y$n;h^BPP!^)c(r!x&-fHa9ieLN zho%3ni-=f#9V2w}3Y31|5KI0U^|k6%B7`zaO$lU9i1$5huR%|HXBfoara{cntC|}h z=y3`151RLz{c$mbRLpUAh^{_l+v^aKcfI?`*CZ(3IApu|BL2WLw}|=7D9RhAOUT2wLaAM6)^BH^HiSggoM?TIEV`RN1H{l*N3aJtGo zOqiMazGM#0c;O1@1%+dfe<{WHBHzUFWL{zlJy{=ui5u0qqdZ5tWY_ z=YlIMN@lU;)Q?A@2vM%eH~x=#`SmzJ@;l2u`2!eB6Wm@cSYYC}ll@8uCDqcCR`2@$ zMthE_>^Vu^3ZW-~Opgl)S4xtu8vk2p>iKR@et2XY$MXmORg*;VfNucVQPJVq*7b03qb>Q)JQre_y?wB^Q2s31!p0e+7Xu*6xuK z(kQn->a06|9Cp_sTCeVd&|ieGk8o4x_xBH7939(AcA=;Kjn83-r=ugNS>!@j7v~3v z-$@|OQlG@x*84juby}Rj1W1t++(NmWtO>-j+d?S{d=AQXB%UAD=dJ`90To44O#HtU z9@xDPp>Q{d9DhEtNTKb+dKLEvge$K?D9n+07&{60>F|QiPXrh6TEki!@1EWG`;iQV zDcqiD6@2#9W6GwhXhtrHauQ4aV05NB2f$f0%7t=6(49ndr#zxD`qNn$7zQ>zN+SJQ z|JIlzR7i%k=9vVCmXl?3xxL}Q!e@*#+>)xV1e3vo4IrgX!0uYb*}VM*KxvgYpsl~N zF|8v|1^ZLa2fR{mNF%GX>!;W9@dT4`H|)Zv_csSB;1G)JQ7{^~;Y5p{a^ezT?})m| zY9IP0T?tixyBqz#&$9*9<-iq;wA&X@CCJP>f~@J)BhRv1hS+jksh%?eQ}#uXJFX|C^SADG}J!@<@l4>qIQ~uot4K`Quhp%eQS) z!m}L{UGt3lzF&eTi66_yiAafO-XQFFGVVIOFvp_BjCY@-vA&{#M)ie)T~H#M;3K}}*? 
zrx08E)hiQ|ha%JCPoA38YpR!eemz#pa+T|9@QVyoLwV~;;??W47yb@tN5C3%_Iv== z_QfYd*ad`F?yA*oaOrVvAzME%r(m!*Rbelj{%wR52379=bg~6-%fIigo$)W}@y1LM zoUJ+zYb3mKMvu@^h8?#|cQ$~)@!mHJF#Q7>)66FuzLTh+L1j{)UW^C`?jL)q{J#x)VpYPN+c*q?4?l-} z1r<8}e$POyCaBH163MLP;ru$d_>i>KY=I!o=uX~?=EL`TrepfK zVLLOQ$NFulA)YPG%Y^Mwg|Obrt*7VyTx|s4B1ylHhkwauHk`09OFI4du`3+k3-ujE z=ytijK0k7G$Zr@P31%4@F}*!n(78Gs-1&DU69nWa-Ca!#)yJ6~96w`|4MI%sMREez z8fiXXyoVyxdQ-ODzd+ZbetdR7)BRihJV4xHau(k-ILZ^)L2rQzoL=#p+k-9DP9M$r zSqrI|@!~n#qNTB)oN<5U?0QBlzi>DElb=g*HVg_ZwObOGI11_Vt*rZzSzDBcbHa-zHAhkh%%l)ma92<6;! ziQLV@GCrps+(17#4B^N_%rcO3P=Igy{iR}4I0W;4zW+^1crbwoYWRqn*K^&2Ed@Gs zV*2^u8~cYBp7i3;n#p~QqXO7H!gVsfUOIo5nH5=)MJ#dm`1Y+IO?<~1Q( zF}T*HGfy1PFrc4SXeKltKh;|DZ|rwwt3CWcY5=rAr08HU;FQGN$fe^2I`@Ykf#A+c z{l}%ViP?D|Y6@3Qa?6*zQu}WKy3KJIu|k)ugm{J0GrKprAojZ8d~|}R!KIq|LVJet z)D(Gir|qS)$=LvcV3crhk4~Ho47Gn9bR^Ops*WGj&YO}+_TI|uebkwYB_C8@qRk*` zf(^Wro3_wi6}mBd%8|YFcqEj0N8gsV|C{8clz*o@a0ft?UFc|*ZmO(`2+vn2q@q(V zCr0_0>K( zc}I7iLwy>99xZ$LoxOqzkU>0_Op6?DivOIz;T&78{4i2rZbaE{8Dv`Bap36ievu>GJpO#hVnblfrYJ?P?Ldc*Wf`o*bC8s&gP#`fPaY5>Ip z75PDsH`oij=0UqRF$Ohan+q*y*seoKbUybm{V-TvJ%bN}nmW){34jflZ|^(5=UG{S zWz@9)M%`EOqZHTjRnFBorbofG6C!5**?~d~V7TydY6~@)s*8*0bYeUvp{$Jd9AgRgVDxk|1PYkB0k4 z=hT>sPCA#^x|Hk(gxytaJEHR={BG%N=$A!*yW!Ga-!D5&y?v>~^x2EO4^F0^v>5N6 zE6nxiS@Kb|?a~01RoTKzMw9L=qqv`PPhS_W9ZD=dCPk$AuF9V)xQw9J3=D1oYPwDkD7$mh(^K@3rSg$>+B71iioPomvWHn)KEH{dC0=UEfN%+I?chb#D^jnGGb zzX4E=n?Za2`m3$QArDaDTJ7d-4^R{JLM15hCYB;As!~B*gKQF-8oX0BN5fX9$loE8 z>oq~2#iALhH{0!-FBV8d5Az;Be3zj93ln(N8hboeIBN+pPRKV_%wgt zx#eufh_%l)4=+l!|NX5+%o7q#&IV00+GQY5f8?zXk2EOO{QDK9(&qR{+F#?gKZ_0J z4!7-R`n*)r9r7xwAEHgtdtF?{9T<+yDOy-e$h^vxt2OrPl$ghvKb_flY|?hmEMRMA z+D^=42O>?mCZ^b@<4(m($u%A%JZ;DlntvA@($i4;ZJ+A=da+Hd_^J+H@Sl5^I<37N z_#((cd3-qEjQ@6Ws;!yTqRjQwBVA!GRXF*j6`$@Azb(Z+HRy*1kyE348#95w46!w=)lB z)s=@s*+zXPvX$&oVl0!TP$48{$~uIQB|@vMBqbx+=GAMfUQA@mlEzk{Q1+#4Eku?? z2GN2vmd0*ozB4o4_xrA|f4Z)E?&mr8S$^kt&bgm?K6zChWG7Gu3!copA;F8rnte-& zxNS-_JNTa1ylamkGj8MS*%}&%=r)v5U}kFFOX5qOf&`g;~U1p7?1kV>gR7 za;PuX!93`cre2pfH*rxhtXX6v|L#ohe^RU93k#K6K7^Y6Z4ham)uDzBicKZC2EEt# zoB~EaN9nshO>Pt8p$qmc5OD9|-MYc#*?fELr)n=GP*P0r7~-^{W}pr#o_+L}O>Co& zQBK%7rJ|3f7r{}T)kVMaH4`@?3{FuBQ3wjnXK9>`#HJ=Ke^wwVw+}n_6WoF zZ|snvRuaax= zr+xZ!S+lQg?5w>`272a&1%+vQl6Xo+MjwUZzqComl%qDlaj?AJXJuDN{WrTE9&0;f z2Yg2!z!(%q&nEVm`KD~Xcep?;Pg80d}3F1Noq(vlR{wq$#gw9j3}Rp*ZF zvXTMcsk82CJE}!@c)ozWJju_meO0xVM6gsWF>>O?C(^y8(8uJy#)T=doDkWy6@LO< zR^gTvv0`>I${}?iCb!&Bu1E|Vc!83rrHu=FGcP_a`xTv6Ywx8*JF9=_P}(=Q59b2T zd7y6+C`hFe~U`4aqdQbVp4*jH8wLr`U;}&E}>{N{!efB|+8PgMyje{>P!fy{QUfvQAZauI2mdz7a|2+!q+|i}- z5u*Th*4=OMeQP(V-*l{CVczsbWo6||U1g%VJNis{57iv zY9C*?-zU38ME<~*fWK3I__st4Q7>V|p9ac^Q}*uD(J9oe=lxRJs4VK+IMa zx>@pG8m>J#ckR&k1N-+spXodLt`YGwMI!5)R7y#@7GWB9Z8%cuMUC%$O60?xIrtES zQds0nqtfoL9`L3qRwpnM!3>dyT5sOic(a-v_)7!}Pyt~KN=KWGsHZ~+1*6_%Q zK7}%l);Tpjy{EVLgc!H{`pIMJM`RhMDR)FB8=~{VJAZGj z75LdPePh!|$K4D7%9(W$e(~^{5%cbRl-`BP`<1#2owKCtk=W&ve^}n3As26((^*Vw zyg2QRwNfvW-a3TP3UGzz{;|~#W+T%^!7&1sKh0m3$(OF*-!rGtRDJ!|e5QYn2@`0u zjqxxSZgIGJRcU%^n?{e8SPZRVb9H(;K@d4$<*ppX*G!7LM^VC>@b}vc<9@!ZkuIxy z_ALAP^XIh+nG-C>C^&+Q>ZD4&c~eZo+@7+>T4|O2V%`Hl*2?=(6kjXJYOfIO=6|<9 zN6UScmXLURbL?NTI!gRh)iGk5~ zvLA81#Xys=(&1cjxv#zbm_08v(t4Yl4ej>}?`4JSb?$GJmMm)K^wd&asijt#Psm8i zRxDrqw1Qq;1lLb;LSZI4kZ(u_;;ZjbK*EH{YbzGcu;H(AUR}%h%Y|Cs-&y7UGIyq? 
zQsxFY^SAuKSvFG0_&h0nv#-pIE)KpeKSQd1z+1DX({b=Q)A+qI@#DpzwL7+#aN7h4 zeLnA=ezTfzyA*g{=jC^6e4j}pN|Qr0^4Js=kuYu2on?LohfYr2ZOKI3YL2U4`rH~q z>&65_&8!eV(+Bo2SBt>vtAlHNgCsSAzi}AO?>cvLOx2vhp?pYimSq-CnR>J1{kEJq zN+#8a#?9Hk&Yor-7R(x;x0o3rH57^3)MD3=3S{LOFM;GUkI!P39`TbF$| zs7Vv=w*Uma_3q9WUc$cdi_b3WpLfq+xUwtxyeg(NYXvc(5R8knx-pBeLwgd&Y=g(g z#|I=owLLr(qAH-ua==jm#N1lb*w|SyefazJ>%WYRjZ33((a4JQlflaAe>KJeHtiAe zdxb}kX*}3YOfDKfB-M0ni56bTqB-mw=>jL+JUT=W5V(b^JDA~bLU4N+dX;gU2pgM zs-l}!k6#iI=mM9XONSA6{B+so&4O2!mfs5V(B~qCvVQ5!8R+u=x*cY#CRu!Mb&Iex z^DcdeD#`hsalhNf*CgNJK8s$rNSIKwt0Q}&RoDX-d(t`wbNrSb$-Ag}J$-2@JmLB4 zQ&C!KDi%p-iUN(TiwGl`w`oCC$*=hp_)3z^b%+?r!7HVyq`w3aRNnaMr9G_+IPS_d zgWI?03C}3141f%bLVrJ6nl(Qyw3E4f7eYsVO;}hs)@sp)U+IcL!bI36IgZT`($a=^ zZ}J~ojyoCpfId!;;y3q2%r)s+_FJs@g%YU4pAZf*%w%yz5#M_+L~6v%?{}C*cUk*% zo@mK81P8wirnIwryHg`6EZ1~7POAF(u^pyV#?oVA2hK44`NZz{pwiCqwJ4WETH-OZ z5&M^op4xm6414pcR*@twGa^(t!9T#>YVYaPiJyDcnY(FjV0Xj4#-`;f5Hr<+)?hBleedp1T zZMWHmindm5<>S@g;A+>Q_gU|quOH8c8=Pt8zk<>loOoK@Ca z^sAImNT$9bId&^3z(#PD=h?;Jn7Dl_q5y|i~V|WWo z?8=gMMpzW_d8V(~^3l*M3Wuw1MKYlS6N9)UkQhDiPo5Y`ufKha62gl}0_S#kXFkrw zTe)U!^g1l~!+}Dn%Cb;ukk6NwDnAQR0j!5A3!c80E%a&kj=DTkimzABwbgd3@bl;f z9bb*JIJXzIH4QG^Ja_qtuRe?)uh)K{_Tn|qdSQp+9J4?zH)0BWkhhpn-r9bDu(Wu) z$tobsWT)RRS^uRq;J4~dT9vU}&BdWDjT##jbq+#_O!lBi)LTJ%+$Q9bd8^L`JmS9P zG5)5jODoIL^^#{v(&rv$eL3fV&bK?pf>#S}Vs%Xei?=-I>8@TKy6k@O*VLU*0qx`A z=d%{N3c_pLO4lb7#19^d-l1zXckO1X-d_&m-VgV%c1qqGKSic>+mu?p413dVd2W|n zS0X^O?L6gX6fBjWywi&fnEw1w^J4eSc)Sa>*FklT8NTI4r~Uk_0d0LGEWO4}DeA#9 z>SgNY201bun+3o)YzQnGhPoB59v@-kQuB*3&+}9ueizpR4jjT|5&ZoWxWb~3RH;fK2JrOU$ zVKYRU89fqG0bY@$^i=7rDZ>%#bS#u*acIl#3xt1upFIt;&OS&h)!Gz;e^Onj#Jqc_ znFw%bZ~R!~Yk2KvEP=+KxD%Ce*1iAw>8D%v+k$5uQ>ZsTQ?nKU7_a2A19#^-eA##| zLOb_$@&A^mE@GyBxCS|{1s50j)sdr>81HTK4NRwS0=x<|g4vP>_@J%X0Fy1z(M_yH&;e$H;l$5?Vi1)~svR1_-L2 z>dni`l`!o6a=5v^_0w2G>5?dyIE$^vr(wKn*8DJTlD+*oSN}<^t1ucrqsUE>hn9@6 zu=oq>MyNVgpApx}eS!=1UXQ%8`daRTg>PvywXG})J8nRwZ$T$g5b9rCC%=8vMu2ztz#roCO8 z=97KO4s3QQ+qc>{(Y#m=lI)1*vl^QC8g?(}FjkIUK{_H2ef%;50O^a|(HR{=8wE(C69vuSsGJxK<0MN+661qlVc6(tyew5KM8n7fxqbfz+0b?f40s~uR~E|h_=d8W0oCx zl%}MJl_|&l@ZoAE2Uy+tMYA_zn2y+je+gY%GXgL{Qed zg34|Cu{`A3!bkYAQO-D?hKPMD+3Ntt{+T*(aZe^9%_Q3o?9`ec&fbd}CFh&3V(h5! 
zSrRXA@5hgST=YqfSn>Xzr_ORVGjY?&?0W4hSH@fKUXZB!+^(wnWdGoi><+*bpUhpY z_SNAvXuIyA`A1ZfWYY3RQC*tuL)7E{aq(7$B})!P3uLYWzxg4P33u$Tj%oE=nH~Xz z3ZR|Z;wjb4BNW#?~-+-6%E+HXdFJ=k)Qz+jUwj?{x8S&kfrCRPcxPVY3qk=>ShZ<&jxABcM1u9v9&xbx`p=E8w~6>NJhSh zSleZTPjuBh<>=V;`SWM9Ze977n6-)`e;K+CqL2ZS`c2LK$>B8(C%b>GRw*S}kr!6P z|AaQC_V#0>k1<}i71+?#g zsz1lnYYuj#$|`fbgF05NlW_llz?CJ_<<8I2 zQ;-Wq;##lQ!TAm6Ck`d#pzU|0H z@`>)hVGy`v`Za{w^x#tZyF@{*4U{oWd+^|dH}muJv%2Pv_cm!=tdWu(XH}qV?ATp;7bo}#AMTgGk(Uj}gn^BzH~cga zIMI|ON~~%bYcBEhz0AH13Mw#hBs;=QFesV0Rqs|SiW_-FfSnL>pA*rHoI0#dpXi3M%usIE zI0sET=Km}?W?`XZ`NP+*>Qwi8`R6-7j$zMwk^sSGS`Y7kN9L||?M|;gFtoCAqv&K= z-kzol4>sI?9})8&{c4GeFnsuY^w2+Sh`-t1>YKCU=Q($ys0l=q`y>8UD)VpauIhPh zbkkn)l~aU+wM0=pegjr&;y02XJS0?Lc?r{&&QPJwN;uBe+fF*PbHaYxRM( zL~WUwnCsi{OAEim^3g1G9%Ot}I=`^6@G;lI4Xiq$-toFgIaC@rfOZc%WLIsYJJz@0 z+ID!q{?k<7lV8(>`hV4C#F%n@oS5)Tk_&o7ylWSL`2x-t2+n8E4XG`#l!!BnCDF{x zZ8ugSeetG?a+7LR&8oCLsF0W60=)RyIaZ-ZG$VZHyxT1ra6=}L{5jqq?`{PcG+ z6tq#ht_Gi(C~7+Rn#zqlUJq(t$t#b>q$1)HC;`|`#V6J{wI@o0|90N^`KC!V4o~ch zc4+)%Wwxq^b(1P*CmXQgy#Z;TEaYD2cqbHCMMR$}UfPxS>iP5iDtq^C>ET#n_9}qY z9YsKy1qgehWvAPu@M`76_)?f$~WQIEV}$1dZ^dCHfuldJ+1?{W)d;Qe*Q?rY4QEenl#qi6J*n=ct=N+L zl&|HQ(fs&w2$Y^QtDLtcNU5J_D;u}AscYJ9`h2%cSqWgmC_Bnv)g4vme!~PUH3Bpg z69r3jHBN33#Yb>IaC{KHMs)Ru9ItWUK==FHjoibAkl>d55ntbljaA#^lFGhGMs}lv zYIagP^ zleS$C!NuFGyY8Dn$%%M=>=;yn{}E$2jK#arRg0sSY!izKt$hTIQT#uQrNIaZeFT?a z;6FsD3n+N{0EJTi^3nie`v!0$uP88(pyU#%dSQW}&$|`DO_YOT@53n}9>Cw|rRib$ zNvS#McA#3M>g$TE?i93Y);L3DC%3gv7sZ2Of&j56JCp=vu4FfUeE*gP)l@5XeeaeH zHl&Ri6~g-ND@(Tqu{6N2Jr5jmj*b>1I$fQd1`i)NFz@L3Z>(IzUqtz6l*L^r!py=K z>OhoN^1`z4xS^g@kkTB;zd>PS#ZyW&TMyeF%+kwK(&27&j-c!#<2D5~<5v@Gc?XJFd z_A$E?CWbHmY<_Cft>MhwNH<^%XE1=I(ULp6802d`~Ah(diP?8(j)c~>3VF;bXsFIA$o4EZQpvFN>UMFOZkQBG;u!K0t9c_YAg9oe!uwel7l|#G6D!gN@}2a|Hb=1`Si?!HHg12DZ0+V5 zO@dTUhPAp4OAvy{aw+kTcCgypDwHCJ85EdmN%vX>Or0A^o%bo;*liZNHaV0VHoA#0e8ydiKR_49h=q-u z5spED%@wgFiRJYKeoqlr(j)V=V}tTjT}G(KqXD%BcSEWCQsT^EZu#A#?8G^?9$<+=k4mk|c1g@3z$Vd_NOn`E!(sbFae!c2p zme0p7I0wq)IT76Jw~09Opybq$i}9N)xnS3dP!>0AF&YHHA3}7|2(n-4E?ASTB;;0> z@W)FURR}aip%C>o*E*LTM!)6TH2dgxxF!i^Kv3ZO8vGW zla`e;IGd*JD~tdJ@PizhM!Y7k$FqyIS*P8zI#}_P0hG#4@g_!sT7=Lk699i)0P7sX zdYA`Y{>Am4jcH38@y{}ob?%EXy_--a}?xS3MQg}_A`62UeiIbd`&zKwa z5eOWaPdy->HgI(3<|al*0iOi9z(HsX4OPJ>p30k&qd0f#agDGI&De1_d##EaX_je+ z$1%XlpCPuoZ3{*aGkO|i89SR#54^SEuGrBXSm+OKOYD{EU?}MY0C&HoWfY{}zr$18L%FewUc$`8$A^C}Pko+BwG;a&1d|haMr)2j5t2l#@KmntC%DQg1+2$GM>d+d z%TRXHIh?_dSIwN>ypoEnD^PbY+&wZ2X2I-0+$KQdBP(S9 z*UcTH$gm0xlFORbI6vI!L?aA616I#f(Dh>?v5t%du?TdcNX@deM%fcc>nQy9V6b@QR> zPBTuO-8w2K0@})cf~Y}-ICe+1Py1Hy;23ALfF8p^ffA>bML`Twf*ps(cK^8@G;qfXP<4%Q2vGZ>aCw2>!9e5dvj-M>UZT?hZ{WHM3Q9EF47{$(P#8Y)#uS zk_pPZ1~O7cIAa)t7GFphU>Dd2ak#ItaVvp#6QsF3>t4z9w$qT)^e-|_W))x>N#~O3 zXl&5%{8DCIVK!eH!wsrcuiPV|lPT!-y#aq%RV)gaYH~Ez7IxG43d-z4!Y8ronr4j6 zwIXiHcbI6Fb)KKbOfdf8JM1<|xQ88{K5L7YopY0$teRdfI{?B9<3GO|k@3>y3Ei!3 znF5&43wUIIY*PeiMiQVUh7ej-P&=MAE{k*rO+@2n>9UVgL-a}e82S+K9I8{_#>U23 z%s2~B0y?u*~ckHUL5V3(U9 z7b#uc7!phj6t4wD5Ag+Ja}(+?11K6r3HoAv?EyGl5n~3sB%;8bKLrZ`cVYlLT4C}e z!BTaqQ$bAjE9|j?gXXm|`RM)aRX_E$pO?30ePlS-3RFNp)%y(rdU?1-xYk=P)76uu zlR_>bWCRfag>Iux*sOSePOj+}VWrz;Kta#kfHJ&*rQH7D+tSLD5=PWGg7KjpyiLYz z_yC^iBnlc8asYJj$ffC=2B?<}Q3p?1%4ybQ=$Z>`VTf&T?N#tU{bq|Gnt#cY+}B7kdw9ar?Ib$lv3a}`ir3N(rcAEfSZy= z=&>e_Fx_*@RrO(%ZYnUNjWSP5)UfV-=_wfRoSZdP^|-)cp0qZP_X2qZJZr#?Up%6j zo~#MEJcXFA+EB8Vm_s|dmDCLxBvD~D?)}f}r(AkFQwGl7u;>wjjf7*wA#Q85FW!-0 zII<(Ou3Yr*uDyM);D;_Om@M9*uE%K1= z%?I7WEQD(4o3K}f5jf#!m(_;*{uU?HZ~uGHhyrdjA}-Eqw@aFViIGYKS{pX#lC^X! 
z@O4T1(Vc{1m8oRew_0U0XIX>1#%Ex#Cf=)PrjRla7q1Ak2;y8$>ydC1gGfvsLd!fh z;*VJDYC}7G#MDXPAX6bNuu61O&tc}O49-{uHrdIzPP$%oK^wMO+@4!rc2*ACk6ksh zZji+4>n_H$tDa$OWf$P_%p1e~$C&T}{*g$eE;yi!s7r-y7Qq?f%OeC@UO~TkHE6p=p*$ z_iAjeW|<*d4I;n3vT{3E50k9|hTx$$-QHovW*Cwp&(C@3z5I1wNk7NoepJilklqr= zx~7zkhMGUa%WQjEPBW96y%0QH1WVuFawgmlIqGTcV7^+h=f$hA%%_d?q;ivPa!xD$ zhP7|3{78Ff%B61{OdZE(fY$}sS~fB~X&5+1BGFCN`!J`#LUQctZGu!{#qs<{^q0i2 zc*>H|-Hg?OFmYg$lm7l&HqP|w165dS9AnaIJOxZNVj4@N%Md~z7R>RyDwvDqHHzPM_BA%Gml-pW>l9&EI>Q=YP%u5gOiZn4@?9WhK=s# z3*-zxDL}*R)@5Y9*U!j!uh*hyJ?hP4cf7^*n&v&CU9=w8owGide|YM*(Oul&qgEb@ z_4uyk0=iA_4BWWWh?+?(peukN4gbnfaAcIeDfB40(AIQ62OIps8d(|^A3goo{{Z_N Br&s_0 literal 0 HcmV?d00001 diff --git a/pyproject.toml b/pyproject.toml index c2cf4ae9e..581b86a54 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -34,7 +34,7 @@ exclude = ["images*"] [project.optional-dependencies] huggingface = [ "tyro", - "transformers>=4.38.2", + "transformers>=4.42.3", "datasets>=2.16.0", "sentencepiece>=0.2.0", "tqdm", @@ -185,9 +185,9 @@ colab-ampere-torch220 = [ ] colab-new = [ "tyro", - "transformers>=4.38.2", + "transformers>=4.42.3", "datasets>=2.16.0", - "sentencepiece", + "sentencepiece>=0.2.0", "tqdm", "psutil", "wheel>=0.42.0", diff --git a/unsloth/kernels/cross_entropy_loss.py b/unsloth/kernels/cross_entropy_loss.py index 260577912..dc1ad269f 100644 --- a/unsloth/kernels/cross_entropy_loss.py +++ b/unsloth/kernels/cross_entropy_loss.py @@ -19,14 +19,17 @@ from transformers.models.llama.modeling_llama import logger +@triton.heuristics({"DO_SOFTCAPPING": lambda args: args["DO_SOFTCAPPING"],}) @triton.jit def _cross_entropy_forward( logits_ptr, logits_row_stride, loss_ptr, logsumexp_ptr, labels_ptr, - VOCAB_SIZE : tl.constexpr, - BLOCK_SIZE : tl.constexpr, + VOCAB_SIZE : tl.constexpr, + BLOCK_SIZE : tl.constexpr, + DO_SOFTCAPPING : tl.constexpr, + SOFTCAP : tl.constexpr, ): """ Cross Entropy Loss = 1/n sum [ -yi log(Pi) ] @@ -58,13 +61,19 @@ def _cross_entropy_forward( mask = col_offsets < VOCAB_SIZE label_idx = tl.load(labels_ptr).to(tl.int32) - logits = tl.load(logits_ptr + col_offsets, mask = mask, other = -float("inf")).to(tl.float32) + logits = tl.load(logits_ptr + col_offsets, mask = mask, other = -float("inf")) + # Do logit softcapping for Gemma 2: t * tanh(1/t * x) + if DO_SOFTCAPPING: logits = SOFTCAP * tl.math.tanh(logits / SOFTCAP) + + logits = logits.to(tl.float32) c = tl.max(logits, 0) logsumexp = c + tl.log(tl.sum(tl.exp(logits - c), 0)) if label_idx != -100: - x = tl.load(logits_ptr + label_idx).to(tl.float32) - loss = logsumexp - x + x = tl.load(logits_ptr + label_idx) + # Do logit softcapping for Gemma 2: t * tanh(1/t * x) + if DO_SOFTCAPPING: x = SOFTCAP * tl.math.tanh(x / SOFTCAP) + loss = logsumexp - x.to(tl.float32) else: loss = 0.0 tl.store(logsumexp_ptr, logsumexp) @@ -72,15 +81,18 @@ def _cross_entropy_forward( pass +@triton.heuristics({"DO_SOFTCAPPING": lambda args: args["DO_SOFTCAPPING"],}) @triton.jit def _chunked_cross_entropy_forward( logits_ptr, logits_row_stride, loss_ptr, logsumexp_ptr, labels_ptr, - VOCAB_SIZE : tl.constexpr, - N_CHUNKS : tl.constexpr, - BLOCK_SIZE : tl.constexpr, + VOCAB_SIZE : tl.constexpr, + N_CHUNKS : tl.constexpr, + BLOCK_SIZE : tl.constexpr, + DO_SOFTCAPPING : tl.constexpr, + SOFTCAP : tl.constexpr, ): """ 256K vocab divided in 4 chunks @@ -117,7 +129,11 @@ def _chunked_cross_entropy_forward( mask = col_offsets < VOCAB_SIZE label_idx = tl.load(labels_ptr).to(tl.int32) - logits = tl.load(logits_ptr + col_offsets, mask = mask, other = -float("inf")).to(tl.float32) + 
@@ -72,15 +81,18 @@
 pass


+@triton.heuristics({"DO_SOFTCAPPING": lambda args: args["DO_SOFTCAPPING"],})
 @triton.jit
 def _chunked_cross_entropy_forward(
     logits_ptr, logits_row_stride,
     loss_ptr,
     logsumexp_ptr,
     labels_ptr,
-    VOCAB_SIZE : tl.constexpr,
-    N_CHUNKS : tl.constexpr,
-    BLOCK_SIZE : tl.constexpr,
+    VOCAB_SIZE     : tl.constexpr,
+    N_CHUNKS       : tl.constexpr,
+    BLOCK_SIZE     : tl.constexpr,
+    DO_SOFTCAPPING : tl.constexpr,
+    SOFTCAP        : tl.constexpr,
 ):
     """
         256K vocab divided in 4 chunks
@@ -117,7 +129,11 @@ def _chunked_cross_entropy_forward(
     mask = col_offsets < VOCAB_SIZE

     label_idx = tl.load(labels_ptr).to(tl.int32)
-    logits = tl.load(logits_ptr + col_offsets, mask = mask, other = -float("inf")).to(tl.float32)
+    logits = tl.load(logits_ptr + col_offsets, mask = mask, other = -float("inf"))
+    # Do logit softcapping for Gemma 2: t * tanh(1/t * x)
+    if DO_SOFTCAPPING: logits = SOFTCAP * tl.math.tanh(logits / SOFTCAP)
+
+    logits = logits.to(tl.float32)
     c = tl.max(logits, 0)
     logsumexp = c + tl.log(tl.sum(tl.exp(logits - c), 0))

@@ -126,7 +142,9 @@
     # Do the -x separately
     if label_idx != -100:
         x = tl.load(logits_ptr + label_idx).to(tl.float32)
-        loss = -1.0 * x
+        # Do logit softcapping for Gemma 2: t * tanh(1/t * x)
+        if DO_SOFTCAPPING: x = SOFTCAP * tl.math.tanh(x / SOFTCAP)
+        loss = -1.0 * x.to(tl.float32)
     else:
         loss = 0.0
     tl.store(loss_ptr, loss)
@@ -135,14 +153,17 @@
 pass


+@triton.heuristics({"DO_SOFTCAPPING": lambda args: args["DO_SOFTCAPPING"],})
 @triton.jit
 def _cross_entropy_backward(
     logits_ptr, logits_row_stride,
     dloss_ptr,  dloss_row_stride,
     logsumexp_ptr,
     labels_ptr,
-    VOCAB_SIZE : tl.constexpr,
-    BLOCK_SIZE : tl.constexpr,
+    VOCAB_SIZE     : tl.constexpr,
+    BLOCK_SIZE     : tl.constexpr,
+    DO_SOFTCAPPING : tl.constexpr,
+    SOFTCAP        : tl.constexpr,
 ):
     """
         CE_i = -y log(P) = y * (log[sum(exp(x))] - x)
@@ -173,15 +194,27 @@
     else:
         dloss = 0.0

-    x = tl.load(logits_ptr + col_offsets, mask = mask, other = -float("inf")).to(tl.float32)
+    x = tl.load(logits_ptr + col_offsets, mask = mask, other = -float("inf"))
+    # Do logit softcapping for Gemma 2: t * tanh(1/t * x)
+    if DO_SOFTCAPPING:
+        # d/dx [t * tanh(1/t * x)] = 1 - tanh^2(1/t * x)
+        partial = tl.math.tanh(x / SOFTCAP)
+        x = SOFTCAP * partial
+    pass
+
     logsumexp = tl.load(logsumexp_ptr + row_idx)
-    y = tl.exp(x - logsumexp)
+    y = tl.exp(x.to(tl.float32) - logsumexp)
     y = tl.where(
         col_offsets == label_idx,
         y - 1.0, # exp(x - logsumexp) - 1
         y,       # exp(x - logsumexp)
     )

+    if DO_SOFTCAPPING:
+        # d/dx [t * tanh(1/t * x)] = 1 - tanh^2(1/t * x)
+        y = y * (1.0 - partial*partial)
+    pass
+
     # If y == 0: dC/dx = 0 ==> we already masked it to be = 0, so dloss = 0.
     tl.store(logits_ptr + col_offsets, dloss * y, mask = mask)
 pass
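The backward hunk is just the chain rule through the cap: with z = t*tanh(x/t), dCE/dx = (softmax(z) - onehot(y)) * (1 - tanh^2(x/t)), which is exactly the (1.0 - partial*partial) factor above. A quick autograd cross-check in plain PyTorch (illustrative sizes only):

    import torch
    import torch.nn.functional as F

    t, V = 50.0, 8
    x = torch.randn(V, requires_grad = True)
    label = torch.tensor(3)

    z = t * torch.tanh(x / t)
    F.cross_entropy(z.unsqueeze(0), label.unsqueeze(0)).backward()

    with torch.no_grad():
        p = torch.softmax(t * torch.tanh(x / t), dim = -1)
        manual = (p - F.one_hot(label, V)) * (1 - torch.tanh(x / t)**2)
    print(torch.allclose(x.grad, manual, atol = 1e-6))   # True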
@@ -191,40 +224,46 @@

 class Fast_CrossEntropyLoss(torch.autograd.Function):
     @staticmethod
-    def forward(ctx, logits, labels):
+    def forward(ctx, logits, labels, logit_softcapping = 0):
         n_rows, vocab_size = logits.shape

         div, mod = divmod(vocab_size, MAX_FUSED_SIZE)
         n_chunks = div + (mod != 0)
-        losses = torch.empty(n_rows, dtype = torch.float32, device = "cuda")
+        losses = torch.empty(n_rows, dtype = torch.float32, device = "cuda:0")
+
+        DO_SOFTCAPPING = (logit_softcapping != 0)

         if n_chunks == 1:
             # For small vocabs <= 65536 like Llama, Mistral
             BLOCK_SIZE, num_warps = calculate_settings(vocab_size)
-            logsumexp = torch.empty(n_rows, dtype = torch.float32, device = "cuda")
+            logsumexp = torch.empty(n_rows, dtype = torch.float32, device = "cuda:0")

             _cross_entropy_forward[(n_rows,)](
                 logits, logits.stride(0),
                 losses,
                 logsumexp,
                 labels,
-                VOCAB_SIZE = vocab_size,
-                BLOCK_SIZE = BLOCK_SIZE,
-                num_warps  = num_warps,
+                VOCAB_SIZE     = vocab_size,
+                BLOCK_SIZE     = BLOCK_SIZE,
+                DO_SOFTCAPPING = DO_SOFTCAPPING,
+                SOFTCAP        = logit_softcapping,
+                num_warps      = num_warps,
             )
         else:
             # For large vocabs > 65536 like Gemma 256K
-            logsumexp = torch.empty((n_rows, n_chunks,), dtype = torch.float32, device = "cuda")
+            logsumexp = torch.empty((n_rows, n_chunks,), dtype = torch.float32, device = "cuda:0")

             _chunked_cross_entropy_forward[(n_rows, n_chunks,)](
                 logits, logits.stride(0),
                 losses,
                 logsumexp,
                 labels,
-                VOCAB_SIZE = vocab_size,
-                N_CHUNKS = n_chunks,
-                BLOCK_SIZE = MAX_FUSED_SIZE,
-                num_warps = 32,
+                VOCAB_SIZE     = vocab_size,
+                N_CHUNKS       = n_chunks,
+                BLOCK_SIZE     = MAX_FUSED_SIZE,
+                DO_SOFTCAPPING = DO_SOFTCAPPING,
+                SOFTCAP        = logit_softcapping,
+                num_warps      = 32,
             )
             # logsumexp(chunked_logsumexp) - x
             # Do the -x separately
@@ -234,6 +273,8 @@ def forward(ctx, logits, labels):
         pass

         ctx.save_for_backward(logits, logsumexp, labels)
+        ctx.DO_SOFTCAPPING    = DO_SOFTCAPPING
+        ctx.logit_softcapping = logit_softcapping
         return losses
     pass

@@ -251,16 +292,18 @@ def backward(ctx, dlosses):
             dlosses, dlosses.stride(0),
             logsumexp,
             labels,
-            VOCAB_SIZE = vocab_size,
-            BLOCK_SIZE = BLOCK_SIZE,
-            num_warps  = 8,
+            VOCAB_SIZE     = vocab_size,
+            BLOCK_SIZE     = BLOCK_SIZE,
+            DO_SOFTCAPPING = ctx.DO_SOFTCAPPING,
+            SOFTCAP        = ctx.logit_softcapping,
+            num_warps      = 8,
         )
         return logits, None, None,
     pass
 pass


-def fast_cross_entropy_loss(logits, labels):
+def fast_cross_entropy_loss(logits, labels, logit_softcapping = 0):
     """
     Arguments:
         logits: (batch, seq_len, vocab_size)
@@ -274,6 +317,7 @@ def fast_cross_entropy_loss(logits, labels):
     loss = Fast_CrossEntropyLoss.apply(
         logits.view(batch*seq_len, d),
         labels.view(-1),
+        logit_softcapping,
     )
     n_items = torch.count_nonzero(labels != -100)
     return loss.sum() / n_items
diff --git a/unsloth/kernels/geglu.py b/unsloth/kernels/geglu.py
index df80fcb79..006e8c0f3 100644
--- a/unsloth/kernels/geglu.py
+++ b/unsloth/kernels/geglu.py
@@ -41,7 +41,7 @@ def _exact_forward_kernel(e, g, h, n_elements, BLOCK_SIZE : tl.constexpr,):
 def geglu_exact_forward_kernel(gate, up):
     batch, seq_len, hd = gate.shape
     n_elements = gate.numel()
-    out = torch.empty((batch, seq_len, hd), dtype = gate.dtype, device = "cuda")
+    out = torch.empty((batch, seq_len, hd), dtype = gate.dtype, device = "cuda:0")
     grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']),)
     _exact_forward_kernel[grid](gate, up, out, n_elements, BLOCK_SIZE = 1024,)
     return out
@@ -133,7 +133,7 @@ def _approx_forward_kernel(e, g, h, n_elements, BLOCK_SIZE : tl.constexpr,):
 def geglu_approx_forward_kernel(gate, up):
     batch, seq_len, hd = gate.shape
     n_elements = gate.numel()
-    out = torch.empty((batch, seq_len, hd), dtype = gate.dtype, device = "cuda")
+    out = torch.empty((batch, seq_len, hd), dtype = gate.dtype, device = "cuda:0")
     grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']),)
     _approx_forward_kernel[grid](gate, up, out, n_elements, BLOCK_SIZE = 1024,)
     return out
diff --git a/unsloth/kernels/rms_layernorm.py b/unsloth/kernels/rms_layernorm.py
index 4db89b781..f26e59653 100644
--- a/unsloth/kernels/rms_layernorm.py
+++ b/unsloth/kernels/rms_layernorm.py
@@ -119,7 +119,7 @@ def _gemma_rms_layernorm_forward(
     W_row = tl.load(W + col_offsets, mask = mask, other = 0).to(tl.float32)

     row_var = tl.sum(X_row * X_row, axis = 0) / n_cols
-    inv_var = 1.0 / tl.sqrt(row_var + eps) # Must be 1/sqrt to match Deepmind's impl
+    inv_var = tl.math.rsqrt(row_var + eps)
     tl.store(r, inv_var)
     normed = X_row * inv_var
     output = normed * (W_row + 1.0)
@@ -137,8 +137,8 @@ def forward(ctx, X, W, eps, gemma = False):
         n_rows, n_cols = X.shape
         BLOCK_SIZE, num_warps = calculate_settings(n_cols)

-        Y = torch.empty((n_rows, n_cols), dtype = X.dtype, device = "cuda")
-        r = torch.empty(n_rows, dtype = torch.float32, device = "cuda")
+        Y = torch.empty((n_rows, n_cols), dtype = X.dtype, device = "cuda:0")
+        r = torch.empty(n_rows, dtype = torch.float32, device = "cuda:0")

         fx = _gemma_rms_layernorm_forward if gemma else _rms_layernorm_forward
         fx[(n_rows,)](
diff --git a/unsloth/kernels/swiglu.py b/unsloth/kernels/swiglu.py
index ff6b16268..f81b7aae9 100644
--- a/unsloth/kernels/swiglu.py
+++ b/unsloth/kernels/swiglu.py
@@ -41,7 +41,7 @@ def _fg_kernel(e, g, h, n_elements, BLOCK_SIZE : tl.constexpr,):
 def swiglu_fg_kernel(e, g):
     batch, seq_len, hd = e.shape
     n_elements = e.numel()
-    h = torch.empty((batch, seq_len, hd), dtype = e.dtype, device = "cuda")
+    h = torch.empty((batch, seq_len, hd), dtype = e.dtype, device = "cuda:0")
     grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']),)
     _fg_kernel[grid](e, g, h, n_elements, BLOCK_SIZE = 1024,)
     return h
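These kernel edits only move allocations onto an explicit "cuda:0" device. For reference, the activations the three files fuse have one-line PyTorch equivalents (a sketch of the semantics, not the Triton code):

    import torch.nn.functional as F

    def swiglu_reference(e, g):           # swiglu's _fg_kernel: h = silu(e) * g
        return F.silu(e) * g

    def geglu_exact_reference(gate, up):  # geglu's exact kernel: erf-based GELU
        return F.gelu(gate, approximate = "none") * up

    def geglu_approx_reference(gate, up): # geglu's approx kernel: tanh-approximated GELU
        return F.gelu(gate, approximate = "tanh") * up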
         ctypes.c_int(df.numel()),
diff --git a/unsloth/models/_utils.py b/unsloth/models/_utils.py
index 7a6954c9f..73aa0c6c9 100644
--- a/unsloth/models/_utils.py
+++ b/unsloth/models/_utils.py
@@ -21,6 +21,12 @@
 warnings.filterwarnings(action = "ignore", category = UserWarning, module = "transformers")
 warnings.filterwarnings(action = "ignore", category = FutureWarning, module = "accelerate")
 warnings.filterwarnings(action = "ignore", category = FutureWarning, module = "huggingface_hub")
+warnings.filterwarnings(action = "ignore", category = RuntimeWarning, module = "multiprocessing")
+
+# Stop "Special tokens have been added in the vocabulary, ..."
+import logging
+logging.getLogger("transformers.tokenization_utils_base").setLevel(logging.CRITICAL+1)
+
 import bitsandbytes as bnb
 from transformers.models.llama.modeling_llama import logger
 from transformers import AutoTokenizer
@@ -31,7 +37,7 @@
 import os
 import psutil

-__version__ = "2024.6"
+__version__ = "2024.7"

 # Get Flash Attention v2 if Ampere (RTX 30xx, A100)
 major_version, minor_version = torch.cuda.get_device_capability()
@@ -80,8 +86,49 @@
     "offload_output_embeddings",
     "is_bfloat16_supported",
     "unsloth_offloaded_gradient_checkpoint",
+    "torch_compile_options",
 ]

+# Just remove max_autotune_gemm warning
+import functools
+@functools.lru_cache(None)
+def is_big_gpu(index):
+    sms = torch.cuda.get_device_properties(index).multi_processor_count
+    if sms < 80: # V100
+        # log.warning("not enough SMs to use max_autotune_gemm mode")
+        return False
+    return True
+import torch._inductor.utils
+torch._inductor.utils.is_big_gpu = is_big_gpu
+
+
+# Torch compile arguments
+torch_compile_arguments = [
+    "config.dce = True",
+    "config.memory_planning = True",
+    "config.memory_pool = 'combined'",
+    "config.coordinate_descent_tuning = True",
+    "config.max_autotune_gemm = False", # GEMM is unnecessary
+    "config.autotune_multi_device = False",
+    "config.max_autotune_gemm_backends = 'ATEN'", # Not much faster
+    "config.aggressive_fusion = False", # Careful: changing this changes results!
+    "config.cuda.enable_cuda_lto = True",
+    "config.cuda.use_fast_math = True",
+    "config.cuda.compile_opt_level = '-O2'",
+]
+import torch._inductor.config as config
+for _try_compile_argument in torch_compile_arguments:
+    try:    exec(_try_compile_argument)
+    except: pass
+pass
+torch_compile_options = {
+    "epilogue_fusion"   : True,
+    "max_autotune"      : True,
+    "shape_padding"     : True,
+    "trace.enabled"     : False, # Output Triton kernel outputs!
+    "triton.cudagraphs" : False,
+}
+

 def prepare_model_for_kbit_training(
     model                      : Any,
diff --git a/unsloth/models/gemma.py b/unsloth/models/gemma.py
index 99374891a..4c4515b79 100644
--- a/unsloth/models/gemma.py
+++ b/unsloth/models/gemma.py
@@ -247,6 +247,8 @@ def pre_patch():
         GemmaModel          .forward = LlamaModel_fast_forward
         GemmaForCausalLM    .forward = CausalLM_fast_forward(GemmaModel_fast_forward_inference)
         PeftModelForCausalLM.forward = PeftModelForCausalLM_fast_forward
+        fix_prepare_inputs_for_generation(GemmaForCausalLM)
+
         # Solves https://github.com/unslothai/unsloth/issues/168
         # Static KV Cache was introduced in 4.38.0, causing training to be much slower.
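torch_compile_options feeds the @torch.compile decorators in the new gemma2.py below. The option keys are standard torch._inductor settings, so the pattern is easy to replicate; a minimal sketch mirroring fast_rms_layernorm_gemma2_compiled (illustrative, eps value assumed):

    import torch

    options = {
        "epilogue_fusion"   : True,
        "max_autotune"      : True,
        "shape_padding"     : True,
        "trace.enabled"     : False,
        "triton.cudagraphs" : False,
    }

    @torch.compile(fullgraph = True, dynamic = True, options = options)
    def gemma_rms_norm(x, weight, eps = 1e-6):
        x = x.float()
        x = x * torch.rsqrt(x.square().mean(-1, keepdim = True) + eps)
        return x * (1.0 + weight.float())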
diff --git a/unsloth/models/gemma2.py b/unsloth/models/gemma2.py
new file mode 100644
index 000000000..0669e4220
--- /dev/null
+++ b/unsloth/models/gemma2.py
@@ -0,0 +1,538 @@
+# Copyright 2023-present Daniel Han-Chen & the Unsloth team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from .llama import *
+from ._utils import __version__
+from .gemma import (
+    GemmaFixedRotaryEmbedding,
+    fast_geglu_inference,
+)
+from transformers.models.gemma2.modeling_gemma2 import (
+    Gemma2Attention,
+    Gemma2DecoderLayer,
+    Gemma2Model,
+    Gemma2ForCausalLM,
+    Gemma2RotaryEmbedding,
+    apply_rotary_pos_emb,
+    repeat_kv,
+)
+from transformers.models.gemma2.modeling_gemma2 import *
+from transformers.modeling_attn_mask_utils import (
+    _prepare_4d_causal_attention_mask_for_sdpa,
+)
+# For Pytorch 2.1.1
+try:
+    from transformers.models.gemma2.modeling_gemma2 import (
+        Gemma2SdpaAttention,
+        Gemma2FlashAttention2,
+    )
+except:
+    Gemma2SdpaAttention   = Gemma2Attention
+    Gemma2FlashAttention2 = Gemma2Attention
+pass
+
+
+# [TODO] We must randomly use torch.compile?
+# I checked the gradients and formulas and I'm sure it's correct.
+# I'm stumped :(
+@torch.compile(fullgraph = True, dynamic = True, options = torch_compile_options)
+def fast_rms_layernorm_gemma2_compiled(layernorm, X, gemma = True):
+    old_dtype = X.dtype
+    X = X.float()
+    X = X * torch.rsqrt(X.square().mean(-1, keepdim = True) + layernorm.eps) * \
+        (1.0 + layernorm.weight.float())
+    return X.to(old_dtype)
+pass
+
+
+# Logit softcapping
+@torch.compile(fullgraph = True, dynamic = True, options = torch_compile_options)
+def gemma2_attention(Q, K, V, causal_mask, self, bsz, q_len):
+    n_heads    = self.num_heads
+    head_dim   = self.head_dim
+    n_kv_heads = self.num_key_value_heads
+    n_groups   = self.num_key_value_groups
+
+    # Grouped query attention
+    K = K[:, :, None, :, :].expand(bsz, n_kv_heads, n_groups, q_len, head_dim)
+    V = V[:, :, None, :, :].expand(bsz, n_kv_heads, n_groups, q_len, head_dim)
+    K = K.reshape(bsz, n_heads, q_len, head_dim)
+    V = V.reshape(bsz, n_heads, q_len, head_dim)
+
+    s = self.config.hidden_size // self.config.num_attention_heads
+    t = self.config.attn_logit_softcapping
+
+    Q = Q * torch.tensor(s**-0.5, dtype = Q.dtype) # Follow Keras exactly
+    A = torch.matmul(Q, K.transpose(2, 3))
+    A = t * torch.tanh(A / t) # Logit softcapping
+    A += causal_mask[:q_len, :q_len]
+    A = torch.nn.functional.softmax(A, dim = -1, dtype = torch.float32).to(Q.dtype)
+    A = torch.matmul(A, V)
+    A = A.transpose(1, 2).contiguous()
+    A = A.reshape(bsz, q_len, n_heads*head_dim)
+    return A
+pass
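The expand/reshape pair above is the usual grouped-query trick: the n_kv_heads key/value heads are broadcast to n_heads query heads, with no copy made until the reshape. A shape sketch with illustrative sizes:

    import torch

    bsz, n_kv_heads, n_groups, q_len, head_dim = 1, 8, 2, 16, 256  # 16 query heads
    K = torch.randn(bsz, n_kv_heads, q_len, head_dim)
    K = K[:, :, None, :, :].expand(bsz, n_kv_heads, n_groups, q_len, head_dim)  # view, no copy
    K = K.reshape(bsz, n_kv_heads * n_groups, q_len, head_dim)                  # copies here
    print(K.shape)  # torch.Size([1, 16, 16, 256])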
+
+
+# Logit softcapping
+def Gemma2Attention_fast_forward(
+    self,
+    hidden_states:        torch.Tensor,
+    causal_mask:          Optional[xformers.attn_bias.BlockDiagonalCausalMask] = None,
+    attention_mask:       Optional[torch.Tensor] = None,
+    position_ids:         Optional[torch.LongTensor] = None,
+    past_key_value:       Optional[Tuple[torch.Tensor]] = None,
+    output_attentions:    bool = False,
+    use_cache:            bool = False,
+    padding_mask:         Optional[torch.LongTensor] = None,
+    *args, **kwargs,
+) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+
+    # Clear inference
+    if hasattr(self, "paged_attention"):
+        del self.paged_attention_K
+        del self.paged_attention_V
+        del self.paged_attention
+        del self.temp_QA
+        del self.temp_KV
+        del self.RH_Q
+        del self.attention
+    pass
+
+    bsz, q_len, _ = hidden_states.size()
+
+    n_heads    = self.num_heads
+    n_groups   = self.num_key_value_groups
+    n_kv_heads = self.num_key_value_heads
+    head_dim   = self.head_dim
+    assert(n_kv_heads * n_groups == n_heads)
+
+    Q, K, V = self.apply_qkv(self, hidden_states)
+    Q = Q.view(bsz, q_len, n_heads,    head_dim).transpose(1, 2)
+    K = K.view(bsz, q_len, n_kv_heads, head_dim).transpose(1, 2)
+    V = V.view(bsz, q_len, n_kv_heads, head_dim).transpose(1, 2)
+
+    kv_seq_len = K.shape[-2]
+    if past_key_value is not None:
+        kv_seq_len += past_key_value[0].shape[-2]
+
+    if position_ids is None:
+        cos = self.rotary_emb.cos_cached
+        sin = self.rotary_emb.sin_cached
+        Q, K = fast_rope_embedding(Q, K, cos, sin)
+    else:
+        cos, sin = self.rotary_emb(V, seq_len = kv_seq_len)
+        Q, K = inplace_rope_embedding(Q, K, cos, sin, position_ids)
+    pass
+
+    if past_key_value is not None:
+        K = torch.cat([past_key_value[0], K], dim = 2)
+        V = torch.cat([past_key_value[1], V], dim = 2)
+    pass
+    past_key_value = (K, V) if use_cache else None
+
+    A = gemma2_attention(Q, K, V, causal_mask, self, bsz, kv_seq_len)
+    A = self.apply_o(self, A)
+    return A, None, past_key_value
+pass
+
+
+# https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L590
+def Gemma2DecoderLayer_fast_forward(
+    self,
+    hidden_states:        torch.Tensor,
+    causal_mask:          Optional[xformers.attn_bias.BlockDiagonalCausalMask] = None,
+    attention_mask:       Optional[torch.Tensor] = None,
+    position_ids:         Optional[torch.LongTensor] = None,
+    past_key_value:       Optional[Tuple[torch.Tensor]] = None,
+    output_attentions:    Optional[bool] = False,
+    use_cache:            Optional[bool] = False,
+    padding_mask:         Optional[torch.LongTensor] = None,
+    *args, **kwargs,
+):
+    if use_cache and hasattr(self, "_flag_for_generation"): #past_key_value is not None:
+        out_weight = torch.empty(self.input_layernorm.weight.shape, dtype = torch.float32, device = "cuda:0")
+
+        # Self Attention
+        residual = hidden_states
+        hidden_states = fast_rms_layernorm_inference_gemma(self.input_layernorm, hidden_states, out_weight)
+        hidden_states, self_attn_weights, present_key_value = self.self_attn(
+            hidden_states=hidden_states,
+            causal_mask=causal_mask,
+            attention_mask=attention_mask,
+            position_ids=position_ids,
+            past_key_value=past_key_value,
+            output_attentions=output_attentions,
+            use_cache=use_cache,
+            padding_mask=padding_mask,
+        )
+        hidden_states = fast_rms_layernorm_inference_gemma(self.post_attention_layernorm, hidden_states, out_weight)
+        hidden_states += residual
+
+        # Fully Connected
+        residual = hidden_states
+        hidden_states = fast_rms_layernorm_inference_gemma(self. pre_feedforward_layernorm, hidden_states, out_weight)
+        hidden_states = fast_geglu_inference(self.mlp, hidden_states)
+        hidden_states = fast_rms_layernorm_inference_gemma(self.post_feedforward_layernorm, hidden_states, out_weight)
+        hidden_states += residual
+    else:
+        residual = hidden_states
+        hidden_states = fast_rms_layernorm_gemma2_compiled(self.input_layernorm, hidden_states, gemma = True)
+        hidden_states, self_attn_weights, present_key_value = self.self_attn(
+            hidden_states=hidden_states,
+            causal_mask=causal_mask,
+            attention_mask=attention_mask,
+            position_ids=position_ids,
+            past_key_value=past_key_value,
+            output_attentions=output_attentions,
+            use_cache=use_cache,
+            padding_mask=padding_mask,
+        )
+        hidden_states = fast_rms_layernorm_gemma2_compiled(self.post_attention_layernorm, hidden_states, gemma = True)
+        hidden_states = residual + hidden_states
+
+        # Fully Connected
+        residual = hidden_states
+        hidden_states = fast_rms_layernorm_gemma2_compiled(self. pre_feedforward_layernorm, hidden_states, gemma = True)
+        hidden_states = self.mlp(hidden_states)
+        hidden_states = fast_rms_layernorm_gemma2_compiled(self.post_feedforward_layernorm, hidden_states, gemma = True)
+        hidden_states = residual + hidden_states
+    pass
+
+    outputs = (hidden_states,)
+    if output_attentions: outputs += (self_attn_weights,)
+    if use_cache: outputs += (present_key_value,)
+    return outputs
+pass
+
+
+from math import sqrt as math_sqrt
+KV_CACHE_INCREMENT = 256 # KV Cache update size
+torch_nn_functional_softmax = torch.nn.functional.softmax
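KV_CACHE_INCREMENT sizes the preallocated KV buffer used by the inference path below: the cache grows by 256 slots at a time, so decoding pays one resize_ per 256 generated tokens instead of a torch.cat per token. A toy sketch of the growth policy (illustrative shapes):

    import torch

    KV_CACHE_INCREMENT = 256
    cache = torch.empty((KV_CACHE_INCREMENT + 8, 2, 1, 8, 64))  # prefill of 8 tokens
    for kv_seq_len in range(9, 2048):
        if kv_seq_len >= cache.shape[0]:
            # grow along dim 0; existing contiguous data is preserved
            cache.resize_((cache.shape[0] + KV_CACHE_INCREMENT, 2, 1, 8, 64))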
+
+def Gemma2Attention_fast_forward_inference(
+    self,
+    hidden_states:  torch.Tensor,
+    past_key_value: Optional[Tuple[torch.Tensor]],
+    position_ids,
+    do_prefill = False,
+    attention_mask = None,
+    use_sliding_window = False,
+):
+    Xn = hidden_states
+    bsz, _, hd = hidden_states.size()
+    K1, V1 = past_key_value
+    dtype = Xn.dtype
+
+    n_heads    = self.num_heads
+    n_groups   = self.num_key_value_groups
+    n_kv_heads = self.num_key_value_heads
+    head_dim   = self.head_dim
+    attention_size = n_heads*head_dim
+    # assert(n_kv_heads * n_groups == n_heads)
+    seq_len = K1.shape[-2]
+    kv_seq_len = seq_len + 1
+
+    # Prefill phase
+    # if not hasattr(self, "paged_attention"):
+    if do_prefill:
+        self.paged_attention = torch.empty((KV_CACHE_INCREMENT+seq_len+1, 2, bsz, n_kv_heads, head_dim), dtype = dtype, device = "cuda:0")
+        self.paged_attention_K = self.paged_attention[:,0]
+        self.paged_attention_V = self.paged_attention[:,1]
+        self.paged_attention_K[:seq_len] = K1.permute(2, 0, 1, 3)
+        self.paged_attention_V[:seq_len] = V1.permute(2, 0, 1, 3)
+        self.temp_QA = torch.empty((2, bsz, 1, attention_size), dtype = dtype, device = "cuda:0")
+        self.temp_KV = torch.empty((2, bsz, 1, n_kv_heads*head_dim), dtype = dtype, device = "cuda:0")
+        self.RH_Q = torch.empty((bsz, n_heads, 1, head_dim), dtype = dtype, device = "cuda:0")
+        self.attention = torch.empty((bsz, n_heads, 1, KV_CACHE_INCREMENT+seq_len), dtype = dtype, device = "cuda:0")
+        self.scalar = 1.0 / math_sqrt(self.config.hidden_size // self.config.num_attention_heads)
+        self.half_head_dim = head_dim // 2
+        self.t            = self.config.attn_logit_softcapping
+        self.reciprocal_t = 1.0 / self.config.attn_logit_softcapping
+    elif kv_seq_len >= self.paged_attention.shape[0]:
+        self.paged_attention.resize_((self.paged_attention.shape[0]+KV_CACHE_INCREMENT, 2, bsz, n_kv_heads, head_dim))
+        self.paged_attention_K = self.paged_attention[:,0]
+        self.paged_attention_V = self.paged_attention[:,1]
+        self.attention.resize_((bsz, n_heads, 1, self.attention.shape[-1]+KV_CACHE_INCREMENT))
+    pass
+
+    Qn = fast_linear_forward(self.q_proj, Xn, out = self.temp_QA[0])
+    Kn = fast_linear_forward(self.k_proj, Xn, out = self.temp_KV[0])
+    Vn = fast_linear_forward(self.v_proj, Xn, out = self.temp_KV[1])
+    Qn = Qn.view(bsz, 1, n_heads,    head_dim).transpose(1, 2)
+    Kn = Kn.view(bsz, 1, n_kv_heads, head_dim).transpose(1, 2)
+    Vn = Vn.view(bsz, 1, n_kv_heads, head_dim).transpose(1, 2)
+
+    # cos, sin = self.rotary_emb(Vn, seq_len = kv_seq_len)
+    # Qn, Kn = inplace_rope_embedding(Qn, Kn, cos, sin, position_ids)
+    cos = self.rotary_emb.cos_cached[position_ids].unsqueeze(1)
+    sin = self.rotary_emb.sin_cached[position_ids].unsqueeze(1)
+    h = self.half_head_dim
+
+    RH_Q = self.RH_Q
+    RH_Q[:,:,:,:h] = Qn[:,:,:,h:]
+    RH_Q[:,:,:,h:] = Qn[:,:,:,:h]
+    torch.neg(RH_Q[:,:,:,:h], out = RH_Q[:,:,:,:h])
+    Qn *= cos
+    Qn.addcmul_(RH_Q, sin)
+
+    RH_K = RH_Q[:,:n_kv_heads,:,:] # torch.empty((n_kv_heads, 1, head_dim), dtype = dtype, device = "cuda:0")
+    RH_K[:,:,:,:h] = Kn[:,:,:,h:]
+    RH_K[:,:,:,h:] = Kn[:,:,:,:h]
+    torch.neg(RH_K[:,:,:,:h], out = RH_K[:,:,:,:h])
+    Kn *= cos
+    Kn.addcmul_(RH_K, sin)
+
+    # New KV cache
+    # Kn = torch.cat([K1, Kn], dim = 2)
+    # Vn = torch.cat([V1, Vn], dim = 2)
+    self.paged_attention_K[seq_len] = Kn.permute(2, 0, 1, 3)
+    self.paged_attention_V[seq_len] = Vn.permute(2, 0, 1, 3)
+    Kn = self.paged_attention_K[:kv_seq_len].permute(1, 2, 0, 3)
+    Vn = self.paged_attention_V[:kv_seq_len].permute(1, 2, 0, 3)
+
+    # Handle sliding windows
+    sliding_window = self.config.sliding_window
+    if use_sliding_window and kv_seq_len > sliding_window:
+        # From https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L193
+        slicing_tokens = 1 - sliding_window
+        Knn = Kn[:, :, slicing_tokens:, :]#.contiguous()
+        Vnn = Vn[:, :, slicing_tokens:, :]#.contiguous()
+    else:
+        Knn, Vnn = Kn, Vn
+    pass
+
+    # Grouped query attention
+    _, _, cached_len, _ = Knn.shape
+    if n_groups != 1:
+        Knn = Knn[:, :, None, :, :].expand(bsz, n_kv_heads, n_groups, cached_len, head_dim)
+        Vnn = Vnn[:, :, None, :, :].expand(bsz, n_kv_heads, n_groups, cached_len, head_dim)
+        Knn = Knn.reshape(bsz, n_heads, cached_len, head_dim)
+        Vnn = Vnn.reshape(bsz, n_heads, cached_len, head_dim)
+    pass
+    # else:
+    #     Knn, Vnn = Knn, Vnn
+    # pass
+
+    # Attention
+    # if bsz == 1:
+    Qn *= self.scalar # See https://github.com/ggerganov/llama.cpp/issues/7805#issuecomment-2153349963
+    # It seems like doing (Q * scalar) @ K is better than (Q @ K) * scalar to stop overflows
+    A = torch.matmul(Qn, Knn.transpose(2, 3), out = self.attention[:,:,:,:cached_len])
+    # if attention_mask is not None: A += attention_mask # Must add attention_mask for batched
+
+    A *= self.reciprocal_t; torch.tanh(A, out = A); A *= self.t; # Logit softcapping
+
+    A[:] = torch_nn_functional_softmax(A, dim = -1, dtype = torch.float32)#.to(A.dtype)
+    A = torch.matmul(A, Vnn, out = Qn)
+    # else:
+    #     A = scaled_dot_product_attention(Qn, Knn, Vnn, attn_mask = attention_mask, is_causal = False)
+    # pass
+    A = A.transpose(1, 2)
+    A = A.reshape(bsz, 1, attention_size)
+# https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L825
+# @torch.inference_mode
+def Gemma2Model_fast_forward_inference(
+    self,
+    input_ids,
+    past_key_values,
+    position_ids,
+    attention_mask = None,
+):
+    out_weight = torch.empty_like(self.model.layers[0].input_layernorm.weight, dtype = torch.float32, device = "cuda:0")
+    input_ids = input_ids[:,:self.max_seq_length]
+    hidden_states = self.model.embed_tokens(input_ids)
+    hidden_states = hidden_states.to(self.config.torch_dtype)
+    # 3072**0.5 = 55.5000 in bfloat16, whilst 55.4256 in float32
+    # 2048**0.5 = 45.2500 in bfloat16, whilst 45.2548 in float32
+    hidden_states *= torch.tensor(math_sqrt(self.config.hidden_size), dtype = hidden_states.dtype)
+
+    bsz, q_len, hd = hidden_states.shape
+    seq_len = past_key_values[0][0].shape[-2]
+    if bsz != 1:
+        SWA = _prepare_4d_causal_attention_mask_for_sdpa(
+            attention_mask,
+            (bsz, q_len),
+            hidden_states,
+            seq_len,
+            sliding_window = self.config.sliding_window,
+        )
+        GA = _prepare_4d_causal_attention_mask_for_sdpa(
+            attention_mask,
+            (bsz, q_len),
+            hidden_states,
+            seq_len,
+        )
+    else:
+        SWA = attention_mask
+        GA  = attention_mask
+    pass
+
+    next_decoder_cache = []
+    for idx, decoder_layer in enumerate(self.model.layers):
+
+        use_sliding_window = idx % 2 == 0
+
+        residual = hidden_states
+        hidden_states = fast_rms_layernorm_inference_gemma(decoder_layer.input_layernorm, hidden_states, out_weight)
+        hidden_states, present_key_value = Gemma2Attention_fast_forward_inference(
+            decoder_layer.self_attn,
+            hidden_states = hidden_states,
+            past_key_value = past_key_values[idx],
+            position_ids = position_ids,
+            attention_mask = SWA if use_sliding_window else GA,
+            do_prefill = not hasattr(decoder_layer.self_attn, "paged_attention"),
+            use_sliding_window = use_sliding_window,
+        )
+        hidden_states = fast_rms_layernorm_inference_gemma(decoder_layer.post_attention_layernorm, hidden_states, out_weight)
+        hidden_states += residual
+
+        residual = hidden_states
+        hidden_states = fast_rms_layernorm_inference_gemma(decoder_layer.pre_feedforward_layernorm, hidden_states, out_weight)
+        hidden_states = fast_geglu_inference(decoder_layer.mlp, hidden_states)
+        hidden_states = fast_rms_layernorm_inference_gemma(decoder_layer.post_feedforward_layernorm, hidden_states, out_weight)
+        hidden_states += residual
+
+        next_decoder_cache.append(present_key_value)
+    pass
+    hidden_states = fast_rms_layernorm_inference_gemma(self.model.norm, hidden_states, out_weight)
+
+    return BaseModelOutputWithPast(
+        last_hidden_state = hidden_states,
+        past_key_values = next_decoder_cache,
+        hidden_states = [],
+        attentions = [],
+    )
+pass
+
+
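The `idx % 2 == 0` test encodes Gemma2's layer schedule: even-indexed layers attend through a sliding window while odd-indexed layers attend globally, so the loop above picks the matching mask per layer. A toy illustration (4096 is Gemma2's default `config.sliding_window`):

```python
SLIDING_WINDOW = 4096  # stand-in for config.sliding_window

for idx in range(4):
    use_sliding_window = idx % 2 == 0
    kind = f"sliding window ({SLIDING_WINDOW} tokens)" if use_sliding_window else "global"
    print(f"layer {idx}: {kind} attention")
# layer 0: sliding window (4096 tokens) attention
# layer 1: global attention
# layer 2: sliding window (4096 tokens) attention
# layer 3: global attention
```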
+class FastGemma2Model(FastLlamaModel):
+
+    @staticmethod
+    def pre_patch():
+        Gemma2Attention      .forward = Gemma2Attention_fast_forward
+        Gemma2SdpaAttention  .forward = Gemma2Attention_fast_forward
+        Gemma2FlashAttention2.forward = Gemma2Attention_fast_forward
+        Gemma2DecoderLayer   .forward = Gemma2DecoderLayer_fast_forward
+        Gemma2Model          .forward = LlamaModel_fast_forward
+        Gemma2ForCausalLM    .forward = CausalLM_fast_forward(Gemma2Model_fast_forward_inference)
+        PeftModelForCausalLM .forward = PeftModelForCausalLM_fast_forward
+        fix_prepare_inputs_for_generation(Gemma2ForCausalLM)
+
+        # Solves https://github.com/unslothai/unsloth/issues/168
+        # Static KV Cache was introduced in 4.38.0, causing training to be much slower.
+        # Inference can now be CUDAGraphed, but we shall retain the old rotary embeddings.
+        # https://github.com/huggingface/transformers/pull/27931
+        # https://github.com/huggingface/transformers/blob/v4.37.2/src/transformers/models/llama/modeling_llama.py
+        import transformers.models.gemma2.modeling_gemma2
+        transformers.models.gemma2.modeling_gemma2.Gemma2RotaryEmbedding = GemmaFixedRotaryEmbedding
+        return
+    pass
+
+
+    @staticmethod
+    def post_patch(model):
+        # Patch model for Gemma
+        layers = model.model.layers
+
+        # Torch.compile fails on embedding matrix??
+        # Workaround randomly fixes it for torch versions < 2.2
+        model.model.embed_tokens = torch.nn.Embedding.from_pretrained(model.model.embed_tokens.weight)
+        model.config.update({"unsloth_version" : __version__})
+
+        # We also do this for the lm_head
+        lm_head = torch.nn.Linear(1, 1, bias = None)
+        del lm_head.weight
+        lm_head.weight = model.lm_head.weight
+        lm_head.in_features  = lm_head.weight.shape[1]
+        lm_head.out_features = lm_head.weight.shape[0]
+        model.lm_head = lm_head
+
+        # Gemma has tied weights! This means lm_head == embed_tokens
+        if model.model.embed_tokens.weight.data_ptr() != model.lm_head.weight.data_ptr():
+            lm_head = torch.nn.Linear(1, 1, bias = None)
+            del lm_head.weight
+            lm_head.weight = model.model.embed_tokens.weight
+            lm_head.in_features  = lm_head.weight.shape[1]
+            lm_head.out_features = lm_head.weight.shape[0]
+            model.lm_head = lm_head
+        pass
+
+        # Also patch all dtypes - BnB seems to not allocate the correct type?
+        # BnB default dtype seems to be float16!
+        correct_dtype = lm_head.weight.dtype
+
+        for name, module in model.named_modules():
+            if isinstance(module, (Bnb_Linear4bit, Peft_Linear4bit)):
+                weight = module.weight
+                quant_state = weight.quant_state
+
+                if type(quant_state) is list:
+                    # BnB seems to have float16 as default!
+                    module.weight.quant_state[2] = correct_dtype # Cast to correct dtype
+                else:
+                    # https://github.com/TimDettmers/bitsandbytes/pull/763/files
+                    quant_state.dtype = correct_dtype
+                pass
+            pass
+            # Downcast RoPE embedding to correct data type
+            # RoPE must be done in float32 for Gemma
+            # if (name.endswith("rotary_emb") or hasattr(module, "cos_cached")) \
+            #     and (module.cos_cached.dtype != correct_dtype):
+
+            #     module.cos_cached = module.cos_cached.to(correct_dtype)
+            #     module.sin_cached = module.sin_cached.to(correct_dtype)
+            #     pass
+            # pass
+        pass
+
+        # Add 1 to weight
+        # return output * (1 + self.weight)
+        # https://github.com/huggingface/transformers/blob/main/src/transformers/models/gemma/modeling_gemma.py#L89
+        from transformers.models.gemma2.modeling_gemma2 import Gemma2RMSNorm
+
+        # Freeze all parameters except LoRA
+        # We do this first since += 1 seems to not be liked by requires_grad = True
+        for name, param in model.named_parameters():
+            if ".lora_A." in name or ".lora_B." in name:
+                param.requires_grad_(True)
+            else:
+                param.requires_grad_(False)
+        pass
+
+        # Patch RMS Layernorm
+        for name, module in model.named_modules():
+            if isinstance(module, Gemma2RMSNorm):
+                # Must be in float32
+                # https://github.com/keras-team/keras-nlp/blob/v0.8.2/keras_nlp/models/gemma/rms_normalization.py#L36
+                # module = module.to(torch.float32)
+                # Leave + 1 to Triton kernel itself
+                # module.weight += 1.0 # return output * (1 + self.weight)
+                if not hasattr(module, "variance_epsilon"):
+                    module.variance_epsilon = module.eps # Gemma doesn't use variance_epsilon
+        pass
+
+        # Clear deleted GPU items
+        import gc
+        for _ in range(3):
+            gc.collect()
+            torch.cuda.empty_cache()
+        return model
+    pass
+pass
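Because Gemma2 ties `lm_head` to `embed_tokens`, `post_patch` compares storage pointers rather than tensor values to decide whether the head must be re-tied. A small self-contained sketch of that check, on toy sizes rather than the real model:

```python
import torch

embed   = torch.nn.Embedding(256, 16)
lm_head = torch.nn.Linear(16, 256, bias = False)

lm_head.weight = embed.weight   # tie: both names now share one storage
assert embed.weight.data_ptr() == lm_head.weight.data_ptr()

untied = torch.nn.Linear(16, 256, bias = False)  # fresh storage
assert embed.weight.data_ptr() != untied.weight.data_ptr()
```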
diff --git a/unsloth/models/llama.py b/unsloth/models/llama.py
index 2368a3767..e19b85726 100644
--- a/unsloth/models/llama.py
+++ b/unsloth/models/llama.py
@@ -15,6 +15,8 @@
 import torch
 import gc
 from typing import Optional, Tuple, List, Union
+from ._utils import *
+from ._utils import __version__
 from torch.nn.functional import scaled_dot_product_attention
 from transformers.models.llama.modeling_llama import (
     logger,
@@ -25,8 +27,6 @@
     _prepare_4d_causal_attention_mask_for_sdpa,
 )
 from ..kernels import *
-from ._utils import *
-from ._utils import __version__
 from ..tokenizer_utils import *
 if HAS_FLASH_ATTENTION:
     from flash_attn import flash_attn_func
@@ -78,6 +78,24 @@ def original_apply_o(self, X):
 KV_CACHE_INCREMENT = 256 # KV Cache update size
 torch_nn_functional_softmax = torch.nn.functional.softmax
 
+# Fix new HF's inference code
+def _fast_prepare_inputs_for_generation(self, input_ids, **kwargs,):
+    if "past_key_values" in kwargs:
+        input_ids = input_ids[:,[-1]]
+        kwargs["attention_mask"] = kwargs["attention_mask"][:,[-1]]
+        kwargs["position_ids"] = kwargs["cache_position"]
+    return { "input_ids" : input_ids, **kwargs, }
+pass
+
+
+def fix_prepare_inputs_for_generation(module):
+    # Fix prepare_inputs_for_generation
+    if hasattr(module, "prepare_inputs_for_generation"):
+        module.prepare_inputs_for_generation = _fast_prepare_inputs_for_generation
+    pass
+pass
+
+
 def LlamaAttention_fast_forward_inference(
     self,
     hidden_states: torch.Tensor,
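The `_fast_prepare_inputs_for_generation` patch works because, once a KV cache exists, only the newest token has to be fed through the model; the `[:, [-1]]` indexing keeps the tensors 2D while dropping everything but the last column. A toy demonstration with illustrative tensors:

```python
import torch

# Decoding step: a cache exists, so only the newest token matters
input_ids      = torch.tensor([[5, 6, 7, 8]])  # (bsz, seq_len)
attention_mask = torch.ones(1, 4)

input_ids      = input_ids[:, [-1]]       # tensor([[8]]) - still 2D
attention_mask = attention_mask[:, [-1]]  # shape (1, 1)
```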
@@ -542,7 +560,8 @@
         inputs_embeds = inputs_embeds.to(self.config.torch_dtype)
 
     # Normalized from Gemma
-    IS_GEMMA = self.config.model_type == "gemma"
+    IS_GEMMA  = self.config.model_type.startswith("gemma")
+    IS_GEMMA2 = self.config.model_type.startswith("gemma2")
     train_embed_tokens = self.embed_tokens.weight.requires_grad
 
     if IS_GEMMA:
@@ -642,17 +661,38 @@
         offloaded_gradient_checkpointing = True
     pass
 
+    # Gemma2 has alternating SWA and global attn
+    if IS_GEMMA2 and not hasattr(self, "SWA_mask"):
+        from transformers.modeling_attn_mask_utils import AttentionMaskConverter
+        n = self.config.max_position_embeddings
+        self.SWA_mask = AttentionMaskConverter(
+            is_causal = True,
+            sliding_window = self.config.sliding_window,
+        )\
+            .to_causal_4d(1, n, n, dtype = inputs_embeds.dtype, device = "cuda:0",)\
+            .squeeze(0).squeeze(0)
+
+        self.GA_mask = AttentionMaskConverter(
+            is_causal = True,
+        )\
+            .to_causal_4d(1, n, n, dtype = inputs_embeds.dtype, device = "cuda:0",)\
+            .squeeze(0).squeeze(0)
+    pass
+
     # Go through every layer!
     for idx, decoder_layer in enumerate(self.layers):
 
         if output_hidden_states: all_hidden_states += (hidden_states,)
         past_key_value = past_key_values[idx] if past_key_values is not None else None
 
+        mask = causal_mask
+        if IS_GEMMA2: mask = self.SWA_mask if (idx % 2 == 0) else self.GA_mask
+
         if offloaded_gradient_checkpointing:
             hidden_states = Unsloth_Offloaded_Gradient_Checkpointer.apply(
                 decoder_layer,
                 hidden_states,
-                causal_mask,
+                mask,
                 attention_mask,
                 position_ids,
                 past_key_values,
@@ -670,7 +710,7 @@ def custom_forward(*inputs):
             layer_outputs = torch.utils.checkpoint.checkpoint(
                 create_custom_forward(decoder_layer),
                 hidden_states,
-                causal_mask,
+                mask,
                 attention_mask,
                 position_ids,
                 use_reentrant = True,
@@ -681,7 +721,7 @@
         else:
             layer_outputs = decoder_layer(
                 hidden_states,
-                causal_mask=causal_mask,
+                causal_mask=mask,
                 attention_mask=attention_mask,
                 position_ids=position_ids,
                 past_key_value=past_key_value,
@@ -838,6 +878,7 @@
         logits = logits.to(self.config.torch_dtype)
 
     loss = None
+    logit_softcapping = getattr(self.config, "final_logit_softcapping", 0)
     if labels is not None:
         shift_logits = logits
         if not hasattr(self, "extra_ignored_labels"):
@@ -849,7 +890,12 @@
         loss = fast_cross_entropy_loss(
             logits = shift_logits,
             labels = shift_labels,
+            logit_softcapping = logit_softcapping,
         )
+    elif logit_softcapping != 0:
+        logits *= (1.0 / logit_softcapping)
+        torch.tanh(logits, out = logits)
+        logits *= logit_softcapping
     pass
 
     if not return_dict:
@@ -983,11 +1029,22 @@
         pass
         internal_model._flag_for_generation = True
 
+        # For newer HF
+        kwargs["cache_implementation"] = "dynamic"
+
+        # Set pad token
+        old_pad_token_id = getattr(model.config, "pad_token_id", None)
+        old_eos_token_id = getattr(model.config, "eos_token_id", None)
+        model.config.pad_token_id = old_eos_token_id
+
         # Autocasted
         with torch.autocast(device_type = device_type, dtype = dtype):
             output = generate(*args, **kwargs)
         pass
 
+        # Revert
+        model.config.pad_token_id = old_pad_token_id
+
         # Unset a flag for generation!
         internal_model = model
         while hasattr(internal_model, "model"):
@@ -1013,6 +1070,7 @@ def pre_patch():
         LlamaModel          .forward = LlamaModel_fast_forward
         LlamaForCausalLM    .forward = CausalLM_fast_forward(LlamaModel_fast_forward_inference)
         PeftModelForCausalLM.forward = PeftModelForCausalLM_fast_forward
+        fix_prepare_inputs_for_generation(LlamaForCausalLM)
 
         # Solves https://github.com/unslothai/unsloth/issues/168
         # Static KV Cache was introduced in 4.38.0, causing training to be much slower.
@@ -1056,7 +1114,7 @@ def from_pretrained(
         f"==((====))==  Unsloth: Fast {model_patcher.__name__[4:-5]} patching release {__version__}\n"\
         f"   \\\   /|    GPU: {gpu_stats.name}. Max memory: {max_memory} GB. Platform = {platform_system}.\n"\
         f"O^O/ \_/ \\    Pytorch: {torch.__version__}. CUDA = {gpu_stats.major}.{gpu_stats.minor}. CUDA Toolkit = {torch.version.cuda}.\n"\
-        f"\        /    Bfloat16 = {str(SUPPORTS_BFLOAT16).upper()}. Xformers = {xformers_version}. FA = {HAS_FLASH_ATTENTION}.\n"\
+        f"\        /    Bfloat16 = {str(SUPPORTS_BFLOAT16).upper()}. FA [Xformers = {xformers_version}. FA2 = {HAS_FLASH_ATTENTION}]\n"\
         f' "-____-"     Free Apache license: http://github.com/unslothai/unsloth'
     print(statistics)
     model_patcher.pre_patch()
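The SWA/GA masks in the hunk above are prebuilt once per model with transformers' `AttentionMaskConverter` at `max_position_embeddings` on `cuda:0`; the sliding-window variant additionally hides keys that fall outside the window behind each query. A small CPU sketch of the two mask shapes, with a window of 4 instead of 4096 so the band is visible:

```python
import torch
from transformers.modeling_attn_mask_utils import AttentionMaskConverter

n = 8  # tiny stand-in for config.max_position_embeddings
swa = AttentionMaskConverter(is_causal = True, sliding_window = 4) \
    .to_causal_4d(1, n, n, dtype = torch.float32, device = "cpu") \
    .squeeze(0).squeeze(0)
ga = AttentionMaskConverter(is_causal = True) \
    .to_causal_4d(1, n, n, dtype = torch.float32, device = "cpu") \
    .squeeze(0).squeeze(0)

# 0 marks a visible key; masked entries hold the dtype's most negative value
print((swa == 0).int())  # banded lower triangle: only recent keys visible
print((ga  == 0).int())  # full lower triangle: every past key visible
```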
@@ -1200,11 +1258,11 @@
         'nvidia-smi --query-gpu=memory.used --format=csv', shell = True)
     output = re.findall(rb'([\\d]{1,})[\\s]{1,}M', output)
     output = sum(int(x.decode('utf-8'))/1024 > 4 for x in output)
-    if output > 1: raise RuntimeError(
-        'Unsloth currently does not work on multi GPU setups - sadly we are a 2 brother team so '\\
+    if output > 1: print(
+        '********************\\nUnsloth currently does not work on multi GPU setups - sadly we are a 2 brother team so '\\
         'enabling it will require much more work, so we have to prioritize. Please understand!\\n'\\
-        'We do have a separate beta version, which you can contact us about!\\n'\\
-        'Thank you for your understanding and we appreciate it immensely!')
+        '********************\\nWe do have a separate beta version, which you can contact us about!\\n'\\
+        '********************\\nThank you for your understanding and we appreciate it immensely!')
     for _ in range(3):
         gc.collect()
         torch.cuda.empty_cache()"""
diff --git a/unsloth/models/loader.py b/unsloth/models/loader.py
index d87af0a18..9134d4a22 100644
--- a/unsloth/models/loader.py
+++ b/unsloth/models/loader.py
@@ -26,8 +26,11 @@
 major, minor = int(major), int(minor)
 SUPPORTS_FOURBIT = (major > 4) or (major == 4 and minor >= 37)
 SUPPORTS_GEMMA   = (major > 4) or (major == 4 and minor >= 38)
+SUPPORTS_GEMMA2  = (major > 4) or (major == 4 and minor >= 42)
 if SUPPORTS_GEMMA:
-    from .gemma import FastGemmaModel
+    from .gemma  import FastGemmaModel
+if SUPPORTS_GEMMA2:
+    from .gemma2 import FastGemma2Model
 del major, minor
 
@@ -138,6 +141,15 @@ def from_pretrained(
                     f"to obtain the latest transformers build, then restart this session."\
                 )
             dispatch_model = FastGemmaModel
+        elif model_type == "gemma2":
+            if not SUPPORTS_GEMMA2:
+                raise RuntimeError(
+                    f"Unsloth: Your transformers version of {transformers_version} does not support Gemma2.\n"\
+                    f"The minimum required version is 4.43.\n"\
+                    f'Try `pip install --upgrade "transformers>=4.43"`\n'\
+                    f"to obtain the latest transformers build, then restart this session."\
+                )
+            dispatch_model = FastGemma2Model
         elif model_type == "qwen2":
             dispatch_model = FastQwen2Model
         else:
diff --git a/unsloth/models/mapper.py b/unsloth/models/mapper.py
index 4b4006508..cec7332ec 100644
--- a/unsloth/models/mapper.py
+++ b/unsloth/models/mapper.py
@@ -191,6 +191,14 @@
     "mistralai/Codestral-22B-v0.1" : (
         "mistral-community/Codestral-22B-v0.1",
     ),
+    "unsloth/gemma-2-9b-bnb-4bit" : (
+        "unsloth/gemma-2-9b",
+        "google/gemma-2-9b",
+    ),
+    "unsloth/gemma-2-27b-bnb-4bit" : (
+        "unsloth/gemma-2-27b",
+        "google/gemma-2-27b",
+    ),
 }
 
 INT_TO_FLOAT_MAPPER = {}
diff --git a/unsloth/models/mistral.py b/unsloth/models/mistral.py
index d8bd85d47..e0b51a16e 100644
--- a/unsloth/models/mistral.py
+++ b/unsloth/models/mistral.py
@@ -275,7 +275,8 @@ def pre_patch():
         MistralModel        .forward = LlamaModel_fast_forward
         MistralForCausalLM  .forward = MistralForCausalLM_fast_forward
         PeftModelForCausalLM.forward = PeftModelForCausalLM_fast_forward
-
+        fix_prepare_inputs_for_generation(MistralForCausalLM)
+
         # Solves https://github.com/unslothai/unsloth/issues/168
         # Static KV Cache was introduced in 4.38.0, causing training to be much slower.
         # Inference can now be CUDAGraphed, but we shall retain the old rotary embeddings.
diff --git a/unsloth/models/qwen2.py b/unsloth/models/qwen2.py
index 984bf7ca0..5b9fff5d5 100644
--- a/unsloth/models/qwen2.py
+++ b/unsloth/models/qwen2.py
@@ -43,6 +43,7 @@ def pre_patch():
         Qwen2Model          .forward = LlamaModel_fast_forward
         Qwen2ForCausalLM    .forward = CausalLM_fast_forward(LlamaModel_fast_forward_inference)
         PeftModelForCausalLM.forward = PeftModelForCausalLM_fast_forward
+        fix_prepare_inputs_for_generation(Qwen2ForCausalLM)
 
         # Solves https://github.com/unslothai/unsloth/issues/168
         # Static KV Cache was introduced in 4.38.0, causing training to be much slower.
diff --git a/unsloth/tokenizer_utils.py b/unsloth/tokenizer_utils.py
index 50b09275a..8727ca03f 100644
--- a/unsloth/tokenizer_utils.py
+++ b/unsloth/tokenizer_utils.py
@@ -963,11 +963,11 @@ def patch_sft_trainer_tokenizer():
         "    'nvidia-smi --query-gpu=memory.used --format=csv', shell = True)\n"\
         "output = re.findall(rb'([\\d]{1,})[\\s]{1,}M', output)\n"\
         "output = sum(int(x.decode('utf-8'))/1024 > 4 for x in output)\n"\
-        "if output > 1: raise RuntimeError(\n"\
-        "    'Unsloth currently does not work on multi GPU setups - sadly we are a 2 brother team so '\\\n"\
+        "if output > 1: print(\n"\
+        "    '********************\\nUnsloth currently does not work on multi GPU setups - sadly we are a 2 brother team so '\\\n"\
         "    'enabling it will require much more work, so we have to prioritize. Please understand!\\n'\\\n"\
-        "    'We do have a separate beta version, which you can contact us about!\\n'\\\n"\
-        "    'Thank you for your understanding and we appreciate it immensely!')\n"\
+        "    '********************\\nWe do have a separate beta version, which you can contact us about!\\n'\\\n"\
+        "    '********************\\nThank you for your understanding and we appreciate it immensely!')\n"\
         "for _ in range(3):\n"\
         "    gc.collect()\n"\
         "    torch.cuda.empty_cache()\n"\